Updates from: 04/26/2021 03:05:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
Previously updated : 12/02/2020 Last updated : 04/21/2021
If you use Exchange Online, some users in your organization might be incorrectly
> [!TIP]
> To see if there is a duplicate proxy address, execute the following PowerShell cmdlet against Exchange Online:
> ```
-> Get-Recipient -ResultSize unlimited | where {$_.EmailAddresses -match "user@contoso.onmicrosoft.com"} | fL Name, RecipientType,emailaddresses
+> Get-Recipient -Filter "EmailAddresses -eq 'user@contoso.onmicrosoft.com'" | fl Name, RecipientType,Emailaddresses
> ```
> For more information about this problem, see ["Proxy address is already being used" error message in Exchange Online](https://support.microsoft.com/help/3042584/-proxy-address-address-is-already-being-used-error-message-in-exchange-online). The article also includes information on [how to connect to Exchange Online by using remote PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
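For reference, a minimal end-to-end sketch in PowerShell, assuming the ExchangeOnlineManagement module is installed and that `user@contoso.onmicrosoft.com` is the proxy address being checked:

```powershell
# Minimal sketch, assuming the ExchangeOnlineManagement module is installed
# and user@contoso.onmicrosoft.com is the proxy address you are checking.
Connect-ExchangeOnline

Get-Recipient -Filter "EmailAddresses -eq 'user@contoso.onmicrosoft.com'" |
    Format-List Name, RecipientType, EmailAddresses

Disconnect-ExchangeOnline -Confirm:$false
```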
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation.md
Previously updated : 04/06/2021 Last updated : 04/23/2021
Next, you'll configure federation with the identity provider configured in step
### To configure direct federation in Azure AD using PowerShell
-1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)). (If you need detailed steps, the quickstart for adding a guest user includes the section [Install the latest AzureADPreview module](b2b-quickstart-invite-powershell.md#install-the-latest-azureadpreview-module).)
+1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)). (If you need detailed steps, the Quickstart includes guidance in the [PowerShell module](b2b-quickstart-invite-powershell.md#prerequisites) section.)
2. Run the following command:

   ```powershell
   Connect-AzureAD
   ```
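Taken together, steps 1 and 2 look roughly like the following minimal sketch (assuming the preview module is not yet installed and that you sign in interactively):

```powershell
# Sketch of steps 1-2: install the AzureADPreview module, then sign in.
Install-Module -Name AzureADPreview -AllowClobber
Connect-AzureAD
```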
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/deploy-access-reviews.md
Title: Plan an Azure Active Directory Access Reviews deployment
description: Planning guide for a successful Access Reviews deployment documentationCenter: ''-+ editor:
na
ms.devlang: na Previously updated : 12/23/2020- Last updated : 04/16/2021+
The following videos may be useful as you learn about Access Reviews:
* [How to create Access Reviews in Azure AD](https://youtu.be/6KB3TZ8Wi40)
+* [How to create automatic Access Reviews for all guest users with access to Microsoft 365 groups in Azure AD](https://www.youtube.com/watch?v=3D2_YW2DwQ8)
+* [How to enable Access Reviews in Azure AD](https://youtu.be/X1SL2uubx9M)
* [How to review access using My Access](https://youtu.be/tIKdQhdHLXU)

### Licenses

You need a valid Azure AD Premium (P2) license for each person, other than Global Administrators or User Administrators, who will create or perform Access Reviews. For more information, see [Access Reviews license requirements](access-reviews-overview.md).
Clearly, IT will want to stay in control for all infrastructure-related access d
#### Customize email communication
-When you schedule a review, you nominate users who will perform this review. These reviewers then receive an email notification of new reviews assigned to them, as well as reminders before a review assigned to them expires.
-
-Administrators can choose to send this notification either half-way before the review expires or a day before it expires.
+When you schedule a review, you nominate users who will perform this review. These reviewers then receive an email notification of new reviews assigned to them, as well as reminders before a review assigned to them expires.
The email sent to reviewers can be customized to include a custom short message that encourages them to act on the review. We recommend you use the additional text to:

* Include a personal message to reviewers, so they understand it is sent by your Compliance or IT department.
-* Include a hyperlink or reference to internal information on what the expectations of the review are and additional reference or training material.
-
-* Include a link to instructions on [how to perform a self-review of access.](review-your-access.md)
+* Include a reference to internal information on what the expectations of the review are and additional reference or training material.
![Reviewer email](./media/deploy-access-review/2-plan-reviewer-email.png)
In your pilot, we recommend:
* Document any access removed as a part of the pilot in case you need to quickly restore it.
-* Monitor audit logs to ensure everything is all events are properly audited.
+* Monitor audit logs to ensure all events are properly audited.
For more information, see [best practices for a pilot](../fundamentals/active-directory-deployment-plans.md).
The administrative role required to create, manage, or read an Access Review dep
| Privileged roles in Azure (resources)| Global Administrator<p>User Administrator<p>Resource Owner| Creators |
| Access package| Global Administrator<p>Creator of Access Package| Global Administrator only |

For more information, see [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md).

### Who will review the access to the resource?
The creator of the access review decides at the time of creation who will perfor
* End-users who will each self-attest to their need for continued access.
+* Managers review their direct reports' access to the resource.
When creating an Access Review, administrators can choose one or more reviewers. All reviewers can start and carry out a review, choosing users for continued access to a resource or removing them.

### Components of an Access Review
To create an access review policy, you must have the following information.
* What communications should be sent based on actions taken?

**Example Access Review plan**

| Component| Value |
To create an access review policy, you must have the following information.
| **Resources to review**| Access to Microsoft Dynamics |
| **Review frequency**| Monthly |
| **Who performs review**| Dynamics business group Program Managers |
-| **Notification**| Email 24 hours prior to review to alias Dynamics-Pms<p>Include encouraging custom message to reviewers to secure their buy-in |
+| **Notification**| Email sent at start of review to alias Dynamics-Pms<p>Include encouraging custom message to reviewers to secure their buy-in |
| **Timeline**| 48 hours from notification |
|**Automatic actions**| Remove access from any account that has no interactive sign-in within 90 days, by removing user from security group dynamics-access. <p>*Perform actions if not reviewed within timeline.* |
| **Manual actions**| Reviewers may perform removals approval prior to automated action if desired. |
-| **Communications**| Send internal (member) users who are removed an email explaining they are removed and how to regain access. |
--
-
### Automate actions based on Access Reviews
Group membership can be reviewed by:
* Members of the group, attesting for themselves
+* Managers review their direct reports' access
### Group ownership

We recommend that group owners review membership, as they're best situated to know who needs access. Ownership of groups differs with the type of group:
Groups that are synchronized from on-premises Active Directory cannot have an ow
### Review membership of exclusion groups in Conditional Access policies
-There are times when Conditional Access policies designed to keep your network secure shouldn't apply to all users. For example, a Conditional Access policy that only allows users to sign in while on the corporate network may not apply to the Sales team, which travels extensively. In that case, the Sales team members would be put into a group and that group would be excluded from the Conditional Access policy.
-
-Review such a group membership regularly as the exclusion represents a potential risk if the wrong members are excluded from the requirement.
-
-You can [use Azure AD access reviews to manage users excluded from Conditional Access policies](conditional-access-exclusion.md).
+Go to [Use Azure AD access reviews to manage users excluded from Conditional Access policies](conditional-access-exclusion.md) to learn how to review membership of exclusion groups.
-### Review external user's group memberships
+### Review guest users' group memberships
-To minimize manual work and associated potential errors, consider using [Dynamic Groups](../enterprise-users/groups-create-rule.md) to assign group membership based on a user's attributes. You may want to create one or more Dynamic Groups for external users. The internal sponsor can act as a reviewer for membership in the group.
-
-Note: External users who are removed from a group as the result of an Access Review aren't deleted from the tenant.
-
-They can be deleted from a tenant either removed manually, or via a script.
+Go to [Manage guest access with Azure AD access reviews](https://docs.microsoft.com/azure/active-directory/governance/manage-guest-access-with-access-reviews) to learn how to review guest users' access to group memberships.
### Review access to on-premises groups
Access Reviews allows reviewers to attest whether users still need to be in a ro
* All Microsoft 365 and Dynamics Service Administration roles
-Roles selected here include permanent and eligible role.
+Roles reviewed include permanent and eligible assignments.
In the Reviewers section, select one or more people to review all the users. Or you can select to have the members review their own access.
To reduce the risk of stale access, administrators can enable periodic reviews o
| [Perform Access Reviews](entitlement-management-access-reviews-review-access.md)| Perform access reviews for other users that are assigned to an Access Package. |
| [Self-review assigned Access Package(s)](entitlement-management-access-reviews-self-review.md)| Self-review of assigned Access Package(s) |

> [!NOTE]
> End-users who self-review and say they no longer need access are not removed from the Access Package immediately. They are removed from the Access Package when the review ends or if an administrator stops the review.
Access needs to groups and applications for employees and guests likely change o
| [Complete Access Review](complete-access-review.md)| View an Access Review and apply the results |
| [Take action for on-premises groups](https://github.com/microsoft/access-reviews-samples/tree/master/AzureADAccessReviewsOnPremises)| Sample PowerShell script to act on Access Reviews for on-premises groups. |

### Review Azure AD roles

To reduce the risk associated with stale role assignments, you should regularly review access of privileged Azure AD roles.
Learn about the below related technologies.
* [What is Azure AD Entitlement Management?](entitlement-management-overview.md)
-* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
+* [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
active-directory Manage Guest Access With Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/manage-guest-access-with-access-reviews.md
Title: Manage guest access with access reviews - Azure AD
description: Manage guest users as members of a group or assigned to an application with Azure Active Directory access reviews documentationcenter: ''-+ editor: markwahl-msft
na
ms.devlang: na Previously updated : 12/23/2020- Last updated : 4/16/2021+
You can review either:
- A group in Azure AD that has one or more guests as members. - An application connected to Azure AD that has one or more guest users assigned to it.
+When reviewing guest user access to Microsoft 365 groups, you can either create a review for each group individually, or turn on automatic, recurring access reviews of guest users across all Microsoft 365 groups. The following video provides more information on recurring access reviews of guest users:
+
+> [!VIDEO https://www.youtube.com/watch?v=3D2_YW2DwQ8]
+ You can then decide whether to ask each guest to review their own access or to ask one or more users to review every guest's access. These scenarios are covered in the following sections.
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-features.md
Azure Arc enabled SQL Managed Instance shares a common code base with the latest
|Feature|Azure Arc enabled SQL Managed Instance|
|-|-|
-|Log shipping|Yes|
-|Backup compression|Yes|
-|Database snapshot|Yes|
|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available |
|Always On availability groups<sup>2</sup>|HA capabilities are planned.|
|Basic availability groups <sup>2</sup>|HA capabilities are planned.|
|Minimum replica commit availability group <sup>2</sup>|HA capabilities are planned.|
|Clusterless availability group|Yes|
+|Backup database | Yes - `COPY_ONLY` See [BACKUP - (Transact-SQL)](/sql/t-sql/statements/backup-transact-sql?view=azuresqldb-mi-current&preserve-view=true)|
+|Backup compression|Yes|
+|Backup mirror |Yes|
+|Backup encryption|Yes|
+|Backup to Azure (backup to URL)|Yes|
+|Database snapshot|Yes|
+|Fast recovery|Yes|
+|Hot add memory and CPU|Yes|
+|Log shipping|Yes|
|Online page and file restore|Yes|
|Online indexing|Yes|
-|Resumable online index rebuilds|Yes|
|Online schema change|Yes|
-|Fast recovery|Yes|
-|Mirrored backups|Yes|
-|Hot add memory and CPU|Yes|
-|Encrypted backup|Yes|
-|Hybrid backup to Azure (backup to URL)|Yes|
+|Resumable online index rebuilds|Yes|
<sup>1</sup> In the scenario where there is pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes).
-<sup>2</sup> Future releases will provide AG capabilities
+<sup>2</sup> Future releases will provide AG capabilities.
+ ### <a name="RDBMSSP"></a> RDBMS Scalability and Performance
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
You will delete the previous CRDs as you cleanup past installations. See [Cleanu
- Deployment of Azure Arc enabled SQL Managed Instance in direct mode can only be done from the Azure portal, and is not available from tools such as azdata, Azure Data Studio, or kubectl.
- Deployment of Azure Arc enabled PostgreSQL Hyperscale in direct mode is currently not available.
- Automatic upload of usage data in direct connectivity mode will not succeed if using proxy via `--proxy-cert <path-to-cert-file>`.
+- Azure Arc enabled SQL Managed instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified.
## February 2021
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-event-grid-blob-trigger.md
This article demonstrates how to debug and deploy a local Event Grid Blob trigge
1. The default url for your event grid blob trigger is:
+ # [C#](#tab/csharp)
+ ```http http://localhost:7071/runtime/webhooks/blobs?functionName={functionname} ```
+ # [Python](#tab/python)
+
+ ```http
+ http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.{functionname}
+ ```
+
+
+ Note your function app's name and that the trigger type is a blob trigger, which is indicated by `blobs` in the url. This will be needed when setting up endpoints later in the how to guide. 1. Once the function is created, add the Event Grid source parameter.
This article demonstrates how to debug and deploy a local Event Grid Blob trigge
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes"); } ```
-
+ # [Python](#tab/python) Add **"source": "EventGrid"** to the function.json binding data.
Once the Blob Trigger recognizes a new file is uploaded to the storage container
## Deployment
-As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your deployed app endpoint. To update an endpoint, follow the steps in [Add a storage event](#add-a-storage-event) and use the below for the webhook URL in step 5. The `<BLOB-EXTENSION-KEY>` is the function key for your blob trigger function.
+As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your deployed app endpoint. To update an endpoint, follow the steps in [Add a storage event](#add-a-storage-event) and use the below for the webhook URL in step 5. The `<BLOB-EXTENSION-KEY>` can be found in the **App Keys** section from the left menu of your **Function App**.
+
+# [C#](#tab/csharp)
+
+```http
+https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
+```
+
+# [Python](#tab/python)
```http
-https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Function1&code=<BLOB-EXTENSION-KEY>
+https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
```

## Clean up resources

To clean up the resources created in this article, delete the event grid subscription you created in this tutorial.
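If you prefer to script the cleanup, a minimal sketch using the Az.EventGrid PowerShell module follows; the storage account resource ID and subscription name are placeholders, not values from this article:

```powershell
# Sketch: delete the Event Grid subscription created for the blob trigger.
# <STORAGE-ACCOUNT-RESOURCE-ID> and <EVENT-SUBSCRIPTION-NAME> are placeholders.
Remove-AzEventGridSubscription `
    -ResourceId "<STORAGE-ACCOUNT-RESOURCE-ID>" `
    -EventSubscriptionName "<EVENT-SUBSCRIPTION-NAME>"
```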
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
az monitor log-analytics workspace data-export create --resource-group resourceG
Use the following command to create a data export rule to a specific event hub using CLI. All tables are exported to the provided event hub name. ```azurecli
-$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventHubName/eventhub-name'
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId ```
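If you'd rather not type the event hub resource ID by hand, a sketch that resolves it with `Get-AzResource` before running the same command; the resource names are the placeholders used above:

```powershell
# Sketch: look up the event hub resource ID, then reuse the documented CLI command.
# Resource group, namespace, event hub, workspace, and rule names are placeholders.
$eventHub = Get-AzResource -ResourceGroupName "resource-group-name" `
    -ResourceType "Microsoft.EventHub/namespaces/eventhubs" `
    -Name "namespaces-name/eventhub-name"
$eventHubResourceId = $eventHub.ResourceId

az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId
```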
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
You've now created a new private endpoint that is connected to this AMPLS.
## Review and validate your Private Link setup ### Reviewing your Endpoint's DNS settings
-The Private Endpoint you created should now have an four DNS zones configured:
-
-[![Screenshot of Private Endpoint DNS zones.](./media/private-link-security/private-endpoint-dns-zones.png)](./media/private-link-security/private-endpoint-dns-zones-expanded.png#lightbox)
+The Private Endpoint you created should now have five DNS zones configured:
* privatelink-monitor-azure-com
* privatelink-oms-opinsights-azure-com
* privatelink-ods-opinsights-azure-com
* privatelink-agentsvc-azure-automation-net
+* privatelink-blob-core-windows-net
> [!NOTE]
> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses shown in the below images are only examples. Your configuration should instead show private IPs from your own network.
You can automate the process described earlier using Azure Resource Manager temp
To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
-To manage network access, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/monitor/app-insights/component).
+To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+
+### Example ARM template
+The below ARM template creates:
+* A private link scope (AMPLS) named "my-scope"
+* A Log Analytics workspace named "my-workspace"
+* A scoped resource in the "my-scope" AMPLS, named "my-workspace-connection"
+
+```
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "private_link_scope_name": {
+ "defaultValue": "my-scope",
+ "type": "String"
+ },
+ "workspace_name": {
+ "defaultValue": "my-workspace",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.insights/privatelinkscopes",
+ "apiVersion": "2019-10-17-preview",
+ "name": "[parameters('private_link_scope_name')]",
+ "location": "global",
+ "properties": {}
+ },
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-10-01",
+ "name": "[parameters('workspace_name')]",
+ "location": "westeurope",
+ "properties": {
+ "sku": {
+ "name": "pergb2018"
+ },
+ "publicNetworkAccessForIngestion": "Enabled",
+ "publicNetworkAccessForQuery": "Enabled"
+ }
+ },
+ {
+ "type": "microsoft.insights/privatelinkscopes/scopedresources",
+ "apiVersion": "2019-10-17-preview",
+ "name": "[concat(parameters('private_link_scope_name'), '/', concat(parameters('workspace_name'), '-connection'))]",
+ "dependsOn": [
+ "[resourceId('microsoft.insights/privatelinkscopes', parameters('private_link_scope_name'))]",
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
+ ],
+ "properties": {
+ "linkedResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspace_name'))]"
+ }
+ }
+ ]
+}
+```
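One way to deploy the template above is with the Az PowerShell module; a sketch in which the resource group name and template file name are placeholders:

```powershell
# Sketch: deploy the AMPLS template above to an existing resource group.
# "my-resource-group" and ampls-template.json are placeholder names.
New-AzResourceGroupDeployment `
    -ResourceGroupName "my-resource-group" `
    -TemplateFile ".\ampls-template.json" `
    -TemplateParameterObject @{ private_link_scope_name = "my-scope"; workspace_name = "my-workspace" }
```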
## Collect custom logs and IIS log over Private Link
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Title: Configure export policy for Azure NetApp Files NFS or dual-protocol volume - Azure NetApp Files
+ Title: Configure export policy for Azure NetApp Files NFS or dual-protocol volumes - Azure NetApp Files
description: Describes how to configure export policy to control access to an NFS volume using Azure NetApp Files
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 12/01/2020 Last updated : 04/23/2021 # Move guidance for virtual machines
If [soft delete](../../../backup/soft-delete-virtual-machines.md) is enabled for
6. After the delete operation is complete, you can move your virtual machine. 3. Move the VM to the target resource group.
-4. Resume the backup.
+4. Reconfigure the backup.
### PowerShell
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
Previously updated : 10/22/2020 Last updated : 04/24/2021 # Connectivity architecture for Azure SQL Managed Instance
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. Th
- **Dedicated subnet:** The SQL Managed Instance subnet can't contain any other cloud service that's associated with it, and it can't be a gateway subnet. The subnet can't contain any resource but SQL Managed Instance, and you can't later add other types of resources in the subnet.
- **Subnet delegation:** The SQL Managed Instance subnet needs to be delegated to the `Microsoft.Sql/managedInstances` resource provider.
- **Network security group (NSG):** An NSG needs to be associated with the SQL Managed Instance subnet. You can use an NSG to control access to the SQL Managed Instance data endpoint by filtering traffic on port 1433 and ports 11000-11999 when SQL Managed Instance is configured for redirect connections. The service will automatically provision and keep current [rules](#mandatory-inbound-security-rules-with-service-aided-subnet-configuration) required to allow uninterrupted flow of management traffic.
-- **User defined route (UDR) table:** A UDR table needs to be associated with the SQL Managed Instance subnet. You can add entries to the route table to route traffic that has on-premises private IP ranges as a destination through the virtual network gateway or virtual network appliance (NVA). Service will automatically provision and keep current [entries](#user-defined-routes-with-service-aided-subnet-configuration) required to allow uninterrupted flow of management traffic.
+- **User defined route (UDR) table:** A UDR table needs to be associated with the SQL Managed Instance subnet. You can add entries to the route table to route traffic that has on-premises private IP ranges as a destination through the virtual network gateway or virtual network appliance (NVA). Service will automatically provision and keep current [entries](#mandatory-user-defined-routes-with-service-aided-subnet-configuration) required to allow uninterrupted flow of management traffic.
- **Sufficient IP addresses:** The SQL Managed Instance subnet must have at least 32 IP addresses. For more information, see [Determine the size of the subnet for SQL Managed Instance](vnet-subnet-determine-size.md).

You can deploy managed instances in [the existing network](vnet-existing-add-subnet.md) after you configure it to satisfy [the networking requirements for SQL Managed Instance](#network-requirements). Otherwise, create a [new network and subnet](virtual-network-subnet-create-arm-template.md).

> [!IMPORTANT]
> When you create a managed instance, a network intent policy is applied on the subnet to prevent noncompliant changes to networking setup. After the last instance is removed from the subnet, the network intent policy is also removed. The rules below are for informational purposes only, and you should not deploy them using ARM template / PowerShell / CLI. If you want to use the latest official template you could always [retrieve it from the portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).

### Mandatory inbound security rules with service-aided subnet configuration
+These rules are necessary to ensure inbound management traffic flow. See [paragraph above](#high-level-connectivity-architecture) for more information on connectivity architecture and management traffic.
| Name |Port |Protocol|Source |Destination|Action|
||-|--|--|--||
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. Th
|health_probe|Any |Any |AzureLoadBalancer|MI SUBNET |Allow |

### Mandatory outbound security rules with service-aided subnet configuration
+These rules are necessary to ensure outbound management traffic flow. See [paragraph above](#high-level-connectivity-architecture) for more information on connectivity architecture and management traffic.
| Name |Port |Protocol|Source |Destination|Action|
||--|--|--|--||
|management |443, 12000 |TCP |MI SUBNET |AzureCloud |Allow |
|mi_subnet |Any |Any |MI SUBNET |MI SUBNET |Allow |
-### User defined routes with service-aided subnet configuration
+### Mandatory user defined routes with service-aided subnet configuration
+These routes are necessary to ensure that management traffic is routed directly to a destination. See [paragraph above](#high-level-connectivity-architecture) for more information on connectivity architecture and management traffic.
|Name|Address prefix|Next hop|
|-|--|-|
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. Th
|mi-storage-internet|Storage|Internet|
|mi-storage-REGION-internet|Storage.REGION|Internet|
|mi-storage-REGION_PAIR-internet|Storage.REGION_PAIR|Internet|
+|mi-azureactivedirectory-internet|AzureActiveDirectory|Internet|
||||

\* MI SUBNET refers to the IP address range for the subnet in the form x.x.x.x/y. You can find this information in the Azure portal, in subnet properties.
The following virtual network features are currently *not supported* with SQL Ma
- **AzurePlatformDNS**: Using the AzurePlatformDNS [service tag](../../virtual-network/service-tags-overview.md) to block platform DNS resolution would render SQL Managed Instance unavailable. Although SQL Managed Instance supports customer-defined DNS for DNS resolution inside the engine, there is a dependency on platform DNS for platform operations.
- **NAT gateway**: Using [Azure Virtual Network NAT](../../virtual-network/nat-overview.md) to control outbound connectivity with a specific public IP address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with Virtual Network NAT.
- **IPv6 for Azure Virtual Network**: Deploying SQL Managed Instance to [dual stack IPv4/IPv6 virtual networks](../../virtual-network/ipv6-overview.md) is expected to fail. Associating network security group (NSG) or route table (UDR) containing IPv6 address prefixes to SQL Managed Instance subnet, or adding IPv6 address prefixes to NSG or UDR that is already associated with Managed instance subnet, would render SQL Managed Instance unavailable. SQL Managed Instance deployments to a subnet with NSG and UDR that already have IPv6 prefixes are expected to fail.
-- **Azure DNS private zones with a name reserved for Microsoft services**: Following is the list of reserved names: windows.net, database.windows.net, core.windows.net, blob.core.windows.net, table.core.windows.net, management.core.windows.net, monitoring.core.windows.net, queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net, servicebus.windows.net, vault.azure.net. Deploying SQL Managed Instance to a virtual network with associated [Azure DNS private zone](../../dns/private-dns-privatednszone.md) with a name reserved for Microsoft services would fail. Associating Azure DNS private zone with reserved name with a virtual network containing Managed Instance, would render SQL Managed Instance unavailable. Please folow [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) for the proper Private Link configuration.
+- **Azure DNS private zones with a name reserved for Microsoft services**: Following is the list of reserved names: windows.net, database.windows.net, core.windows.net, blob.core.windows.net, table.core.windows.net, management.core.windows.net, monitoring.core.windows.net, queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net, servicebus.windows.net, vault.azure.net. Deploying SQL Managed Instance to a virtual network with associated [Azure DNS private zone](../../dns/private-dns-privatednszone.md) with a name reserved for Microsoft services would fail. Associating Azure DNS private zone with reserved name with a virtual network containing Managed Instance, would render SQL Managed Instance unavailable. Please follow [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) for the proper Private Link configuration.
## Next steps
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-customer-managed-key.md
az batch account set \
- **How can I rotate my keys?** Customer-managed keys are not automatically rotated. To rotate the key, update the Key Identifier that the account is associated with. - **After I restore access how long will it take for the Batch account to work again?** It can take up to 10 minutes for the account to be accessible again once access is restored. - **While the Batch Account is unavailable what happens to my resources?** Any pools that are running when Batch access to customer-managed keys is lost will continue to run. However, the nodes will transition into an unavailable state, and tasks will stop running (and be requeued). Once access is restored, nodes will become available again and tasks will be restarted.-- **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Service Configuration Pools, no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration Pools, the OS and any specified data disks will be encrypted with a Microsoft platform managed key by default. Currently, you cannot specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)
+- **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Services Configuration pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration pools, the OS and any specified data disks will be encrypted with a Microsoft platform managed key by default. Currently, you cannot specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)
- **Is the system-assigned managed identity on the Batch account available on the compute nodes?** No. The system-assigned managed identity is currently used only for accessing the Azure Key Vault for the customer-managed key. To use a user-assigned managed identity on compute nodes, see [Configure managed identities in Batch pools](managed-identity-pools.md). ## Next steps
batch Batch Low Pri Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-low-pri-vms.md
Low-priority VMs are offered at a significantly reduced price compared with dedi
> [!NOTE] > [Spot VMs](https://azure.microsoft.com/pricing/spot/) are now available for [single instance VMs](../virtual-machines/spot-vms.md) and [VM scale sets](../virtual-machine-scale-sets/use-spot.md). Spot VMs are an evolution of low-priority VMs, but differ in that pricing can vary and an optional maximum price can be set when allocating Spot VMs. >
->Azure Batch pools will start supporting Spot VMs in the future, with new versions of the [Batch APIs and tools](./batch-apis-tools.md). After Spot VM support is available, low-priority VMs will be deprecated - they will continue to be supported using current APIs and tool versions for at least 12 months, to allow sufficient time for migration to Spot VMs.
+>Azure Batch pools will start supporting Spot VMs in the future, with new versions of the [Batch APIs and tools](./batch-apis-tools.md). After Spot VM support is available, low-priority VMs will be deprecated, though they will continue to be supported using current APIs and tool versions for at least 12 months, to allow sufficient time for migration to Spot VMs.
>
-> Spot VMs will only be supported for Virtual Machine Configuration pools. To use Spot VMs, any Cloud Service Configuration pools will need to be [migrated to Virtual Machine Configuration pools](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+> Spot VMs will only be supported for Virtual Machine Configuration pools. To use Spot VMs, any Cloud Services Configuration pools will need to be [migrated to Virtual Machine Configuration pools](batch-pool-cloud-service-to-virtual-machine-configuration.md).
## Batch support for low-priority VMs
To view these metrics in the Azure portal
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
-- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with **Cloud Service configuration** pools, plan to migrate to [**Virtual Machine configuration** pools](nodes-and-pools.md#configurations) instead.
+- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with **Cloud Services Configuration** pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), plan to migrate to [**Virtual Machine configuration** pools](nodes-and-pools.md#configurations) instead.
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-compute-intensive-sizes.md
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
<sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
-### Windows pools - Virtual machine configuration
+### Windows pools - Virtual Machine Configuration
| Size | Capability | Operating systems | Required software | Pool settings |
| -- | -- | -- | -- | -- |
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
<sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
-### Windows pools - Cloud services configuration
+### Windows pools - Cloud Services Configuration
-> [!NOTE]
-> N-series sizes are not supported in Batch pools with the Cloud Service configuration.
->
+> [!WARNING]
+> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead.
| Size | Capability | Operating systems | Required software | Pool settings |
| -- | - | -- | -- | -- |
| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md) | RDMA | Windows Server 2016, 2012 R2, 2012, or<br/>2008 R2 (Guest OS family) | Microsoft MPI 2012 R2 or later, or<br/>Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication,<br/> disable concurrent task execution |
+> [!NOTE]
+> N-series sizes are not supported in Cloud Services Configuration pools.
## Pool configuration options

To configure a specialized VM size for your Batch pool, you have several options to install required software or drivers:
batch Batch Pool Create Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-create-event.md
Last updated 10/08/2020
This event is emitted once a pool has been created. The content of the log will expose general information about the pool. Note that if the target size of the pool is greater than 0 compute nodes, a pool resize start event will follow immediately after this event.
- The following example shows the body of a pool create event for a pool created using the `CloudServiceConfiguration` property.
+ The following example shows the body of a pool create event.
``` {
Last updated 10/08/2020
|`displayName`|String|The display name of the pool.| |`vmSize`|String|The size of the virtual machines in the pool. All virtual machines in a pool are the same size. <br/><br/> For information about available sizes of virtual machines for Cloud Services pools (pools created with cloudServiceConfiguration), see [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md). Batch supports all Cloud Services VM sizes except `ExtraSmall`.<br/><br/> For information about available VM sizes for pools using images from the Virtual Machines Marketplace (pools created with virtualMachineConfiguration) see [Sizes for Virtual Machines](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) (Linux) or [Sizes for Virtual Machines](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) (Windows). Batch supports all Azure VM sizes except `STANDARD_A0` and those with premium storage (`STANDARD_GS`, `STANDARD_DS`, and `STANDARD_DSV2` series).| |`imageType`|String|The deployment method for the image. Supported values are `virtualMachineConfiguration` or `cloudServiceConfiguration`|
-|[`cloudServiceConfiguration`](#bk_csconf)|Complex Type|The cloud service configuration for the pool.|
+|[`cloudServiceConfiguration`](#bk_csconf)|Complex Type|The cloud services configuration for the pool.|
|[`virtualMachineConfiguration`](#bk_vmconf)|Complex Type|The virtual machine configuration for the pool.| |[`networkConfiguration`](#bk_netconf)|Complex Type|The network configuration for the pool.| |`resizeTimeout`|Time|The timeout for allocation of compute nodes to the pool specified for the last resize operation on the pool. (The initial sizing when the pool is created counts as a resize.)|
Last updated 10/08/2020
### <a name="bk_csconf"></a> cloudServiceConfiguration
+> [!WARNING]
+> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead.
|Element name|Type|Notes|
||-|--|
|`osFamily`|String|The Azure Guest OS family to be installed on the virtual machines in the pool.<br /><br /> Possible values are:<br /><br /> **2** – OS Family 2, equivalent to Windows Server 2008 R2 SP1.<br /><br /> **3** – OS Family 3, equivalent to Windows Server 2012.<br /><br /> **4** – OS Family 4, equivalent to Windows Server 2012 R2.<br /><br /> For more information, see [Azure Guest OS Releases](../cloud-services/cloud-services-guestos-update-matrix.md#releases).|
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
Some VM series, such as [Mv2](../virtual-machines/mv2-series.md), can only be used with [generation 2 VM images](../virtual-machines/generation-2.md). Generation 2 VM images are specified like any VM image, using the 'sku' property of the ['imageReference'](/rest/api/batchservice/pool/add#imagereference) configuration; the 'sku' strings have a suffix such as "-g2" or "-gen2". To get a list of VM images supported by Batch, including generation 2 images, use the ['list supported images'](/rest/api/batchservice/account/listsupportedimages) API, [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images).
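For example, a minimal sketch that lists generation 2 images with the Batch PowerShell cmdlets; the account and resource group names are placeholders, and the SKU filter is only a heuristic for gen2 image names:

```powershell
# Sketch: list Batch-supported images whose SKU looks like a generation 2 image.
# "mybatchaccount" and "myresourcegroup" are placeholder names.
$context = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "myresourcegroup"
Get-AzBatchSupportedImage -BatchContext $context |
    Where-Object { $_.ImageReference.Sku -match '(-g2|gen2)' } |
    Select-Object -ExpandProperty ImageReference
```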
-### Pools in Cloud Service configuration
+### Pools in Cloud Services Configuration
-Batch pools in the Cloud Service configuration support all [VM sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md) **except** for the following:
+> [!WARNING]
+> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead.
+
+Batch pools in Cloud Services Configuration support all [VM sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md) **except** for the following:
| VM series | Unsupported sizes |
||-|
Batch pools in the Cloud Service configuration support all [VM sizes for Cloud S
- **Quotas** - The [cores quotas](batch-quota-limit.md#resource-quotas) in your Batch account can limit the number of nodes of a given size you can add to a Batch pool. When needed, you can [request a quota increase](batch-quota-limit.md#increase-a-quota).
-- **Pool configuration** - In general, you have more VM size options when you create a pool in the Virtual Machine configuration, compared with the Cloud Service configuration.
+- **Pool configuration** - In general, you have more VM size options when you create a pool in Virtual Machine configuration, compared with Cloud Services Configuration.
## Supported VM images
batch Batch Rendering Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-applications.md
Where applicable, pay-for-use licensing is available for the pre-installed rende
Some applications only support Windows, but most are supported on both Windows and Linux. > [!IMPORTANT]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
## Applications on latest CentOS 7 rendering image
batch Batch Rendering Functionality https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-functionality.md
Most rendering applications will require licenses obtained from a license server
## Batch pools using rendering VM images > [!IMPORTANT]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
### Rendering application installation
batch Batch Rendering Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-using.md
# Using Azure Batch rendering > [!IMPORTANT]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
There are several ways to use Azure Batch rendering:
batch Batch Task Output File Conventions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-task-output-file-conventions.md
Azure Batch provides more than one way to persist task output. The File Conventi
- You can easily modify the code for the application that your task is running to persist files using the File Conventions library.
- You want to stream data to Azure Storage while the task is still running.
-- You want to persist data from pools created with either the cloud service configuration or the virtual machine configuration.
+- You want to persist data from pools.
- Your client application or other tasks in the job needs to locate and download task output files by ID or by purpose. - You want to view task output in the Azure portal.
batch Batch User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-user-accounts.md
> > To connect to a node running the Linux virtual machine configuration via SSH, see [Install and configure xrdp to use Remote Desktop with Ubuntu](../virtual-machines/linux/use-remote-desktop.md). To connect to nodes running Windows via RDP, see [How to connect and sign on to an Azure virtual machine running Windows](../virtual-machines/windows/connect-logon.md). >
-> To connect to a node running the cloud service configuration via RDP, see [Enable Remote Desktop Connection for a Role in Azure Cloud Services](../cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md).
+> To connect to a node running the Cloud Services Configuration via RDP, see [Enable Remote Desktop Connection for a Role in Azure Cloud Services](../cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md).
A task in Azure Batch always runs under a user account. By default, tasks run under standard user accounts, without administrator permissions. For certain scenarios, you may want to configure the user account under which you want a task to run. This article discusses the types of user accounts and how to configure them for your scenario.
Named user accounts enable password-less SSH between Linux nodes. You can use a
### Create named user accounts
-To create named user accounts in Batch, add a collection of user accounts to the pool. The following code snippets show how to create named user accounts in .NET, Java, and Python. These code snippets show how to create both admin and non-admin named accounts on a pool. The examples create pools using the cloud service configuration, but you use the same approach when creating a Windows or Linux pool using the virtual machine configuration.
+To create named user accounts in Batch, add a collection of user accounts to the pool. The following code snippets show how to create named user accounts in .NET, Java, and Python. These code snippets show how to create both admin and non-admin named accounts on a pool.
#### Batch .NET example (Windows)
To create named user accounts in Batch, add a collection of user accounts to the
CloudPool pool = null; Console.WriteLine("Creating pool [{0}]...", poolId);
-// Create a pool using the cloud service configuration.
+// Create a pool using Virtual Machine Configuration.
pool = batchClient.PoolOperations.CreatePool( poolId: poolId, targetDedicatedComputeNodes: 3,
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/overview.md
[Custom Translator](https://portal.customtranslator.azure.ai) is a feature of the Microsoft Translator service, which enables Translator enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.
-Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through the same cloud-based, [secure](https://cognitive.uservoice.com/knowledgebase/articles/1147537-api-and-customization-confidentiality), high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl), that powers billions of translations every day.
+Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through the same cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl), that powers billions of translations every day.
Custom Translator supports more than three dozen languages, and maps directly to the languages available for NMT. For a complete list, see [Microsoft Translator Languages](../language-support.md#customization).
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-r
Need to update this with updated APIM links when available -->
-The second step is to call the [**Get Analyze idDocument Result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/GetAnalyzeFormResult) operation. This operation takes as input the Result ID that was created by the Analyze ID operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+The second step is to call the [**Get Analyze idDocument Result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5f74a7738978e467c5fb8707) operation. This operation takes as input the Result ID that was created by the Analyze ID operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
|Field| Type | Possible values | |:--|:-:|:-|
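For reference, a minimal polling sketch for the step above, assuming `$resultUrl` holds the URL returned in the `Operation-Location` header of the Analyze ID call and `$apiKey` holds your Form Recognizer key:

```powershell
# Sketch: poll Get Analyze idDocument Result every 3 seconds until it finishes.
# $resultUrl and $apiKey are assumed to come from the earlier Analyze ID call.
do {
    Start-Sleep -Seconds 3
    $result = Invoke-RestMethod -Uri $resultUrl -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey }
    Write-Host "status: $($result.status)"
} while ($result.status -in @("notStarted", "running"))
```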
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 03/25/2021 Last updated : 04/22/2021 # Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Executor-InvalidType - **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported.-- **Cause**: The data type for the declared type isn't compatible with the actual parameter value.
+- **Cause**: Data types are incompatible between the declared type and the actual parameter value.
- **Recommendation**: Check that the parameter values passed into the data flow match the declared type. ### Error code: DF-Executor-ParseError-- **Message**: Expression cannot be parsed
+- **Message**: Expression cannot be parsed.
- **Cause**: An expression generated parsing errors because of incorrect formatting. - **Recommendation**: Check the formatting in the expression.
This article explores common troubleshooting methods for mapping data flows in A
- **Message**: Please make sure that the access key in your Linked Service is correct - **Cause**: The account name or access key is incorrect.-- **Recommendation**: Ensure the account name or access key specified in your linked service is correct.
+- **Recommendation**: Ensure that the account name or access key specified in your linked service is correct.
### Error code: DF-Executor-ColumnUnavailable - **Message**: Column name used in expression is unavailable or invalid.
This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Executor-StoreIsNotDefined - **Message**: The store configuration is not defined. This error is potentially caused by invalid parameter assignment in the pipeline.-- **Cause**: Undetermined.
+- **Cause**: An invalid store configuration is provided.
- **Recommendation**: Check parameter value assignment in the pipeline. A parameter expression might contain invalid characters.
This article explores common troubleshooting methods for mapping data flows in A
### Error code: DF-Xml-InvalidValidationMode - **Message**: Invalid xml validation mode is provided.
+- **Cause**: An invalid XML validation mode is provided.
- **Recommendation**: Check the parameter value and specify the right validation mode. ### Error code: DF-Xml-InvalidDataField - **Message**: The field for corrupt records must be string type and nullable.-- **Recommendation**: Make sure that the column `\"_corrupt_record\"` in the source project has a string data type.
+- **Cause**: The column `\"_corrupt_record\"` in the XML source has an invalid data type.
+- **Recommendation**: Make sure that the column `\"_corrupt_record\"` in the XML source has a string data type and is nullable.
### Error code: DF-Xml-MalformedFile-- **Message**: Malformed xml in 'FailFastMode'.
+- **Message**: Malformed xml with path in FAILFAST mode.
+- **Cause**: The XML at the specified path is malformed, and FAILFAST mode is used.
- **Recommendation**: Update the content of the XML file to the right format.
-### Error code: DF-Xml-InvalidDataType
-- **Message**: XML Element has sub elements or attributes and it can't be converted.- ### Error code: DF-Xml-InvalidReferenceResource-- **Message**: Reference resource in the xml data file cannot be resolved.
+- **Message**: Reference resource in xml data file cannot be resolved.
+- **Cause**: The reference resource in the XML data file cannot be resolved.
- **Recommendation**: You should check the reference resource in the XML data file. ### Error code: DF-Xml-InvalidSchema - **Message**: Schema validation failed.
+- **Cause**: An invalid schema is provided on the XML source.
+- **Recommendation**: Check the schema settings on the XML source to make sure that the schema is a subset of the source data's schema.
### Error code: DF-Xml-UnsupportedExternalReferenceResource - **Message**: External reference resource in xml data file is not supported.
+- **Cause**: The external reference resource in the XML data file is not supported.
- **Recommendation**: Update the XML file content to remove the external reference resource, which is not currently supported. ### Error code: DF-GEN2-InvalidAccountConfiguration - **Message**: Either one of account key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken should be specified.-- **Recommendation**: Configure the right account in the related GEN2 linked service.
+- **Cause**: An invalid credential is provided in the ADLS Gen2 linked service.
+- **Recommendation**: Update the ADLS Gen2 linked service to have the right credential configuration.
### Error code: DF-GEN2-InvalidAuthConfiguration-- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified. -- **Recommendation**: Choose the right auth type in the related GEN2 linked service.
+- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
+- **Cause**: An invalid authentication method is provided in the ADLS Gen2 linked service.
+- **Recommendation**: Update the ADLS Gen2 linked service to use one of the three authentication methods: Key, ServicePrincipal, or MI.
### Error code: DF-GEN2-InvalidServicePrincipalCredentialType-- **Message**: ServicePrincipalCredentialType is invalid.-
-### Error code: DF-GEN2-InvalidDataType
-- **Message**: Cloud type is invalid.
+- **Message**: Service principal credential type is invalid.
+- **Cause**: The service principal credential type is invalid.
+- **Recommendation**: Please update the ADLS Gen2 linked service to set the right service principal credential type.
### Error code: DF-Blob-InvalidAccountConfiguration-- **Message**: Either one of account key or sas_token should be specified.
+- **Message**: Either one of account key or sas token should be specified.
+- **Cause**: An invalid credential is provided in the Azure Blob linked service.
+- **Recommendation**: Use either account key or SAS token for the Azure Blob linked service.
### Error code: DF-Blob-InvalidAuthConfiguration - **Message**: Only one of the two auth methods (Key, SAS) can be specified.-
-### Error code: DF-Blob-InvalidDataType
-- **Message**: Cloud type is invalid.
+- **Cause**: An invalid authentication method is provided in the linked service.
+- **Recommendation**: Use key or SAS authentication for the Azure Blob linked service.
### Error code: DF-Cosmos-PartitionKeyMissed - **Message**: Partition key path should be specified for update and delete operations.-- **Recommendation**: Use the providing partition key in Cosmos sink settings.
+- **Cause**: The partition key path is missing in the Azure Cosmos DB sink.
+- **Recommendation**: Specify the partition key in the Azure Cosmos DB sink settings.
### Error code: DF-Cosmos-InvalidPartitionKey - **Message**: Partition key path cannot be empty for update and delete operations.-- **Recommendation**: Use the providing partition key in Cosmos sink settings.
+- **Cause**: The partition key path is empty for update and delete operations.
+- **Recommendation**: Specify the partition key in the Azure Cosmos DB sink settings.
+ ### Error code: DF-Cosmos-IdPropertyMissed - **Message**: 'id' property should be mapped for delete and update operations.-- **Recommendation**: Make sure that the input data has an `id` column in Cosmos sink settings. If no, use **select or derive transformation** to generate this column before sink.
+- **Cause**: The `id` property is missing for update and delete operations.
+- **Recommendation**: Make sure that the input data has an `id` column in the Cosmos DB sink settings. If it doesn't, use a **select or derive transformation** to generate this column before the sink.
### Error code: DF-Cosmos-InvalidPartitionKeyContent - **Message**: partition key should start with /.-- **Recommendation**: Make the partition key start with `/` in Cosmos sink settings, for example: `/movieId`.
+- **Cause**: An invalid partition key is provided.
+- **Recommendation**: Ensure that the partition key starts with `/` in the Cosmos DB sink settings, for example: `/movieId`.
### Error code: DF-Cosmos-InvalidPartitionKey-- **Message**: partitionKey not mapped in sink for delete and update operations.-- **Recommendation**: In Cosmos sink settings, use the partition key that is same as your container's partition key.
+- **Message**: Partition key is not mapped in sink for delete and update operations.
+- **Cause**: An invalid partition key is provided.
+- **Recommendation**: In the Cosmos DB sink settings, use the partition key that is the same as your container's partition key.
### Error code: DF-Cosmos-InvalidConnectionMode-- **Message**: Invalid connectionMode.-- **Recommendation**: Confirm that the supported mode is **Gateway** and **DirectHttps** in Cosmos settings.
+- **Message**: Invalid connection mode.
+- **Cause**: An invalid connection mode is provided.
+- **Recommendation**: In the Cosmos DB settings, use one of the supported connection modes: **Gateway** or **DirectHttps**.
### Error code: DF-Cosmos-InvalidAccountConfiguration - **Message**: Either accountName or accountEndpoint should be specified.
+- **Cause**: Invalid account information is provided.
+- **Recommendation**: In the Cosmos DB linked service, specify the account name or account endpoint.
### Error code: DF-Github-WriteNotSupported - **Message**: Github store does not allow writes.-
+- **Cause**: The GitHub store is read only.
+- **Recommendation**: The store entity definition is in some other place.
+
### Error code: DF-PGSQL-InvalidCredential - **Message**: User/password should be specified.-- **Recommendation**: Make sure you have right credential settings in the related postgresql linked service.
+- **Cause**: The user name or password is missing.
+- **Recommendation**: Make sure that you have the right credential settings in the related PostgreSQL linked service.
### Error code: DF-Snowflake-InvalidStageConfiguration - **Message**: Only blob storage type can be used as stage in snowflake read/write operation.
+- **Cause**: An invalid staging configuration is provided in Snowflake.
+- **Recommendation**: Update the Snowflake staging settings to ensure that only an Azure Blob linked service is used.
### Error code: DF-Snowflake-InvalidStageConfiguration - **Message**: Snowflake stage properties should be specified with azure blob + sas authentication.
+- **Cause**: An invalid staging configuration is provided in Snowflake.
+- **Recommendation**: Ensure that only the Azure Blob + SAS authentication is specified in the Snowflake staging settings.
### Error code: DF-Snowflake-InvalidDataType - **Message**: The spark type is not supported in snowflake.-- **Recommendation**: Use the **derive transformation** to change the related column of input data into the string type before snowflake sink.
+- **Cause**: An invalid data type is provided in Snowflake.
+- **Recommendation**: Use the derive transformation before the Snowflake sink to convert the related input data column to the string type.
### Error code: DF-Hive-InvalidBlobStagingConfiguration - **Message**: Blob storage staging properties should be specified.
+- **Cause**: An invalid staging configuration is provided in Hive.
+- **Recommendation**: Check whether the account name, account key, and container are set properly in the related Blob linked service that is used for staging.
### Error code: DF-Hive-InvalidGen2StagingConfiguration - **Message**: ADLS Gen2 storage staging only support service principal key credential.-- **Recommendation**: Confirm that you apply the service principal key credential in the ADLS Gen2 linked service that is used as staging.
+- **Cause**: An invalid staging configuration is provided in Hive.
+- **Recommendation**: Update the related ADLS Gen2 linked service that is used for staging. Currently, only the service principal key credential is supported.
### Error code: DF-Hive-InvalidGen2StagingConfiguration - **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnKey or miServiceUri/miServiceToken is required.-- **Recommendation**: Apply the right credential that is used as staging in the hive in the related ADLS Gen2 linked service.
+- **Cause**: An invalid staging configuration is provided in Hive.
+- **Recommendation**: Update the related ADLS Gen2 linked service with the right credentials that are used for staging in Hive.
### Error code: DF-Hive-InvalidDataType - **Message**: Unsupported Column(s).-- **Recommendation**: Update the column of input data to match the data type supported by the hive.
+- **Cause**: Unsupported column(s) are provided.
+- **Recommendation**: Update the column of input data to match the data type supported by Hive.
### Error code: DF-Hive-InvalidStorageType - **Message**: Storage type can either be blob or gen2.
+- **Cause**: Only Azure Blob or ADLS Gen2 storage type is supported.
+- **Recommendation**: Choose the right storage type from Azure Blob or ADLS Gen2.
### Error code: DF-Delimited-InvalidConfiguration - **Message**: Either one of empty lines or custom header should be specified.-- **Recommendation**: Specify empty lines or custom headers in CSV settings.
+- **Cause**: An invalid delimited configuration is provided.
+- **Recommendation**: Update the CSV settings to specify either empty lines or a custom header.
### Error code: DF-Delimited-ColumnDelimiterMissed-- **Message**: Column delimiter is required for parse.-- **Recommendation**: Confirm you have the column delimiter in your CSV settings.
+- **Message**: Column delimiter is required for parse.
+- **Cause**: The column delimiter is missing.
+- **Recommendation**: In your CSV settings, confirm that you have specified the column delimiter, which is required for parsing.
### Error code: DF-MSSQL-InvalidCredential - **Message**: Either one of user/pwd or tenant/spnId/spnKey or miServiceUri/miServiceToken should be specified.-- **Recommendation**: Apply right credentials in the related MSSQL linked service.
+- **Cause**: An invalid credential is provided in the MSSQL linked service.
+- **Recommendation**: Update the related MSSQL linked service with the right credentials; one of **user/pwd**, **tenant/spnId/spnKey**, or **miServiceUri/miServiceToken** must be specified.
### Error code: DF-MSSQL-InvalidDataType - **Message**: Unsupported field(s).
+- **Cause**: Unsupported field(s) are provided.
- **Recommendation**: Modify the input data column to match the data type supported by MSSQL. ### Error code: DF-MSSQL-InvalidAuthConfiguration - **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.-- **Recommendation**: You can only specify one of the three auth methods (Key, ServicePrincipal and MI) in the related MSSQL linked service.
+- **Cause**: An invalid authentication method is provided in the MSSQL linked service.
+- **Recommendation**: You can only specify one of the three authentication methods (Key, ServicePrincipal and MI) in the related MSSQL linked service.
### Error code: DF-MSSQL-InvalidCloudType - **Message**: Cloud type is invalid.
+- **Cause**: An invalid cloud type is provided.
- **Recommendation**: Check your cloud type in the related MSSQL linked service. ### Error code: DF-SQLDW-InvalidBlobStagingConfiguration - **Message**: Blob storage staging properties should be specified.
+- **Cause**: Invalid blob storage staging settings are provided.
+- **Recommendation**: Check whether the Blob linked service used for staging has the correct properties.
### Error code: DF-SQLDW-InvalidStorageType - **Message**: Storage type can either be blob or gen2.
+- **Cause**: An invalid storage type is provided for staging.
+- **Recommendation**: Check the storage type of the linked service used for staging and make sure that it is Blob or Gen2.
### Error code: DF-SQLDW-InvalidGen2StagingConfiguration - **Message**: ADLS Gen2 storage staging only support service principal key credential.
+- **Cause**: An invalid credential is provided for the ADLS Gen2 storage staging.
+- **Recommendation**: Use the service principal key credential of the Gen2 linked service used for staging.
+
### Error code: DF-SQLDW-InvalidConfiguration - **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required.
+- **Cause**: Invalid ADLS Gen2 staging properties are provided.
+- **Recommendation**: Update the ADLS Gen2 storage staging settings to specify one of **key**, **tenant/spnId/spnCredential/spnCredentialType**, or **miServiceUri/miServiceToken**.
### Error code: DF-DELTA-InvalidConfiguration - **Message**: Timestamp and version can't be set at the same time.
+- **Cause**: The timestamp and version can't be set at the same time.
+- **Recommendation**: Set the timestamp or version in the delta settings.
### Error code: DF-DELTA-KeyColumnMissed - **Message**: Key column(s) should be specified for non-insertable operations.
+- **Cause**: Key column(s) are missing for non-insertable operations.
+- **Recommendation**: Specify key column(s) on the delta sink to enable non-insertable operations.
### Error code: DF-DELTA-InvalidTableOperationSettings - **Message**: Recreate and truncate options can't be both specified.
+- **Cause**: Recreate and truncate options can't be specified simultaneously.
+- **Recommendation**: Update the delta settings to use either the recreate or the truncate operation.
### Error code: DF-Excel-WorksheetConfigMissed - **Message**: Excel sheet name or index is required.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the excel data.
+- **Cause**: An invalid Excel worksheet configuration is provided.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
### Error code: DF-Excel-InvalidWorksheetConfiguration - **Message**: Excel sheet name and index cannot exist at the same time.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the excel data.
+- **Cause**: The Excel sheet name and index are provided at the same time.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
### Error code: DF-Excel-InvalidRange - **Message**: Invalid range is provided.
+- **Cause**: An invalid range is provided.
- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](./format-excel.md#dataset-properties). ### Error code: DF-Excel-WorksheetNotExist - **Message**: Excel worksheet does not exist.-- **Recommendation**: Check the parameter value and specify the valid sheet name or index to read the excel data.
+- **Cause**: An invalid worksheet name or index is provided.
+- **Recommendation**: Check the parameter value and specify a valid sheet name or index to read the Excel data.
### Error code: DF-Excel-DifferentSchemaNotSupport - **Message**: Read excel files with different schema is not supported now.
+- **Cause**: Reading Excel files with different schemas is not currently supported.
+- **Recommendation**: Apply one of the following options to solve this problem:
+ 1. Use **ForEach** + **data flow** activity to read Excel worksheets one by one.
+ 1. Update each worksheet schema to have the same columns manually before reading data.
### Error code: DF-Excel-InvalidDataType - **Message**: Data type is not supported.
+- **Cause**: The data type is not supported.
+- **Recommendation**: Change the data type to **'string'** for the related input data columns.
### Error code: DF-Excel-InvalidFile - **Message**: Invalid excel file is provided while only .xlsx and .xls are supported.
+- **Cause**: Invalid Excel files are provided.
+- **Recommendation**: Use a wildcard filter to select only `.xls` and `.xlsx` Excel files before reading the data.
+
+### Error code: DF-Executor-OutOfMemorySparkBroadcastError
+- **Message**: Explicitly broadcasted dataset using left/right option should be small enough to fit in node's memory. You can choose broadcast option 'Off' in join/exists/lookup transformation to avoid this issue or use an integration runtime with higher memory.
+- **Cause**: The size of the broadcasted table far exceeds the limitation of the node memory.
+- **Recommendation**: Use the broadcast left/right option only for datasets that are small enough to fit in the node's memory. Make sure to configure the node size appropriately, or turn off the broadcast option.
+
+### Error code: DF-MSSQL-InvalidFirewallSetting
+- **Message**: The TCP/IP connection to the host has failed. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
+- **Cause**: The SQL database's firewall settings block the data flow from accessing it.
+- **Recommendation**: Check the firewall settings for your SQL database, and allow Azure services and resources to access this server. One way to do this is sketched below.
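As a hedged illustration only, the following PowerShell sketch adds the firewall rule that lets Azure services reach the logical SQL server. It assumes the Az.Sql module is installed and signed in, and the resource group and server names are placeholders.

```powershell
# Allow Azure services and resources to access this logical SQL server (placeholder names).
New-AzSqlServerFirewallRule -ResourceGroupName "my-resource-group" -ServerName "my-sql-server" -AllowAllAzureIPs
```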
+
+### Error code: DF-Executor-AcquireStorageMemoryFailed
+- **Message**: Transferring unroll memory to storage memory failed. Cluster ran out of memory during execution. Please retry using an integration runtime with more cores and/or memory optimized compute type.
+- **Cause**: The cluster has insufficient memory.
+- **Recommendation**: Please use an integration runtime with more cores and/or the memory optimized compute type.
+
+### Error code: DF-Cosmos-DeleteDataFailed
+- **Message**: Failed to delete data from cosmos after 3 times retry.
+- **Cause**: The throughput on the Cosmos DB collection is small and leads to throttling, or the row data doesn't exist in Cosmos DB.
+- **Recommendation**: Please take the following actions to solve this problem:
+    1. If the error is 404, make sure that the related row data exists in the Cosmos DB collection.
+    1. If the error is throttling, increase the Cosmos DB collection throughput or set it to autoscale, as shown in the sketch after this list.
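The sketch below shows one way to raise the container throughput from PowerShell. The cmdlet and parameter names are assumptions based on the Az.CosmosDB module and the resource names are placeholders; verify them against the current module documentation, or make the change in the Azure portal instead.

```powershell
# Increase the provisioned throughput (RU/s) on the container that the data flow writes to (placeholder names).
Update-AzCosmosDBSqlContainerThroughput -ResourceGroupName "my-resource-group" `
    -AccountName "my-cosmos-account" -DatabaseName "my-database" -Name "my-container" `
    -Throughput 10000
```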
+
+### Error code: DF-SQLDW-ErrorRowsFound
+- **Message**: Error/Invalid rows found while writing to sql sink. Error/Invalid rows are written to the rejected data storage location if configured.
+- **Cause**: Error or invalid rows are found when writing to the SQL sink.
+- **Recommendation**: Please find the error rows in the rejected data storage location if it is configured.
+
+### Error code: DF-SQLDW-ExportErrorRowFailed
+- **Message**: Exception is happened while writing error rows to storage.
+- **Cause**: An exception happened while writing error rows to storage.
+- **Recommendation**: Please check your rejected data linked service configuration.
+
+### Error code: DF-Executor-FieldNotExist
+- **Message**: Field in struct does not exist.
+- **Cause**: Invalid or unavailable field names are used in expressions.
+- **Recommendation**: Check field names used in expressions.
+
+### Error code: DF-Xml-InvalidElement
+- **Message**: XML Element has sub elements or attributes which can't be converted.
+- **Cause**: The XML element has sub elements or attributes which can't be converted.
+- **Recommendation**: Update the XML file so that the XML element has the right sub elements or attributes.
+
+### Error code: DF-GEN2-InvalidCloudType
+- **Message**: Cloud type is invalid.
+- **Cause**: An invalid cloud type is provided.
+- **Recommendation**: Check the cloud type in your related ADLS Gen2 linked service.
+### Error code: DF-Blob-InvalidCloudType
+- **Message**: Cloud type is invalid.
+- **Cause**: An invalid cloud type is provided.
+- **Recommendation**: Please check the cloud type in your related Azure Blob linked service.
+
+### Error code: DF-Cosmos-FailToResetThroughput
+- **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.
+- **Cause**: The throughput scale operation of the Cosmos DB cannot be performed because another scale operation is in progress.
+- **Recommendation**: Sign in to your Cosmos DB account, and manually change the container's throughput to autoscale, or add custom activities after the data flow to reset the throughput.
+
+### Error code: DF-Executor-InvalidPath
+- **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.
+- **Cause**: An invalid file/folder path is provided, which cannot be found or accessed.
+- **Recommendation**: Check the file/folder path, and make sure that it exists and can be accessed in your storage, for example with the quick check sketched below.
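For example, if the store is Azure Blob storage, a quick check like the following PowerShell sketch confirms that the folder path used in the data flow resolves to files; the account, key, container, and prefix values are placeholders.

```powershell
# List blobs under the source folder path to confirm that it exists and is reachable (placeholder values).
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"
Get-AzStorageBlob -Container "mycontainer" -Prefix "input/folder/" -Context $ctx | Select-Object Name, Length
```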
+
+### Error code: DF-Executor-InvalidPartitionFileNames
+- **Message**: File names cannot have empty value(s) while file name option is set as per partition.
+- **Cause**: Invalid partition file names are provided.
+- **Recommendation**: Check your sink settings to make sure that the file names have valid values.
+
+### Error code: DF-Executor-InvalidOutputColumns
+- **Message**: The result has 0 output columns. Please ensure at least one column is mapped.
+- **Cause**: No column is mapped.
+- **Recommendation**: Please check the sink schema to ensure that at least one column is mapped.
+
+### Error code: DF-Executor-InvalidInputColumns
+- **Message**: The column in source configuration cannot be found in source data's schema.
+- **Cause**: Invalid columns are provided on the source.
+- **Recommendation**: Check the columns in the source configuration and make sure that they are a subset of the source data's schema.
+
+### Error code: DF-AdobeIntegration-InvalidMapToFilter
+- **Message**: Custom resource can only have one Key/Id mapped to filter.
+- **Cause**: Invalid configurations are provided.
+- **Recommendation**: In your AdobeIntegration settings, make sure that the custom resource can only have one Key/Id mapped to filter.
+
+### Error code: DF-AdobeIntegration-InvalidPartitionConfiguration
+- **Message**: Only single partition is supported. Partition schema may be RoundRobin or Hash.
+- **Cause**: Invalid partition configurations are provided.
+- **Recommendation**: In the AdobeIntegration settings, confirm that only a single partition is set and that the partition schema is RoundRobin or Hash.
+
+### Error code: DF-AdobeIntegration-KeyColumnMissed
+- **Message**: Key must be specified for non-insertable operations.
+- **Cause**: Key columns are missing.
+- **Recommendation**: Update AdobeIntegration settings to ensure key columns are specified for non-insertable operations.
+
+### Error code: DF-AdobeIntegration-InvalidPartitionType
+- **Message**: Partition type has to be roundRobin.
+- **Cause**: Invalid partition types are provided.
+- **Recommendation**: Update the AdobeIntegration settings so that your partition type is RoundRobin.
+
+### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation
+- **Message**: Only privacy regulation supported currently is gdpr.
+- **Cause**: Invalid privacy configurations are provided.
+- **Recommendation**: Update the AdobeIntegration settings; only the 'GDPR' privacy regulation is currently supported.
## Miscellaneous troubleshooting tips - **Issue**: Unexpected exception occurred and execution failed.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
Below data sources are supported to connect through private link from ADF Manage
- Azure Database for MariaDB ### Azure Data Factory Managed Virtual Network is available in the following Azure regions:-- East US-- East US 2-- West Central US-- West US-- West US 2-- South Central US-- Central US-- North Europe-- West Europe-- UK South-- Southeast Asia - Australia East - Australia Southeast-- Norway East
+- Brazil South
+- Canada Central
+- Canada East
+- Central India
+- Central US
+- East US
+- East US 2
+- France Central
- Japan East - Japan West - Korea Central-- Brazil South-- France Central
+- North Europe
+- Norway East
+- South Africa North
+- South Central US
+- Southeast Asia
- Switzerland North
+- UAE North
+- UK South
- UK West-- Canada East-- Canada Central
+- West Central US
+- West Europe
+- West US
+- West US 2
+ ### Outbound communications through public endpoint from ADF Managed Virtual Network-- Only port 443 is opened for outbound communications.
+- All ports are opened for outbound communications.
- Azure Storage and Azure Data Lake Gen2 are not supported to be connected through public endpoint from ADF Managed Virtual Network. ### Linked Service creation of Azure Key Vault
data-share Data Share Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/data-share-troubleshoot.md
Previously updated : 12/16/2020 Last updated : 04/22/2021 # Troubleshoot common problems in Azure Data Share
This article explains how to troubleshoot common problems in Azure Data Share.
## Azure Data Share invitations
-In some cases, when new users select **Accept Invitation** in an email invitation, they might see an empty list of invitations.
--
-This problem could have one of the following causes:
+In some cases, when new users select **Accept Invitation** in an email invitation, they might see an empty list of invitations. This problem could have one of the following causes:
* **The Azure Data Share service isn't registered as a resource provider of any Azure subscription in the Azure tenant.** This problem happens when your Azure tenant has no Data Share resource.
For storage accounts, a snapshot can fail because a file is being updated at the
For SQL sources, a snapshot can fail for these other reasons:
-* The source SQL script or target SQL script that grants Data Share permission hasn't run. Or for Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL Data Warehouse), the script runs by using SQL authentication rather than Azure Active Directory authentication.
+* The source SQL script or target SQL script that grants Data Share permission hasn't run. Or for Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL Data Warehouse), the script runs by using SQL authentication rather than Azure Active Directory authentication. You can run the following query to check whether the Data Share account has the proper permissions on the SQL database. For the source SQL database, the query result should show that the Data Share account has the *db_datareader* role. For the target SQL database, the query result should show that the Data Share account has the *db_datareader*, *db_datawriter*, and *db_ddladmin* roles.
+
+ ```sql
+ SELECT DP1.name AS DatabaseRoleName,
+ isnull (DP2.name, 'No members') AS DatabaseUserName
+ FROM sys.database_role_members AS DRM
+ RIGHT OUTER JOIN sys.database_principals AS DP1
+ ON DRM.role_principal_id = DP1.principal_id
+ LEFT OUTER JOIN sys.database_principals AS DP2
+ ON DRM.member_principal_id = DP2.principal_id
+ WHERE DP1.type = 'R'
+ ORDER BY DP1.name;
+ ```
+ * The source data store or target SQL data store is paused. * The snapshot process or target data store doesn't support SQL data types. For more information, see [Share from SQL sources](how-to-share-from-sql.md#supported-data-types). * The source data store or target SQL data store is locked by other processes. Azure Data Share doesn't lock these data stores. But existing locks on these data stores can make a snapshot fail. * The target SQL table is referenced by a foreign key constraint. During a snapshot, if a target table has the same name as a table in the source data, Azure Data Share drops the table and creates a new table. If the target SQL table is referenced by a foreign key constraint, the table can't be dropped. * A target CSV file is generated, but the data can't be read in Excel. You might see this problem when the source SQL table contains data that includes non-English characters. In Excel, select the **Get Data** tab and choose the CSV file. Select the file origin **65001: Unicode (UTF-8)**, and then load the data.
-## Updated snapshot schedules
-After the data provider updates the snapshot schedule for the sent share, the data consumer needs to disable the previous snapshot schedule. Then enable the updated snapshot schedule for the received share.
+## Update snapshot schedule
+After the data provider updates the snapshot schedule for the sent share, the data consumer needs to disable the previous snapshot schedule and then enable the updated snapshot schedule for the received share. The snapshot schedule is stored in UTC and shown in the UI as the computer's local time. It doesn't automatically adjust for daylight saving time.
+
+## In-place sharing
+Dataset mapping can fail for Azure Data Explorer clusters due to the following reasons:
+
+* The user doesn't have *write* permission to the Azure Data Explorer cluster. This permission is typically part of the Contributor role, which can be granted as shown in the sketch after this list.
+* The source or target Azure Data Explorer cluster is paused.
+* Source Azure Data Explorer cluster is EngineV2 and target is EngineV3, or vice versa. Sharing between Azure Data Explorer clusters of different engine versions is not supported.
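As a hedged example, the following PowerShell sketch grants the Contributor role on an Azure Data Explorer cluster; the sign-in name and the scope resource ID are placeholders, and your organization might prefer a narrower role that still includes *write* permission.

```powershell
# Grant Contributor on the Data Explorer cluster so dataset mapping can write to it (placeholder values).
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kusto/clusters/<cluster-name>"
```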
## Next steps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
Title: What's new in Azure Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 04/19/2021 Last updated : 04/25/2021
-# What's new in Azure Defender for IoT?
+# What's new in Azure Defender for IoT?
This article lists new features and feature enhancements for Defender for IoT.
-Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Versioning and support for Azure Defender for IoT
+
+The following sections describe the support and breaking change policies for Defender for IoT, and the versions of Azure Defender for IoT that are currently available.
+
+### Servicing information and timelines
+
+Microsoft plans to release updates for Azure Defender for IoT no less than once per quarter. Each general availability (GA) version of the Azure Defender for IoT sensor and on-premises management console is supported for up to nine months after its release. Fixes and new functionality are applied to the current GA versions that are in support, and are not applied to older GA versions.
+
+### Versions and support dates
+
+| Version | Date released | End support date |
+|--|--|--|
+| 10.0 | 01/2021 | 10/2021 |
+| 10.3 | 04/2021 | 02/2022 |
## April 2021
API version 2 is required when working with the new fields.
### Features delivered as Generally Available (GA)
-The following features were previously available for Public Preview, and are now Generally Available (GA)features:
+The following features were previously available for Public Preview, and are now Generally Available (GA) features:
- Sensor - enhanced custom alert rules - On-premises management console - export alerts
Certificate and password recovery enhancements were made for this release.
This version lets you: - Upload SSL certificates directly to the sensors and on-premises management consoles.-- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity and Certificate Revocation Lists. If validation fails, the session will not continue.
+- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session will not continue.
For upgrades:
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
producer.SendAsync(events, sendOptions)
### [Java](#tab/java)
-To send events to a specific partition, create the batch using the [createBatch](/java/api/com.azure.messaging.eventhubs.eventhubproducerclient.createbatch) method by specifying either **partition ID** or **partition key** in [createBatchOptions](/java/api/com.azure.messaging.eventhubs.models.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key.
+To send events to a specific partition, create the batch using the [createBatch](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/main/java/com/azure/messaging/eventhubs/EventHubProducerClient.java) method by specifying either **partition ID** or **partition key** in [createBatchOptions](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/main/java/com/azure/messaging/eventhubs/models/CreateBatchOptions.java). The following code sends a batch of events to a specific partition by specifying a partition key.
```java CreateBatchOptions batchOptions = new CreateBatchOptions(); batchOptions.setPartitionKey("cities"); ```
-You can also use the [EventHubProducerClient.send](/java/api/com.azure.messaging.eventhubs.eventhubproducerclient.send#com_azure_messaging_eventhubs_EventHubProducerClient_send_java_lang_Iterable_com_azure_messaging_eventhubs_EventData__com_azure_messaging_eventhubs_models_SendOptions_) method by specifying either **partition ID** or **partition key** in [SendOptions](/java/api/com.azure.messaging.eventhubs.models.sendoptions).
+You can also use the [EventHubProducerClient.send](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/main/java/com/azure/messaging/eventhubs/EventHubProducerClient.java) method by specifying either **partition ID** or **partition key** in [SendOptions](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/main/java/com/azure/messaging/eventhubs/models/SendOptions.java).
```java List<EventData> events = Arrays.asList(new EventData("Melbourne"), new EventData("London"), new EventData("New York"));
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
All Event Hubs consumers connect via an AMQP 1.0 session, a state-aware bidirect
When connecting to partitions, it's common practice to use a leasing mechanism to coordinate reader connections to specific partitions. This way, it's possible for every partition in a consumer group to have only one active reader. Checkpointing, leasing, and managing readers are simplified by using the clients within the Event Hubs SDKs, which act as intelligent consumer agents. These are: - The [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) for .NET-- The [EventProcessorClient](/java/api/com.azure.messaging.eventhubs.eventprocessorclient) for Java
+- The [EventProcessorClient](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/main/java/com/azure/messaging/eventhubs/EventProcessorClient.java) for Java
- The [EventHubConsumerClient](/python/api/azure-eventhub/azure.eventhub.aio.eventhubconsumerclient) for Python - The [EventHubConsumerClient](/javascript/api/@azure/event-hubs/eventhubconsumerclient) for JavaScript/TypeScript
For more information about Event Hubs, visit the following links:
* [Event Hubs programming guide](event-hubs-programming-guide.md) * [Availability and consistency in Event Hubs](event-hubs-availability-and-consistency.md) * [Event Hubs FAQ](event-hubs-faq.yml)
-* [Event Hubs samples](event-hubs-samples.md)
+* [Event Hubs samples](event-hubs-samples.md)
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Previously updated : 10/14/2019 Last updated : 04/23/2021
If you want to upgrade your gateway to a more powerful gateway SKU, in most case
### <a name="aggthroughput"></a>Estimated performances by gateway SKU The following table shows the gateway types and the estimated performances. This table applies to both the Resource Manager and classic deployment models. - > [!IMPORTANT] > Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment. > > > [!NOTE]
-> The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network remains at 4.
+> The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network is 4 for all gateways.
> > ++ ## <a name="gwsub"></a>Gateway subnet Before you create an ExpressRoute gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required ExpressRoute gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to deploy the virtual network gateway VMs and services to.
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/integrate-with-nat-gateway.md
+
+ Title: Scale SNAT ports with Azure NAT Gateway
+description: You can integrate Azure Firewall with NAT Gateway to increase SNAT ports.
++++ Last updated : 04/23/2021+++
+# Scale SNAT ports with Azure NAT Gateway
+
+Azure Firewall provides 2,048 SNAT ports per configured public IP address, and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 512,000 SNAT ports available with this configuration, for example when you use it to protect large [Windows Virtual Desktop deployments](./protect-windows-virtual-desktop.md) that integrate with Microsoft 365 Apps.
+
+Another challenge with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
+
+A better option to scale outbound SNAT ports is to use [NAT gateway resource](../virtual-network/nat-overview.md). It provides 64,000 SNAT ports per public IP address and supports up to 16 public IP addresses, effectively providing up to 1,024,000 outbound SNAT ports.
+
+When a NAT gateway resource is associated with an Azure Firewall subnet, all outbound Internet traffic automatically uses the public IP address of the NAT gateway. There is no need to configure [User Defined Routes](../virtual-network/tutorial-create-route-table-portal.md). Response traffic uses the Azure Firewall public IP address to maintain flow symmetry. If there are multiple IP addresses associated with the NAT gateway, the IP address is randomly selected. It isn't possible to specify which address to use.
+
+There is no double NAT with this architecture. Azure Firewall instances send the traffic to the NAT gateway by using their private IP address rather than the Azure Firewall public IP address.
+
+## Associate NAT gateway with Azure Firewall subnet - Azure PowerShell
+
+The following example creates and attaches a NAT gateway with an Azure Firewall subnet using Azure PowerShell.
+
+```azurepowershell-interactive
+# Create public IP addresses
+New-AzPublicIpAddress -Name public-ip-1 -ResourceGroupName nat-rg -Sku Standard -AllocationMethod Static -Location 'South Central US'
+New-AzPublicIpAddress -Name public-ip-2 -ResourceGroupName nat-rg -Sku Standard -AllocationMethod Static -Location 'South Central US'
+
+# Create NAT gateway
+$PublicIPAddress1 = Get-AzPublicIpAddress -Name public-ip-1 -ResourceGroupName nat-rg
+$PublicIPAddress2 = Get-AzPublicIpAddress -Name public-ip-2 -ResourceGroupName nat-rg
+New-AzNatGateway -Name firewall-nat -ResourceGroupName nat-rg -PublicIpAddress $PublicIPAddress1,$PublicIPAddress2 -Location 'South Central US' -Sku Standard
+
+# Associate NAT gateway to subnet
+$virtualNetwork = Get-AzVirtualNetwork -Name nat-vnet -ResourceGroupName nat-rg
+$natGateway = Get-AzNatGateway -Name firewall-nat -ResourceGroupName nat-rg
+$firewallSubnet = $virtualNetwork.subnets | Where-Object -Property Name -eq AzureFirewallSubnet
+$firewallSubnet.NatGateway = $natGateway
+$virtualNetwork | Set-AzVirtualNetwork
+```
+
+## Associate NAT gateway with Azure Firewall subnet - Azure CLI
+
+The following example creates and attaches a NAT gateway with an Azure Firewall subnet using Azure CLI.
+
+```azurecli-interactive
+# Create public IP addresses
+az network public-ip create --name public-ip-1 --resource-group nat-rg --sku standard
+az network public-ip create --name public-ip-2 --resource-group nat-rg --sku standard
+
+# Create NAT gateway
+az network nat gateway create --name firewall-nat --resource-group nat-rg --public-ip-addresses public-ip-1 public-ip-2
+
+# Associate NAT gateway to subnet
+az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet --resource-group nat-rg --nat-gateway firewall-nat
+```
+
+## Next steps
+
+- [Designing virtual networks with NAT gateway resources](../virtual-network/nat-gateway-resource.md)
governance Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/extension-for-vscode.md
Title: Azure Policy extension for Visual Studio Code description: Learn how to use the Azure Policy extension for Visual Studio Code to look up Azure Resource Manager aliases. Previously updated : 01/11/2021 Last updated : 04/25/2021 # Use Azure Policy extension for Visual Studio Code
> Applies to Azure Policy extension version **0.1.1** and newer Learn how to use the Azure Policy extension for Visual Studio Code to look up
-[aliases](../concepts/definition-structure.md#aliases), review resources and policies, export
-objects, and evaluate policy definitions. First, we'll describe how to install the Azure Policy
-extension in Visual Studio Code. Then we'll walk through how to look up aliases.
+[aliases](../concepts/definition-structure.md#aliases), review resources and policy definitions,
+export objects, and evaluate policy definitions. First, we'll describe how to install the Azure
+Policy extension in Visual Studio Code. Then we'll walk through how to look up aliases.
The Azure Policy extension for Visual Studio Code can be installed on Windows.
For a national cloud user, follow these steps to set the Azure environment first
## Using the Policy extension > [!NOTE]
-> Changes made locally to policies viewed in the Azure Policy extension for Visual Studio Code
-> aren't synced to Azure.
+> Changes made locally to policy definitions viewed in the Azure Policy extension for Visual Studio
+> Code aren't synced to Azure.
### Connect to an Azure account
to connect to Azure from Visual Studio Code:
### Select subscriptions
-When you first sign in, only the default subscription resources and policies are loaded by the Azure
-Policy extension. To add or remove subscriptions from displaying resources and policies, follow
-these steps:
+When you first sign in, only the default subscription resources and policy definitions are loaded by
+the Azure Policy extension. To add or remove subscriptions from displaying resources and policy
+definitions, follow these steps:
1. Start the subscription command from the Command Palette or the window footer.
resource with the following steps:
- Command Palette:
- From the menu bar, go to **View** > **Command Palette**, and enter **Resources: Search
+ From the menu bar, go to **View** > **Command Palette**, and enter **Azure Policy: Search
Resources**. 1. If more than one subscription is selected for display, use the filter to select which
matching aliases.
> The VS Code extension only supports evaluation of Resource Manager mode properties. For more > information about the modes, see the [mode definitions](../concepts/definition-structure.md#mode).
-### Search for and view policies and assignments
+### Search for and view policy definitions and assignments
The Azure Policy extension lists policy types and policy assignments as a treeview for the subscriptions selected to be displayed in the **Policies** pane. Customers with hundreds or
-thousands of policies or assignments in a single subscription may prefer a searchable way to locate
-their policies or assignments. The Azure Policy extension makes it possible to search for a specific
-policy or assignment with the following steps:
+thousands of policy definitions or assignments in a single subscription may prefer a searchable way
+to locate their policy definitions or assignments. The Azure Policy extension makes it possible to
+search for a specific policy or assignment with the following steps:
1. Start the search interface from the Azure Policy extension or the Command Palette.
policy or assignment with the following steps:
- Command Palette:
- From the menu bar, go to **View** > **Command Palette**, and enter **Policies: Search
+ From the menu bar, go to **View** > **Command Palette**, and enter **Azure Policy: Search
Policies**. 1. If more than one subscription is selected for display, use the filter to select which
policy assignment.
> [!NOTE] > For [AuditIfNotExists](../concepts/effects.md#auditifnotexists) or > [DeployIfNotExists](../concepts/effects.md#deployifnotexists) policy definitions, use the plus
-> icon in the **Evaluation** pane to select a _related_ resource for the existence check.
+> icon in the **Evaluation** pane or **Azure Policy: Select a resource for existence check (only
+> used for if-not-exists policies)** from the Command Palette to select a _related_ resource for the
+> existence check.
The evaluation results provide information about the policy definition and policy assignment along with the **policyEvaluations.evaluationResult** property. The output looks similar to the following
From the menu bar, go to **View** > **Command Palette**, and then enter **Azure:
- Review examples at [Azure Policy samples](../samples/index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md). - Review [Understanding policy effects](../concepts/effects.md).-- Understand how to [programmatically create policies](programmatically-create.md).
+- Understand how to [programmatically create policy definitions](programmatically-create.md).
- Learn how to [remediate non-compliant resources](remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
iot-dps Quick Create Device Symmetric Key Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-device-symmetric-key-csharp.md
Title: Quickstart - Use symmetric key to provision a device to Azure IoT Hub usi
description: In this quickstart, you will use the C# device SDK for the Device Provisioning Service (DPS) to provision a symmetric key device to an IoT hub Previously updated : 10/21/2020 Last updated : 04/23/2021
In this quickstart, you will learn how to provision a Windows development machin
Although this article demonstrates provisioning with an individual enrollment, you can also use enrollment groups. There are some differences when using enrollment groups. For example, you must use a derived device key with a unique registration ID for the device. [Provision devices with symmetric keys](how-to-legacy-device-symm-key.md) provides an enrollment group example. For more information on enrollment groups, see [Group Enrollments for Symmetric Key Attestation](concepts-symmetric-key-attestation.md#group-enrollments).
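Because enrollment groups require a derived device key, here is a hedged PowerShell sketch of the standard HMAC-SHA256 derivation; `$groupPrimaryKey` and `$registrationId` are placeholders for your enrollment group primary key and the device's registration ID.

```powershell
# Derive a device-specific key from the enrollment group primary key (placeholder variable values).
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($groupPrimaryKey)
$derivedKey = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($registrationId)))
Write-Output "Derived device key: $derivedKey"
```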
-If you're unfamiliar with the process of auto-provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
+If you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
Also, make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing with this quickstart. This quickstart requires you to have already created your Device Provisioning Service instance.
This article is oriented toward a Windows-based workstation. However, you can pe
4. Once you have saved your enrollment, the **Primary Key** and **Secondary Key** will be generated and added to the enrollment entry. Your symmetric key device enrollment appears as **symm-key-csharp-device-01** under the *Registration ID* column in the *Individual Enrollments* tab.
-5. Open the enrollment and copy the value of your generated **Primary Key** and **Secondary Key**. You will use this key value and the **Registration ID** later when you add environment variables for use with the device provisioning sample code.
+5. Open the enrollment and copy the value of your generated **Primary Key**. You will use this key value and the **Registration ID** later when you run the device provisioning sample code.
This article is oriented toward a Windows-based workstation. However, you can pe
<a id="firstbootsequence"></a>
-## Prepare the device provisioning code
+## Run the device provisioning code
-In this section, you will add the following four environment variables that will be used as parameters for the device provisioning sample code to provision your symmetric key device.
+In this section, you will run the device provisioning sample using three parameters that will authenticate the device provisioning sample code as the symmetric key device for the enrollment in your DPS resource. These three parameters are:
-* `DPS_IDSCOPE`
-* `PROVISIONING_REGISTRATION_ID`
-* `PRIMARY_SYMMETRIC_KEY`
-* `SECONDARY_SYMMETRIC_KEY`
+* ID Scope
+* Registration ID for an individual enrollment.
+* Primary symmetric key for an individual enrollment.
-The provisioning code will contact the DPS instance based on these variables in order to authenticate your device. The device will then be assigned to an IoT hub already linked to the DPS instance based on the individual enrollment configuration. Once provisioned, the sample code will send some test telemetry to the IoT hub.
+The provisioning code will contact the DPS resource using these parameters in order to authenticate your device. The device will then be assigned to an IoT hub already linked to the DPS instance based on the individual enrollment configuration. Once provisioned, the sample code will send a test telemetry message to the IoT hub.
-1. In the [Azure portal](https://portal.azure.com), on your Device Provisioning Service menu, select **Overview** and copy your _Service Endpoint_ and _ID Scope_. You will use these values for the `PROVISIONING_HOST` and `DPS_IDSCOPE` environment variables.
-
- ![Service information](./media/quick-create-device-symmetric-key-csharp/extract-dps-endpoints.png)
+1. In the [Azure portal](https://portal.azure.com), on your Device Provisioning Service menu, select **Overview** and copy your **ID Scope** value. You will use this value for the `IdScope` parameter when running the sample code.
2. Open a command prompt and navigate to the *SymmetricKeySample* in the cloned samples repository: ```cmd
- cd provisioning\Samples\device\SymmetricKeySample
+ cd azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample
```
-3. In the *SymmetricKeySample* folder, open *Program.cs* in a text editor and find the lines of code that set the `individualEnrollmentPrimaryKey` and `individualEnrollmentSecondaryKey` strings. Update those lines of code as follows so that the environment variables are used instead of hard coding the keys.
+3. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
- ```csharp
- //These are the two keys that belong to your individual enrollment.
- // Leave them blank if you want to try this sample for an individual enrollment instead
- //private const string individualEnrollmentPrimaryKey = "";
- //private const string individualEnrollmentSecondaryKey = "";
-
- private static string individualEnrollmentPrimaryKey = Environment.GetEnvironmentVariable("PRIMARY_SYMMETRIC_KEY");;
- private static string individualEnrollmentSecondaryKey = Environment.GetEnvironmentVariable("SECONDARY_SYMMETRIC_KEY");;
- ```
-
- Also, find the line of code that sets the `registrationId` string and update it as follows to also use an environment variable as follows:
-
- ```csharp
- //This field is mandatory to provide for this sample
- //private static string registrationId = "";
-
- private static string registrationId = Environment.GetEnvironmentVariable("PROVISIONING_REGISTRATION_ID");;
- ```
-
- Save the changes to *Program.cs*.
-
-3. In your command prompt, add the environment variables for the ID Scope, registration ID, primary, and secondary symmetric keys you copied from the individual enrollment in the previous section.
-
- The following commands are examples to show command syntax. Make sure to use your correct values.
-
- ```console
- set DPS_IDSCOPE=0ne00000A0A
- ```
-
- ```console
- set PROVISIONING_REGISTRATION_ID=symm-key-csharp-device-01
- ```
-
- ```console
- set PRIMARY_SYMMETRIC_KEY=sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
- ```
-
+ | Parameter | Required | Description |
+ | :-- | :- | :-- |
+ | `--s` or `--IdScope` | True | The ID Scope of the DPS instance |
+ | `--i` or `--Id` | True | The registration ID when using individual enrollment, or the desired device ID when using group enrollment. |
+ | `--p` or `--PrimaryKey` | True | The primary key of the individual or group enrollment. |
+ | `--e` or `--EnrollmentType` | False | The type of enrollment: `Individual` or `Group`. Defaults to `Individual` |
+ | `--g` or `--GlobalDeviceEndpoint` | False | The global endpoint for devices to connect to. Defaults to `global.azure-devices-provisioning.net` |
+ | `--t` or `--TransportType` | False | The transport to use to communicate with the device provisioning instance. Defaults to `Mqtt`. Possible values include `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`, `Amqp`, `Amqp_WebSocket_Only`, `Amqp_Tcp_only`, and `Http1`.|
+
+4. In the *SymmetricKeySample* folder, open *ProvisioningDeviceClientSample.cs* in a text editor. This file shows how the [SecurityProviderSymmetricKey](/dotnet/api/microsoft.azure.devices.shared.securityprovidersymmetrickey?view=azure-dotnet&preserve-view=true) class is used along with the [ProvisioningDeviceClient](/dotnet/api/microsoft.azure.devices.provisioning.client.provisioningdeviceclient?view=azure-dotnet&preserve-view=true) class to provision your symmetric key device. Review the code in this file. No changes are needed. (A condensed sketch of this flow appears after these steps.)
+
+5. Build and run the sample code using the following command after replacing the three example parameters. Use your correct values for ID Scope, enrollment registration ID, and enrollment primary key.
+
```console
- set SECONDARY_SYMMETRIC_KEY=Zx8/eE7PUBmnouB1qlNQxI7fcQ2HbJX+y96F1uCVQvDj88jFL+q6L9YWLLi4jqTmkRPOulHlSbSv2uFgj4vKtw==
- ```
-
+ dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ ```
-4. Build and run the sample code using the following command.
-
- ```console
- dotnet run
- ```
-5. The expected output should look similar to the following which shows the linked IoT hub that the device was assigned to based on the individual enrollment settings. An example "TestMessage" string is sent to the hub as a test:
+6. The expected output should look similar to the following, which shows the linked IoT hub that the device was assigned to based on the individual enrollment settings. An example "TestMessage" string is sent to the hub as a test:
```output
- D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run
- RegistrationID = symm-key-csharp-device-01
- ProvisioningClient RegisterAsync . . . Assigned
- ProvisioningClient AssignedHub: docs-test-iot-hub.azure-devices.net; DeviceID: csharp-device-01
- Creating Symmetric Key DeviceClient authentication
- DeviceClient OpenAsync.
- DeviceClient SendEventAsync.
- DeviceClient CloseAsync.
- Enter any key to exit
+ D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+
+ Initializing the device provisioning client...
+ Initialized for registration Id symm-key-csharp-device-01.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device csharp-device-01 registered to ExampleIoTHub.azure-devices.net.
+ Creating symmetric key authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ Enter any key to exit.
```
-6. In the Azure portal, navigate to the IoT hub linked to your provisioning service and open the **IoT devices** blade. After successfully provisioning the symmetric key device to the hub, the device ID is shown with *STATUS* as **enabled**. You might need to press the **Refresh** button at the top if you already opened the blade prior to running the device sample code.
+7. In the Azure portal, navigate to the IoT hub linked to your provisioning service and open the **IoT devices** blade. After successfully provisioning the symmetric key device to the hub, the device ID is shown with *STATUS* as **enabled**. You might need to press the **Refresh** button at the top if you already opened the blade prior to running the device sample code.
![Device is registered with the IoT hub](./media/quick-create-device-symmetric-key-csharp/hub-registration-csharp.png)
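For reference, the flow that *ProvisioningDeviceClientSample.cs* implements (see step 4 above) can be condensed into a sketch like the following. This is not the sample code itself: the class name, the placeholder values, and the simplified flow without error handling are illustrative assumptions, so treat it as a minimal outline of how the provisioning and device client APIs fit together rather than a drop-in replacement for the sample.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

// Hypothetical minimal program; the real sample adds argument parsing, status checks, and error handling.
class ProvisionSketch
{
    static async Task Main()
    {
        // Placeholder values -- substitute your ID Scope, enrollment registration ID, and primary key.
        const string idScope = "<your-id-scope>";
        const string registrationId = "<your-registration-id>";
        const string primaryKey = "<your-enrollment-primary-key>";

        // Authenticate to DPS with the enrollment's symmetric key over MQTT.
        using var security = new SecurityProviderSymmetricKey(registrationId, primaryKey, null);
        using var transport = new ProvisioningTransportHandlerMqtt();
        ProvisioningDeviceClient provClient = ProvisioningDeviceClient.Create(
            "global.azure-devices-provisioning.net", idScope, security, transport);

        // Register the device; DPS assigns it to the linked IoT hub.
        // (The full sample also checks result.Status before connecting.)
        DeviceRegistrationResult result = await provClient.RegisterAsync();
        Console.WriteLine($"Assigned to {result.AssignedHub} as device {result.DeviceId}.");

        // Connect to the assigned hub with the same key and send one test message.
        var auth = new DeviceAuthenticationWithRegistrySymmetricKey(result.DeviceId, security.GetPrimaryKey());
        using var deviceClient = DeviceClient.Create(result.AssignedHub, auth, TransportType.Mqtt);
        await deviceClient.SendEventAsync(new Message(Encoding.UTF8.GetBytes("TestMessage")));
    }
}
```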
iot-dps Quick Create Simulated Device X509 Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-x509-csharp.md
You will use sample code from the [X509Sample](https://github.com/Azure-Samples/
If you plan to continue working on and exploring the device client sample, do not clean up the resources created in this quickstart. If you do not plan to continue, use the following steps to delete all resources created by this quickstart. 1. Close the device client sample output window on your machine.
-1. Close the TPM simulator window on your machine.
1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning service. At the top of the **Overview** blade, press **Delete** at the top of the pane. 1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. At the top of the **Overview** blade, press **Delete** at the top of the pane.
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-compatibility.md
Last updated 3/18/2020
This article describes the drivers and management tools that are compatible with Azure Database for MySQL Single Server. >[!NOTE]
->This article is only applicable to Azure Database for MySQL Single Server to ensure drivers are compatible with [connectivity architecture](concepts-connectivity-architecture.md) of Single Server service. [Azure Database for MySQL Flexible Server](/../flexible-server/overview.md) is compatible with all the drivers and tools supported and compatible with MySQL community edition.
+>This article applies only to Azure Database for MySQL Single Server, to ensure drivers are compatible with the [connectivity architecture](concepts-connectivity-architecture.md) of the Single Server service. [Azure Database for MySQL Flexible Server](/azure/mysql/flexible-server/overview) is compatible with all the drivers and tools supported by and compatible with the MySQL community edition.
## MySQL Drivers Azure Database for MySQL uses the world's most popular community edition of MySQL database. Therefore, it is compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts with authors from the open source community to constantly improve the functionality and usability of MySQL drivers continue. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table:
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-dbforge-studio-for-mysql.md
Title: Connect to Azure Database for MySQL using dbForge Studio for MySQL
-description: The article demonstrates how to connect to Azure Database for MySQL Server via dbForge Studio for MySQL.
+ Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL
+description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL.
Last updated 03/03/2021
+# Migrate data to Azure Database for MySQL with dbForge Studio for MySQL
-# Connect to Azure Database for MySQL using dbForge Studio for MySQL
+Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With these tools, database transfers can be configured, saved, edited, automated, and scheduled.
-To connect to Azure Database for MySQL using [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/):
+To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/).
-1. On the Database menu, select New Connection.
+## Connect to Azure Database for MySQL
-2. Provide a host name and login credentials.
+1. In dbForge Studio for MySQL, select **New Connection** from the **Database** menu.
-3. Select the Test Connection button to check the configuration.
+1. Provide a host name and sign-in credentials.
+1. Select **Test Connection** to check the configuration.
-## Migrate a database using the Backup and Restore functionality
-The Studio allows migrating databases to Azure in many ways, the choice of which depends solely on your needs. If you need to move the entire database, it's best to use the Backup and Restore functionality. In this example, we migrate the *sakila* database that resides on MySQL server to Azure Database for MySQL. The logic behind the migration process using the Backup and Restore functionality of dbForge Studio for MySQL is to create a backup of the MySQL database and then restore it in Azure Database for MySQL.
+## Migrate with the Backup and Restore functionality
+
+You can choose from many options when using dbForge Studio for MySQL to migrate databases to Azure. If you need to move the entire database, it's best to use the **Backup and Restore** functionality.
+
+In this example, we migrate the *sakila* database from MySQL server to Azure Database for MySQL. The logic behind using the **Backup and Restore** functionality is to create a backup of the MySQL database and then restore it in Azure Database for MySQL.
### Back up the database
-1. On the Database menu, point to Back up and Restore, and then select Backup Database. The Database Backup Wizard appears.
+1. In dbForge Studio for MySQL, select **Backup Database** from the **Backup and Restore** menu. The **Database Backup Wizard** appears.
-2. On the Backup content tab of the Database Backup Wizard, select database objects you want to back up.
+1. On the **Backup content** tab of the **Database Backup Wizard**, select database objects you want to back up.
-3. On the Options tab, configure the backup process to fit your requirements.
+1. On the **Options** tab, configure the backup process to fit your requirements.
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Back up Wizard options":::
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png" alt-text="Screenshot showing the options pane of the Backup wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/back-up-wizard-options.png":::
-4. Next, specify errors processing behavior and logging options.
+1. Select **Next**, and then specify error processing behavior and logging options.
-5. Select Backup.
+1. Select **Backup**.
### Restore the database
-1. Connect to Azure for Database for MySQL as described above.
+1. In dbForge Studio for MySQL, connect to Azure Database for MySQL. [Refer to the instructions](#connect-to-azure-database-for-mysql).
+
+1. Select **Restore Database** from the **Backup and Restore** menu. The **Database Restore Wizard** appears.
+
+1. In the **Database Restore Wizard**, select a file with a database backup.
-2. Right-click the Database Explorer body, point to Back up and Restore, and then select Restore Database.
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Screenshot showing the Restore step of the Database Restore wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png":::
-3. In the Database Restore Wizard that opens, select a file with a database backup.
+1. Select **Restore**.
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/restore-step-1.png" alt-text="Restore step":::
+1. Check the result.
-4. Select Restore.
+## Migrate with the Copy Databases functionality
-5. Check the result.
+The **Copy Databases** functionality in dbForge Studio for MySQL is similar to **Backup and Restore**, except that it doesn't require two steps to migrate a database. It also lets you transfer two or more databases at once.
-## Migrate a database using the Copy Databases functionality
+>[!NOTE]
+> The **Copy Databases** functionality is only available in the Enterprise edition of dbForge Studio for MySQL.
-The Copy Databases functionality is similar to the Backup and Restore, except that with it you do not need two steps to migrate a database. And what is more, the feature allows transferring two or more databases in one go. The Copy Databases functionality is only available in the Enterprise edition of dbForge Studio for MySQL.
In this example, we migrate the *world_x* database from MySQL server to Azure Database for MySQL.+ To migrate a database using the Copy Databases functionality:
-1. On the Database menu, select Copy Databases.
+1. In dbForge Studio for MySQL, select **Copy Databases** from the **Database** menu.
+
+1. On the **Copy Databases** tab, specify the source and target connection. Also select the databases to be migrated.
+
+ We enter the Azure MySQL connection and select the *world_x* database. Select the green arrow to start the process.
-2. In the Copy Databases tab that appears, specify the source and target connection and select the database(s) to be migrated. We enter Azure MySQL connection and select the *world_x* database. Select the green arrow to initiate the process.
+1. Check the result.
-3. Check the result.
+You'll see that the *world_x* database has successfully appeared in Azure MySQL.
-As a result of our database migration efforts, the *world_x* database has successfully appeared in Azure MySQL.
+## Migrate a database with schema and data comparison
-## Migrate a database using Schema and Data Compare tools
+You can choose from many options when using dbForge Studio for MySQL to migrate databases, schemas, and/or data to Azure. If you need to move only selected tables from a MySQL database to Azure, it's best to use the **Schema Comparison** and the **Data Comparison** functionality.
-dbForge Studio for MySQL incorporates a few tools that allow migrating MySQL databases, MySQL schemas and\or data to Azure. The choice of functionality depends on your needs and the requirements of your project. If you need to selectively move a database, that is, migrate certain MySQL tables to Azure, it's best to use Schema and Data Compare functionality.
-In this example, we migrate the *world* database that resides on MySQL server to Azure Database for MySQL. The logic behind the migration process using Schema and Data Compare functionality of dbForge Studio for MySQL is to create an empty database in Azure Database for MySQL, synchronize it with the required MySQL database first using Schema Compare tool and then using Data Compare tool. This way MySQL schemas and data are accurately moved to Azure.
+In this example, we migrate the *world* database from MySQL server to Azure Database for MySQL.
-### Step 1. Connect to Azure Database for MySQL and create an empty database
-### Step 2. Schema synchronization
+The logic behind this approach is to create an empty database in Azure Database for MySQL and synchronize it with the source MySQL database. We first use the **Schema Comparison** tool, and next we use the **Data Comparison** functionality. These steps ensure that the MySQL schemas and data are accurately moved to Azure.
-1. On the Comparison menu, select New Schema Comparison.
-The New Schema Comparison Wizard appears.
+To complete this exercise, you'll first need to [connect to Azure Database for MySQL](#connect-to-azure-database-for-mysql) and create an empty database.
-2. Select the Source and the Target, then specify the schema comparison options. Select Compare.
+### Schema synchronization
-3. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the Schema Synchronization Wizard.
+1. On the **Comparison** menu, select **New Schema Comparison**. The **New Schema Comparison Wizard** appears.
-4. Walk through the steps of the wizard configuring synchronization. Select Synchronize to deploy the changes.
+1. Choose your source and target, and then specify the schema comparison options. Select **Compare**.
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Schema sync wizard":::
+1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Schema Synchronization Wizard**.
-### Step 3. Data Comparison
+1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
-1. On the Comparison menu, select New Data Comparison. The New Data Comparison Wizard appears.
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png" alt-text="Screenshot showing the schema synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/schema-sync-wizard.png":::
-2. Select the Source and the Target, then specify the data comparison options and change mappings if necessary. Select Compare.
+### Data Comparison
-3. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the Data Synchronization Wizard.
+1. On the **Comparison** menu, select **New Data Comparison**. The **New Data Comparison Wizard** appears.
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Data comp result":::
+1. Choose your source and target, and then specify the data comparison options. Change mappings if necessary, and then select **Compare**.
-4. Walk through the steps of the wizard configuring synchronization. Select Synchronize to deploy the changes.
+1. In the comparison results grid that appears, select objects for synchronization. Select the green arrow button to open the **Data Synchronization Wizard**.
-5. Check the result.
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png" alt-text="Screenshot showing the results of the data comparison." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-comp-result.png":::
- :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Data sync result":::
+1. Walk through the steps of the wizard to configure synchronization. Select **Synchronize** to deploy the changes.
-## Summary
+1. Check the result.
-Nowadays more businesses move their databases to Azure Database for MySQL, as this database service is easy to set up, manage, and scale. That migration doesn't need to be painful. dbForge Studio for MySQL boasts immaculate migration tools that can significantly facilitate the process. The Studio allows database transfer to be easily configured, saved, edited, automated, and scheduled.
+ :::image type="content" source="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png" alt-text="Screenshot showing the results of the Data Synchronization wizard." lightbox="media/concepts-migrate-dbforge-studio-for-mysql/data-sync-result.png":::
## Next steps - [MySQL overview](overview.md)
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/quickstart-create-server-up-azure-cli.md
az account set --subscription <subscription id>
## Create an Azure Database for MySQL server
-To use the commands, install the [db-up](/cli/azure/) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
+To use the commands, install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
```azurecli az extension add --name db-up
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-backup-restore.md
The physical database files are first restored from the snapshot backups to the
Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect. You will be able to restore to the last transaction due to continuous backup of transaction logs.
-You can choose between an earliest restore point and a custom restore point.
+You can choose between the latest restore point and a custom restore point.
-- **Earliest restore point**: Depending on your retention period, it will be the earliest time that you can restore. The oldest backup time will be auto-selected and is displayed on the portal. This is useful if you want to investigate or do some testing starting that point in time.
+- **Latest restore point (now)**: This is the default option, which allows you to restore the server to the latest point in time.
- **Custom restore point**: This option allows you to choose any point in time within the retention period defined for this flexible server. By default, the latest time in UTC is auto-selected, which is useful if you want to restore to the last committed transaction for test purposes. You can optionally choose another date and time.
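For illustration only, a point-in-time restore can also be scripted with the Azure CLI along the following lines. This is a sketch that assumes the preview `az postgres flexible-server restore` command and its `--restore-time` parameter; the server and resource group names are placeholders, so verify the exact syntax with `az postgres flexible-server restore --help` before relying on it.

```azurecli
az postgres flexible-server restore \
  --resource-group myResourceGroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-time "2021-04-22T13:10:00+00:00"
```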
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-extensions.md
Previously updated : 03/17/2021 Last updated : 04/22/2021 # PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_visibility](https://www.postgresql.org/docs/12/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info| > |[pgaudit](https://www.pgaudit.org/) | 1.4 | provides auditing functionality| > |[pgcrypto](https://www.postgresql.org/docs/12/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pglogical](https://github.com/2ndQuadrant/pglogical) | 2.3.2 | PostgreSQL logical replication|
> |[pgrowlocks](https://www.postgresql.org/docs/12/pgrowlocks.html) | 1.2 | show row-level locking information| > |[pgstattuple](https://www.postgresql.org/docs/12/pgstattuple.html) | 1.5 | show tuple-level statistics| > |[plpgsql](https://www.postgresql.org/docs/12/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_visibility](https://www.postgresql.org/docs/11/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info| > |[pgaudit](https://www.pgaudit.org/) | 1.3.1 | provides auditing functionality| > |[pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pglogical](https://github.com/2ndQuadrant/pglogical) | 2.3.2 | PostgreSQL logical replication|
> |[pgrowlocks](https://www.postgresql.org/docs/11/pgrowlocks.html) | 1.2 | show row-level locking information| > |[pgstattuple](https://www.postgresql.org/docs/11/pgstattuple.html) | 1.5 | show tuple-level statistics| > |[plpgsql](https://www.postgresql.org/docs/11/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-logical.md
Previously updated : 09/23/2020 Last updated : 04/22/2021 # Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server
Last updated 09/23/2020
> [!IMPORTANT] > Azure Database for PostgreSQL - Flexible Server is in preview
-PostgreSQL's logical replication and logical decoding features are supported in Azure Database for PostgreSQL - Flexible Server, for Postgres version 11.
+Azure Database for PostgreSQL - Flexible Server supports the following logical data extraction and replication methodologies:
+1. **Logical replication**
+ 1. Using PostgreSQL [native logical replication](https://www.postgresql.org/docs/12/logical-replication.html) to replicate data objects. Logical replication allows fine-grained control over the data replication, including table-level data replication.
+ 2. Using [pglogical](https://github.com/2ndQuadrant/pglogical) extension that provides logical streaming replication and additional capabilities such as copying initial schema of the database, support for TRUNCATE, ability to replicate DDL etc.
+2. **Logical decoding**, which is implemented by [decoding](https://www.postgresql.org/docs/12/logicaldecoding-explanation.html) the content of the write-ahead log (WAL).
## Comparing logical replication and logical decoding Logical replication and logical decoding have several similarities. They both
Logical replication
Logical decoding * extracts changes across all tables in a database
-* cannot directly send data between PostgreSQL instances
-
+* cannot directly send data between PostgreSQL instances.
## Pre-requisites for logical replication and logical decoding
Logical decoding
ALTER ROLE <adminname> WITH REPLICATION; ``` - ## Using logical replication and logical decoding
-### Logical replication
+### Native logical replication
Logical replication uses the terms 'publisher' and 'subscriber'. * The publisher is the PostgreSQL database you are sending data **from**. * The subscriber is the PostgreSQL database you are sending data **to**.
You can add more rows to the publisher's table and view the changes on the subsc
Visit the PostgreSQL documentation to understand more about [logical replication](https://www.postgresql.org/docs/current/logical-replication.html). +
+### pglogical extension
+
+Here is an example of configuring pglogical on the provider database server and the subscriber. Refer to the pglogical extension documentation for more details.
+
+1. Install pglogical extension in both the provider and the subscriber database servers.
+ ```SQL
+ CREATE EXTENSION pglogical;
+ ```
+2. At the provider database server, create the provider node.
+ ```SQL
+ select pglogical.create_node( node_name := 'provider1', dsn := ' host=myProviderDB.postgres.database.azure.com port=5432 dbname=myDB');
+ ```
+3. Add tables in testUser schema to the default replication set.
+ ```SQL
+ SELECT pglogical.replication_set_add_all_tables('default', ARRAY['testUser']);
+ ```
+4. At the subscriber server, create a subscriber node.
+ ```SQL
+ select pglogical.create_node( node_name := 'subscriber1', dsn := ' host=mySubscriberDB.postgres.database.azure.com port=5432 dbname=myDB');
+ ```
+5. Create a subscription to start the synchronization and replication process.
+ ```SQL
+ select pglogical.create_subscription( subscription_name := 'subscription1', provider_dsn := ' host=myProviderDB.postgres.database.azure.com port=5432 dbname=myDB');
+ ```
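To verify that the subscription is active, you can check its status on the subscriber. This is a hedged example: `pglogical.show_subscription_status()` is a function provided by the pglogical extension itself, as described in its documentation, not something specific to Azure.

```SQL
select subscription_name, status from pglogical.show_subscription_status();
```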
### Logical decoding Logical decoding can be consumed via the streaming protocol or SQL interface.
SELECT * FROM pg_replication_slots;
[Set alerts](howto-alert-on-metrics.md) on the **Maximum Used Transaction IDs** and **Storage Used** flexible server metrics to notify you when the values increase past normal thresholds. ## Limitations
-* **Read replicas** - Azure Database for PostgreSQL read replicas are not currently supported for flexible servers.
+* **Logical replication** limitations apply as documented [here](https://www.postgresql.org/docs/12/logical-replication-restrictions.html).
+* **Read replicas** - Azure Database for PostgreSQL read replicas are not currently supported with flexible servers.
* **Slots and HA failover** - Logical replication slots on the primary server are not available on the standby server in your secondary AZ. This applies to you if your server uses the zone-redundant high availability option. In the event of a failover to the standby server, logical replication slots will not be available on the standby. ## Next steps
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
Previously updated : 02/21/2021 Last updated : 04/22/2021 # Networking overview - Azure Database for PostgreSQL - Flexible Server
You have two networking options for your Azure Database for PostgreSQL - Flexibl
> [!NOTE] > Your networking option cannot be changed after the server is created.
-* **Private access (VNet Integration)** ΓÇô You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
+* **Private access (VNet integration)** - You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
Choose the VNet Integration option if you want the following capabilities: * Connect from Azure resources in the same virtual network to your flexible server using private IP addresses
The following characteristics apply whether you choose to use the private access
* Both options control access at the server-level, not at the database- or table-level. You would use PostgreSQL's role properties to control database, table, and other object access.
-## Private access (VNet Integration)
+## Private access (VNet integration)
Private access with virtual network (vnet) integration provides private and secure communication for your PostgreSQL flexible server. +
+In the diagram above:
+1. Flexible servers are injected into a delegated subnet (10.0.1.0/24) of the virtual network **VNet-1**.
+2. Applications that are deployed on different subnets within the same VNet can access the flexible servers directly.
+3. Applications that are deployed in a different virtual network, **VNet-2**, do not have direct access to the flexible servers. You have to link the private DNS zone to that VNet (see [Private DNS zone and VNET peering](#private-dns-zone-and-vnet-peering)) before they can access the flexible server.
+
### Virtual network concepts Here are some concepts to be familiar with when using virtual networks with PostgreSQL flexible servers.
Here are some concepts to be familiar with when using virtual networks with Post
Your virtual network must be in the same Azure region as your flexible server. - * **Delegated subnet** - A virtual network contains subnets (sub-networks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network. Your PostgreSQL flexible server must be in a subnet that is **delegated** for PostgreSQL flexible server use only. This delegation means that only Azure Database for PostgreSQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as Microsoft.DBforPostgreSQL/flexibleServers.
  Add `Microsoft.Storage` to the service endpoints for the subnet delegated to flexible servers. (A CLI sketch for creating such a subnet appears after this list.)
+* **Network security groups (NSG)** -
+ Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. See [network security group overview](../../virtual-network/network-security-groups-overview.md) documentation for more information.
+
+* **Private DNS integration** -
+  Azure private DNS zone integration allows you to resolve the private DNS zone within the current VNet or any in-region peered VNet where the private DNS zone is linked. See the [private DNS zone documentation](https://docs.microsoft.com/azure/dns/private-dns-overview) for more details.
+
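The delegated subnet and `Microsoft.Storage` service endpoint requirements described above can be expressed with the Azure CLI roughly as follows. This is a hedged sketch: the virtual network, subnet, address range, and resource group names are placeholders, and it assumes the standard `az network vnet subnet create` flags.

```azurecli
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myFlexibleServerSubnet \
  --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.DBforPostgreSQL/flexibleServers \
  --service-endpoints Microsoft.Storage
```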
+Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+
+> [!NOTE]
+> If you are using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+
+### Private DNS zone and VNET peering
-* **Network security groups (NSG)**
- Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. Review the [network security group overview](../../virtual-network/network-security-groups-overview.md) for more information.
+Private DNS zone settings and VNET peering are independent of each other.
+
+* By default, a new private DNS zone is auto-provisioned per server using the server name provided. However, if you want to set up your own private DNS zone to use with the flexible server, see the [private DNS overview](https://docs.microsoft.com/azure/dns/private-dns-overview) documentation.
+* If you want to connect to the flexible server from a client that is provisioned in another VNet, you have to link the private DNS zone with that VNet. See the [how to link the virtual network](https://docs.microsoft.com/azure/dns/private-dns-getstarted-portal#link-the-virtual-network) documentation.
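A hedged Azure CLI sketch of that linking step follows. The zone name shown is a placeholder for the private DNS zone created for your flexible server, and `VNet-2` stands for the client's virtual network; the flags assume the standard `az network private-dns link vnet create` command.

```azurecli
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name mydemoserver.private.postgres.database.azure.com \
  --name link-to-vnet-2 \
  --virtual-network VNet-2 \
  --registration-enabled false
```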
### Unsupported virtual network scenarios
Here are some concepts to be familiar with when using virtual networks with Post
* Subnet size (address spaces) cannot be increased once resources exist in the subnet * Peering VNets across regions is not supported
-Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
-
-> [!NOTE]
-> If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
## Public access (allowed IP addresses) Characteristics of the public access method include:
If a fixed outgoing IP address isn't available for your Azure service, you can c
### Troubleshooting public access issues Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect:
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
+* **Changes to the allowlist have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
* **Authentication failed:** If a user does not have permissions on the Azure Database for PostgreSQL server or the password used is incorrect, the connection to the Azure Database for PostgreSQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server. Each client must still provide the necessary security credentials.
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-pgbouncer.md
+
+ Title: PgBouncer - Azure Database for PostgreSQL - Flexible Server
+description: This article provides an overview of the built-in PgBouncer connection pooler.
++++ Last updated : 04/20/2021++
+# PgBouncer in Azure Database for PostgreSQL - Flexible Server
+
+> [!IMPORTANT]
+> Azure Database for PostgreSQL - Flexible Server is in preview
+
+Azure Database for PostgreSQL - Flexible Server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is an optional service that can be enabled on a per-database-server basis and is supported with both public and private access. PgBouncer runs in the same virtual machine as the Postgres database server. Postgres uses a process-based model for connections, which makes it expensive to maintain many idle connections. So, Postgres itself runs into resource constraints once the server handles more than a few thousand connections. The primary benefit of PgBouncer is improved handling of idle and short-lived connections at the database server.
+
+PgBouncer uses a more lightweight model that utilizes asynchronous I/O and uses actual Postgres connections only when needed, that is, when inside an open transaction or when a query is active. This model can support thousands of client connections more easily and allows scaling up to 10,000 connections with low overhead.
+
+When enabled, PgBouncer runs on port 6432 on your database server. You can change your application's database connection configuration to use the same host name, but change the port to 6432 to start using PgBouncer and benefit from improved idle connection scaling.
+
+> [!Note]
+> PgBouncer is supported only on General Purpose and Memory Optimized compute tiers.
+
+## Enabling and configuring PgBouncer
+
+To enable PgBouncer, navigate to the **Server Parameters** blade in the Azure portal, search for "PgBouncer", and change the `pgbouncer.enabled` setting to `true`. There is no need to restart the server. However, to set other PgBouncer parameters, see the limitations section.
+
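If you prefer the Azure CLI to the portal, a parameter change along the following lines has the same effect. This is a sketch that assumes the `az postgres flexible-server parameter set` command; the resource group and server names are placeholders.

```azurecli
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name pgbouncer.enabled \
  --value true
```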
+You can configure PgBouncer settings with these parameters:
+
+| Parameter Name | Description | Default |
+|-|--|-|
+| pgbouncer.default_pool_size | Set this parameter value to the number of connections per user/database pair | 50 |
+| pgbouncer.max_client_conn | Set this parameter value to the highest number of client connections to PgBouncer that you want to support | 5000 |
+| pgbouncer.pool_mode | Set this parameter value to TRANSACTION for transaction pooling (which is the recommended setting for most workloads). | TRANSACTION |
+| pgbouncer.min_pool_size | Add more server connections to pool if below this number. | 0 (Disabled) |
+| pgbouncer.stats_users | Optional. Set this parameter value to the name of an existing user, to be able to log in to the special PgBouncer statistics database (named "PgBouncer") | |
+
+> [!Note]
+> Upgrading of PgBouncer will be managed by Azure.
+
+## Switching your application to use PgBouncer
+
+In order to start using PgBouncer, follow these steps:
+1. Connect to your database server, but use port **6432** instead of the regular port 5432 -- verify that this connection works
+```azurecli-interactive
+psql "host=myPgServer.postgres.database.azure.com port=6432 dbname=postgres user=myUser password=myPassword sslmode=require"
+```
+2. Test your application in a QA environment against PgBouncer, to make sure you don't have any compatibility problems. The PgBouncer project provides a compatibility matrix, and we recommend using **transaction pooling** for most users: https://www.PgBouncer.org/features.html#sql-feature-map-for-pooling-modes.
+3. Change your production application to connect to port **6432** instead of **5432**, and monitor for any application side errors that may point to any compatibility issues.
+
+> [!Note]
+> Even if you have enabled PgBouncer, you can still connect to the database server directly over port 5432 using the same host name.
+
+## PgBouncer in Zone-redundant high availability
+
+In servers configured with zone-redundant high availability, PgBouncer runs on the primary server. You can connect to the primary server's PgBouncer over port 6432. After a failover, PgBouncer is restarted on the newly promoted standby, which becomes the new primary server. So your application connection string remains the same after failover.
+
+## Using PgBouncer with other connection pools
+
+In some cases, you may already have an application-side connection pool, or have PgBouncer set up on your application side, such as an AKS sidecar. In these cases, it can still be useful to utilize the built-in PgBouncer, as it provides idle connection scaling benefits.
+
+Utilizing an application side pool together with PgBouncer on the database server can be beneficial. Here, the application side pool brings the benefit of reduced initial connection latency (as the initial roundtrip to initialize the connection is much faster), and the database-side PgBouncer provides idle connection scaling.
+
+## Limitations
+
+* PgBouncer is currently not supported with Burstable server compute tier.
+* If you change the compute tier from General Purpose or Memory Optimized to Burstable tier, you will lose the PgBouncer capability.
+* Whenever the server is restarted during scale operations, HA failover, or a restart, PgBouncer is also restarted along with the server virtual machine, so existing connections have to be re-established.
+* Due to a known issue, the portal does not show all PgBouncer parameters. Once you enable PgBouncer and save the parameter, you have to leave the **Server parameters** page (for example, select **Overview**) and then return to it.
+
+## Next steps
+
+- Learn about [networking concepts](./concepts-networking.md)
+- Flexible server [overview](./overview.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-supported-versions.md
Previously updated : 03/03/2021 Last updated : 04/22/2021 # Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 12
-The current minor release is 12.5. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-4.html) to learn more about improvements and fixes in this minor release.
+The current minor release is **12.6**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.6/) to learn more about improvements and fixes in this minor release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is 11.10. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-9.html) to learn more about improvements and fixes in this minor release.
+The current minor release is **11.11**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.11/) to learn more about improvements and fixes in this minor release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 10 and older
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
Previously updated : 09/22/2020 Last updated : 04/22/2021 # Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure portal
Azure Database for PostgreSQL - Flexible Server supports two types of mutually e
* Public access (allowed IP addresses) * Private access (VNet Integration)
-In this article, we will focus on creation of PostgreSQL server with **Private access (VNet Integration)** using Azure portal. With Private access (VNet Integration), you can deploy your flexible server into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the PostgreSQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
+In this article, we focus on creating a PostgreSQL flexible server with **Private access (VNet integration)** by using the Azure portal. With Private access (VNet Integration), you can deploy your flexible server into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the PostgreSQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
You can deploy your flexible server into a virtual network and subnet during server creation. After the flexible server is deployed, you cannot move it into another virtual network, subnet or to *Public access (allowed IP addresses)*.
To create a flexible server in a virtual network, you need:
4. On the pull-out screen, under **Service endpoint**, choose `Microsoft.storage` from the drop-down. 5. Save the changes. -
+- If you want to set up your own private DNS zone to use with the flexible server, see the [private DNS overview](https://docs.microsoft.com/azure/dns/private-dns-overview) documentation for more details.
+
## Create Azure Database for PostgreSQL - Flexible Server in an already existing virtual network 1. Select **Create a resource** (+) in the upper-left corner of the portal.
To create a flexible server in a virtual network, you need:
4. Fill out the **Basics** form. 5. Go to the **Networking** tab to configure how you want to connect to your server. 6. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** and select the already existing *virtual network* and *Subnet* created as part of prerequisites above.
-7. Select **Review + create** to review your flexible server configuration.
-8. Select **Create** to provision the server. Provisioning can take a few minutes.
+7. Under **Private DNS Integration**, by default, a new private DNS zone will be created using the server name. Optionally, you can choose the *subscription* and the *Private DNS zone* from the drop-down list.
+8. Select **Review + create** to review your flexible server configuration.
+9. Select **Create** to provision the server. Provisioning can take a few minutes.
>[!Note] > After the flexible server is deployed to a virtual network and subnet, you cannot move it to Public access (allowed IP addresses).+
+>[!Note]
+> If you want to connect to the flexible server from a client that is provisioned in another VNet, you have to link the private DNS zone with that VNet. See the [link the virtual network](https://docs.microsoft.com/azure/dns/private-dns-getstarted-portal#link-the-virtual-network) documentation for how to do it.
+ ## Next steps - [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md). - Learn more about [networking in Azure Database for PostgreSQL - Flexible Server](./concepts-networking.md)
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Previously updated : 09/22/2020 Last updated : 04/22/2021 # Point-in-time restore of a Flexible Server
Last updated 09/22/2020
> [!IMPORTANT] > Azure Database for PostgreSQL - Flexible Server is in preview
-This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups. You can perform either to an earliest restore point or a custom restore point within your retention period.
+This article provides a step-by-step procedure for performing point-in-time recoveries in flexible server using backups. You can restore either to the latest restore point or to a custom restore point within your retention period.
## Pre-requisites
To complete this how-to guide, you need:
- You must have an Azure Database for PostgreSQL - Flexible Server. The same procedure is also applicable for flexible server configured with zone redundancy.
-## Restoring to the earliest restore point
+## Restoring to the latest restore point
-Follow these steps to restore your flexible server using an earliest
-existing backup.
+Follow these steps to restore your flexible server using an existing backup.
1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
existing backup.
:::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
-3. Restore page will be shown with an option to choose between Earliest restore point and Custom restore point.
+3. The Restore page is shown with an option to choose between the latest restore point and a custom restore point.
-4. Select **Earliest restore point** and provide a new server name in the **Restore to new server** field. The earliest timestamp that you can restore to is displayed.
+4. Select **Latest restore point** and provide a new server name in the **Restore to new server** field. You can optionally choose the Availability zone to restore to.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-earliest.png" alt-text="Earliest restore time":::
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-latest.png" alt-text="Latest restore time":::
5. Click **OK**.
existing backup.
## Restoring to a custom restore point
-Follow these steps to restore your flexible server using an earliest
-existing backup.
+Follow these steps to restore your flexible server using an existing backup.
1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. 2. From the overview page, click **Restore**. :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
-3. Restore page will be shown with an option to choose between Earliest restore point and Custom restore point.
+3. The Restore page is shown with an option to choose between the latest restore point and a custom restore point.
4. Choose **Custom restore point**.
-5. Select date and time and provide a new server name in the **Restore to new server** field.
+5. Select the date and time, provide a new server name in the **Restore to new server** field, and optionally choose the **Availability zone** to restore to.
6. Click **OK**.
-7. A notification will be shown that the restore operation has been
- initiated.
+7. A notification will be shown that the restore operation has been initiated.
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/overview.md
Previously updated : 04/05/2021 Last updated : 04/22/2021
-# Azure Database for PostgreSQL - Flexible Server
+# Overview - Azure Database for PostgreSQL - Flexible Server
[Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
Flexible servers allows full private access to the servers using Azure virtual n
The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resources utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads, and configure your server for best performance.
+## Built-in PgBouncer
+
+The flexible server comes with a built-in PgBouncer, a connection pooler. You can optionally enable it and connect your applications to your database server via PgBouncer using the same host name and port 6432.
## Azure regions One of the advantages of running your workload in Azure is its global reach. The flexible server is available today in the following Azure regions:
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/release-notes.md
+
+ Title: Azure Database for PostgreSQL - Flexible Server Release notes
+description: Release notes of Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 04/20/2021++
+# Release notes - Azure Database for PostgreSQL - Flexible Server
+
+This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Azure Database for PostgreSQL - Flexible Server.
+
+> [!IMPORTANT]
+> Azure Database for PostgreSQL - Flexible Server is in preview
+
+## April-2021-Release-1.0
+
+* Support for the [latest minor versions](./concepts-supported-versions.md) 12.6 and 11.11 with new server creation. Your existing servers will be automatically upgraded to the latest minor versions in your subsequent scheduled maintenance window.
+* Support for Virtual Network (VNET) [private DNS zone](./concepts-networking.md#private-access-vnet-integration).
+* Support to choose the Availability zone during Point-in-time recovery operation.
+* Support for new [regions](./overview.md#azure-regions) including Australia East, Canada Central, and France Central.
+* Support for [built-in PgBouncer](./concepts-pgbouncer.md) connection pooler.
+* Support for [pglogical](https://github.com/2ndQuadrant/pglogical) extension version 2.3.2.
+* [Intelligent performance](concepts-query-store.md) in public preview.
+* Several bug fixes, stability and performance improvements.
+
+## Contacts
+
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address is not a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597976-azure-database-for-postgresql).
+
+
+## Next steps
+
+Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-saps4hana-source.md
The SAP S/4HANA source supports **Full scan** to extract metadata from a SAP S/4
:::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
-5. The connector reads metadata from SAP using the Java Connector (JCo)
+5. The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html)
3.0 API. Hence, make sure the Java Connector is available on the virtual machine where the self-hosted integration runtime is installed. Make sure that you are using the correct JCo distribution for your
To manage or delete a scan, do the following:
## Next steps - [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)-- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
search Cognitive Search Tutorial Aml Designer Custom Skill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-aml-designer-custom-skill.md
+
+ Title: "Example: Create and deploy a custom skill with Azure Machine Learning designer"
+
+description: This example demonstrates how to use Azure Machine Learning designer to build and deploy a custom AML skill for Azure Cognitive Search's AI enrichment pipeline.
+++++ Last updated : 04/16/2021++
+# Example: Build and deploy a custom skill with Azure Machine Learning designer
+
+[Azure Machine Learning designer](https://docs.microsoft.com/azure/machine-learning/concept-designer) is an easy-to-use interactive canvas for creating machine learning models for tasks like regression and classification. Invoking the model created by the designer in a Cognitive Search enrichment pipeline requires a few additional steps. In this example, you'll create a simple regression model that predicts the price of an automobile and invoke the inferencing endpoint as an AML skill.
+
+Follow the [Regression - Automobile Price Prediction (Advanced)](https://github.com/Azure/MachineLearningDesigner/blob/master/articles/samples/regression-automobile-price-prediction-compare-algorithms.md) tutorial in the [examples pipelines & datasets](https://docs.microsoft.com/azure/machine-learning/samples-designer) documentation to create a model that predicts the price of an automobile from its features.
+
+> [!IMPORTANT]
+> Deploying the model by following the real-time inferencing process will result in a valid endpoint, but not one that you can use with the AML skill in Cognitive Search.
+
+## Register model and download assets
+
+Once you have trained a model, [register the trained model](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-model-designer) and follow the steps to download all the files in the `trained_model_outputs` folder, or download only the `score.py` and `conda_env.yml` files from the model's artifacts page. You'll edit the scoring script before the model is deployed as a real-time inferencing endpoint.
++
+## Edit the scoring script for use with Cognitive Search
+
+Cognitive Search enrichment pipelines work on a single document and generate a request that contains the inputs for a single prediction. The downloaded `score.py` accepts a list of records and returns a list of predictions as a serialized JSON string. You'll make two changes to `score.py`:
+
+* Edit the script to work with a single input record, not a list
+* Edit the script to return a JSON object with a single property, the predicted price.
+
+Open the downloaded `score.py` and edit the `run(data)` function. The function is currently set up to expect the following input, as described in the model's `_samples.json` file.
+
+```json
+[
+ {
+ "symboling": 2,
+ "make": "mitsubishi",
+ "fuel-type": "gas",
+ "aspiration": "std",
+ "num-of-doors": "two",
+ "body-style": "hatchback",
+ "drive-wheels": "fwd",
+ "engine-location": "front",
+ "wheel-base": 93.7,
+ "length": 157.3,
+ "width": 64.4,
+ "height": 50.8,
+ "curb-weight": 1944,
+ "engine-type": "ohc",
+ "num-of-cylinders": "four",
+ "engine-size": 92,
+ "fuel-system": "2bbl",
+ "bore": 2.97,
+ "stroke": 3.23,
+ "compression-ratio": 9.4,
+ "horsepower": 68.0,
+ "peak-rpm": 5500.0,
+ "city-mpg": 31,
+ "highway-mpg": 38,
+ "price": 6189.0
+ },
+ {
+ "symboling": 0,
+ "make": "toyota",
+ "fuel-type": "gas",
+ "aspiration": "std",
+ "num-of-doors": "four",
+ "body-style": "wagon",
+ "drive-wheels": "fwd",
+ "engine-location": "front",
+ "wheel-base": 95.7,
+ "length": 169.7,
+ "width": 63.6,
+ "height": 59.1,
+ "curb-weight": 2280,
+ "engine-type": "ohc",
+ "num-of-cylinders": "four",
+ "engine-size": 92,
+ "fuel-system": "2bbl",
+ "bore": 3.05,
+ "stroke": 3.03,
+ "compression-ratio": 9.0,
+ "horsepower": 62.0,
+ "peak-rpm": 4800.0,
+ "city-mpg": 31,
+ "highway-mpg": 37,
+ "price": 6918.0
+ },
+ {
+ "symboling": 1,
+ "make": "honda",
+ "fuel-type": "gas",
+ "aspiration": "std",
+ "num-of-doors": "two",
+ "body-style": "sedan",
+ "drive-wheels": "fwd",
+ "engine-location": "front",
+ "wheel-base": 96.5,
+ "length": 169.1,
+ "width": 66.0,
+ "height": 51.0,
+ "curb-weight": 2293,
+ "engine-type": "ohc",
+ "num-of-cylinders": "four",
+ "engine-size": 110,
+ "fuel-system": "2bbl",
+ "bore": 3.15,
+ "stroke": 3.58,
+ "compression-ratio": 9.1,
+ "horsepower": 100.0,
+ "peak-rpm": 5500.0,
+ "city-mpg": 25,
+ "highway-mpg": 31,
+ "price": 10345.0
+ }
+]
+```
+
+Your changes ensure that the model can accept the input generated by Cognitive Search during indexing: a single record.
+
+```json
+{
+ "symboling": 2,
+ "make": "mitsubishi",
+ "fuel-type": "gas",
+ "aspiration": "std",
+ "num-of-doors": "two",
+ "body-style": "hatchback",
+ "drive-wheels": "fwd",
+ "engine-location": "front",
+ "wheel-base": 93.7,
+ "length": 157.3,
+ "width": 64.4,
+ "height": 50.8,
+ "curb-weight": 1944,
+ "engine-type": "ohc",
+ "num-of-cylinders": "four",
+ "engine-size": 92,
+ "fuel-system": "2bbl",
+ "bore": 2.97,
+ "stroke": 3.23,
+ "compression-ratio": 9.4,
+ "horsepower": 68.0,
+ "peak-rpm": 5500.0,
+ "city-mpg": 31,
+ "highway-mpg": 38,
+ "price": 6189.0
+}
+```
+
+Replace lines 27 through 30 with:
+```python
+
+ for key, val in data.items():
+ input_entry[key].append(decode_nan(val))
+```
+You'll also need to change the output that the script generates from a string to a JSON object. Edit the return statement (line 37) in the original file to:
+```python
+ output = result.data_frame.values.tolist()
+ return {
+ "predicted_price": output[0][-1]
+ }
+```
+
+Here's the updated `run` function, reflecting the changes in input format and output: it accepts a single record as input and returns a JSON object with the predicted price.
+
+```python
+def run(data):
+ data = json.loads(data)
+ input_entry = defaultdict(list)
+ # data is now a JSON object not a list of JSON objects
+ for key, val in data.items():
+ input_entry[key].append(decode_nan(val))
+
+ data_frame_directory = create_dfd_from_dict(input_entry, schema_data)
+ score_module = ScoreModelModule()
+ result, = score_module.run(
+ learner=model,
+ test_data=DataTable.from_dfd(data_frame_directory),
+ append_or_result_only=True)
+ #return json.dumps({"result": result.data_frame.values.tolist()})
+ output = result.data_frame.values.tolist()
+    # return the last column of the first row of the dataframe
+ return {
+ "predicted_price": output[0][-1]
+ }
+```
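+
+Before deploying, you may want to sanity-check the edited script locally. The following is a sketch only: it assumes the downloaded `score.py` follows the usual `init()`/`run()` pattern, that the conda environment from the downloaded model artifacts is active, and that `init()` can locate the downloaded model files (for example via the `AZUREML_MODEL_DIR` environment variable).
+
+```python
+# Optional: append temporarily to the bottom of the edited score.py to test run()
+# locally. Assumes init() loads the model and schema as it would in the container.
+if __name__ == "__main__":
+    init()
+    sample = {
+        "symboling": 2, "make": "mitsubishi", "fuel-type": "gas",
+        "aspiration": "std", "num-of-doors": "two", "body-style": "hatchback",
+        "drive-wheels": "fwd", "engine-location": "front", "wheel-base": 93.7,
+        "length": 157.3, "width": 64.4, "height": 50.8, "curb-weight": 1944,
+        "engine-type": "ohc", "num-of-cylinders": "four", "engine-size": 92,
+        "fuel-system": "2bbl", "bore": 2.97, "stroke": 3.23,
+        "compression-ratio": 9.4, "horsepower": 68.0, "peak-rpm": 5500.0,
+        "city-mpg": 31, "highway-mpg": 38, "price": 6189.0
+    }
+    print(run(json.dumps(sample)))   # expect something like {"predicted_price": ...}
+```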
+## Register and deploy the model
+
+With your changes saved, you can now register the model in the portal. Select **Register model** and provide a valid name. Choose `Other` for Model Framework, `Custom` for Framework Name, and `1.0` for Framework Version. Select the `Upload folder` option and select the folder that contains the updated `score.py` and `conda_env.yaml`.
+
+Select the model and then select the `Deploy` action. The deployment step assumes you have an AKS inferencing cluster provisioned; container instances are currently not supported by the AML skill in Cognitive Search.
+1. Provide a valid endpoint name
+2. Select the compute type of `Azure Kubernetes Service`
+3. Select the compute name for your inference cluster
+4. Toggle `enable authentication` to on
+5. Select `Key-based authentication` for the type
+6. Select the updated `score.py` for `entry script file`
+7. Select the `conda_env.yaml` for `conda dependencies file`
+8. Select the deploy button to deploy your new endpoint.
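+
+Once the deployment succeeds, you can smoke-test the endpoint before wiring it into Cognitive Search. The snippet below is a sketch, not part of the official steps: the scoring URI and key come from the endpoint's details page, and it assumes you saved the single-record input shown earlier to a local file named `automobile.json` (an arbitrary name chosen for this example).
+
+```python
+# A sketch: POST one automobile record to the AKS endpoint using key-based auth.
+# The scoring URI, key, and local file name are placeholders/assumptions.
+import json
+import requests
+
+scoring_uri = "<scoring URI from the endpoint's details page>"
+key = "<primary key from the endpoint's details page>"
+
+with open("automobile.json") as f:          # the single-record JSON shown earlier
+    record = json.load(f)
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {key}",       # key-based authentication
+}
+resp = requests.post(scoring_uri, data=json.dumps(record), headers=headers)
+print(resp.json())                          # expect {"predicted_price": <number>}
+```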
+
+## Integrate with Cognitive Search
+
+To integrate the newly created endpoint with Cognitive Search:
+1. Add a JSON file containing a single automobile record to a blob container.
+2. Configure an AI enrichment pipeline using the [import data workflow](https://docs.microsoft.com/azure/search/cognitive-search-quickstart-blob). Be sure to select `JSON` as the `parsing mode`.
+3. On the `Add Enrichments` tab, select a single skill, `Extract people names`, as a placeholder.
+4. Add a new field to the index called `predicted_price` of type `Edm.Double`, and set the Retrievable property to true.
+5. Complete the import data process.
+
+### Add the AML Skill to the skillset
+
+From the list of skillsets, select the skillset you created. You will now edit the skillset to replace the people identification skill with the AML skill to predict prices.
+On the Skillset Definition (JSON) tab, select `Azure Machine Learning (AML)` from the skills dropdown. Select the workspace. For the AML skill to discover your endpoint, the workspace and the search service need to be in the same Azure subscription.
+Select the endpoint that you created earlier in the tutorial.
+Validate that the skill is populated with the URI and authentication information as configured when you deployed the endpoint. Copy the skill template and replace the skill in the skillset.
+Edit the skill to:
+1. Set the name to a valid name.
+2. Add a description.
+3. Set `degreeOfParallelism` to 1.
+4. Set the context to `/document`.
+5. Set the inputs to all the required inputs (see the sample skill definition below).
+6. Set the outputs to capture the predicted price returned.
+
+```json
+{
+ "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
+ "name": "AMLdemo",
+ "description": "AML Designer demo",
+ "context": "/document",
+ "uri": "Your AML endpoint",
+ "key": "Your AML endpoint key",
+ "resourceId": null,
+ "region": null,
+ "timeout": "PT30S",
+ "degreeOfParallelism": 1,
+ "inputs": [
+ {
+ "name": "symboling",
+ "source": "/document/symboling"
+ },
+ {
+ "name": "make",
+ "source": "/document/make"
+ },
+ {
+ "name": "fuel-type",
+ "source": "/document/fuel-type"
+ },
+ {
+ "name": "aspiration",
+ "source": "/document/aspiration"
+ },
+ {
+ "name": "num-of-doors",
+ "source": "/document/num-of-doors"
+ },
+ {
+ "name": "body-style",
+ "source": "/document/body-style"
+ },
+ {
+ "name": "drive-wheels",
+ "source": "/document/drive-wheels"
+ },
+ {
+ "name": "engine-location",
+ "source": "/document/engine-location"
+ },
+ {
+ "name": "wheel-base",
+ "source": "/document/wheel-base"
+ },
+ {
+ "name": "length",
+ "source": "/document/length"
+ },
+ {
+ "name": "width",
+ "source": "/document/width"
+ },
+ {
+ "name": "height",
+ "source": "/document/height"
+ },
+ {
+ "name": "curb-weight",
+ "source": "/document/curb-weight"
+ },
+ {
+ "name": "engine-type",
+ "source": "/document/engine-type"
+ },
+ {
+ "name": "num-of-cylinders",
+ "source": "/document/num-of-cylinders"
+ },
+ {
+ "name": "engine-size",
+ "source": "/document/engine-size"
+ },
+ {
+ "name": "fuel-system",
+ "source": "/document/fuel-system"
+ },
+ {
+ "name": "bore",
+ "source": "/document/bore"
+ },
+ {
+ "name": "stroke",
+ "source": "/document/stroke"
+ },
+ {
+ "name": "compression-ratio",
+ "source": "/document/compression-ratio"
+ },
+ {
+ "name": "horsepower",
+ "source": "/document/horsepower"
+ },
+ {
+ "name": "peak-rpm",
+ "source": "/document/peak-rpm"
+ },
+ {
+ "name": "city-mpg",
+ "source": "/document/city-mpg"
+ },
+ {
+ "name": "highway-mpg",
+ "source": "/document/highway-mpg"
+ },
+ {
+ "name": "price",
+ "source": "/document/price"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "predicted_price",
+ "targetName": "predicted_price"
+ }
+ ]
+ }
+```
+### Update the indexer output field mappings
+
+The indexer output field mappings determine what enrichments are saved to the index. Replace the output field mappings section of the indexer with the snippet below:
+
+```json
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/predicted_price",
+ "targetFieldName": "predicted_price"
+ }
+ ]
+```
+
+You can now run your indexer and validate that the `predicted_price` property is populated in the index with the result from your AML skill output.
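+
+For example, a minimal Python sketch (the service name, indexer name, and admin key are placeholders) that triggers an on-demand run with the Run Indexer REST API:
+
+```python
+# A sketch for kicking off the indexer on demand; replace the placeholders
+# with your own service name, indexer name, and admin key.
+import requests
+
+service = "<your-search-service>"
+indexer = "<your-indexer-name>"
+api_key = "<search-admin-key>"
+
+url = f"https://{service}.search.windows.net/indexers/{indexer}/run?api-version=2020-06-30"
+resp = requests.post(url, headers={"api-key": api_key})
+print(resp.status_code)   # 202 means the run request was accepted
+```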
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Review the custom skill web api](./cognitive-search-custom-skill-web-api.md)
+
+> [Learn more about adding custom skills to the enrichment pipeline](./cognitive-search-custom-skill-interface.md)
+
+> [Learn more about the AML skill](./cognitive-search-tutorial-aml-custom-skill.md)
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb-gremlin.md
+
+ Title: Search over Azure Cosmos DB Gremlin API data (preview)
+
+description: Import data from Azure Cosmos DB Gremlin API into a searchable index in Azure Cognitive Search. Indexers automate data ingestion for selected data sources like Azure Cosmos DB.
++++
+ms.devlang: rest-api
++ Last updated : 04/11/2021++
+# How to index data available through Cosmos DB Gremlin API using an indexer (preview)
+
+> [!IMPORTANT]
+> The Cosmos DB Gremlin API indexer is currently in preview. Preview functionality is provided without a service level agreement and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> You can request access to the preview by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
+> For this preview, we recommend using the [REST API version 2020-06-30-Preview](search-api-preview.md). There is currently limited portal support and no .NET SDK support.
+
+> [!WARNING]
+> In order for Azure Cognitive Search to index data in Cosmos DB through the Gremlin API, [Cosmos DB's own indexing](https://docs.microsoft.com/azure/cosmos-db/index-overview) must also be enabled and set to [Consistent](https://docs.microsoft.com/azure/cosmos-db/index-policy#indexing-mode). This is the default configuration for Cosmos DB. Azure Cognitive Search indexing will not work without Cosmos DB indexing already enabled.
+
+[Azure Cosmos DB indexing](https://docs.microsoft.com/azure/cosmos-db/index-overview) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist.
+
+This article shows you how to configure Azure Cognitive Search to index content from Azure Cosmos DB using the Gremlin API. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB using the Gremlin API.
+
+## Get started
+
+You can use the [preview REST API](https://docs.microsoft.com/rest/api/searchservice/index-2020-06-30-preview) to index Azure Cosmos DB data that's available through the Gremlin API by following a three-part workflow common to all indexers in Azure Cognitive Search: create a data source, create an index, create an indexer. In the process below, data extraction from Cosmos DB starts when you submit the Create Indexer request.
+
+By default the Azure Cognitive Search Cosmos DB Gremlin API indexer will make every vertex in your graph a document in the index. Edges will be ignored. Alternatively, you could set the query to only index the edges.
+
+### Step 1 - Assemble inputs for the request
+
+For each request, you must provide the service name and admin key for Azure Cognitive Search (in the POST header). You can use [Postman](search-get-started-postman.md) or any REST API client to send HTTPS requests to Azure Cognitive Search.
+
+Copy and save the following values for use in your request:
+
++ Azure Cognitive Search service name
++ Azure Cognitive Search admin key
++ Cosmos DB Gremlin API connection string
+
+You can find these values in the Azure portal:
+
+1. In the portal pages for Azure Cognitive Search, copy the search service URL from the Overview page.
+
+2. In the left navigation pane, click **Keys** and then copy either the primary or secondary key.
+
+3. Switch to the portal pages for your Cosmos DB account. In the left navigation pane, under **Settings**, click **Keys**. This page provides a URI, two sets of connection strings, and two sets of keys. Copy one of the connection strings to Notepad.
+
+### Step 2 - Create a data source
+
+A **data source** specifies the data to index, credentials, and policies for identifying changes in the data (such as modified or deleted documents inside your collection). The data source is defined as an independent resource so that it can be used by multiple indexers.
+
+To create a data source, formulate a POST request:
+
+```http
+ POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
+ Content-Type: application/json
+ api-key: [Search service admin key]
+
+ {
+ "name": "mycosmosdbgremlindatasource",
+ "type": "cosmosdb",
+ "credentials": {
+ "connectionString": "AccountEndpoint=https://myCosmosDbEndpoint.documents.azure.com;AccountKey=myCosmosDbAuthKey;ApiKind=Gremlin;Database=myCosmosDbDatabaseId"
+ },
+ "container": { "name": "myGraphId", "query": null }
+ }
+```
+
+The body of the request contains the data source definition, which should include the following fields:
+
+| Field | Description |
+||-|
+| **name** | Required. Choose any name to represent your data source object. |
+|**type**| Required. Must be `cosmosdb`. |
+|**credentials** | Required. The **connectionString** must include an AccountEndpoint, AccountKey, ApiKind, and Database. The ApiKind is **Gremlin**.</br></br>For example:<br/>`AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>;ApiKind=Gremlin`<br/><br/>The AccountEndpoint must use the `*.documents.azure.com` endpoint.
+| **container** | Contains the following elements: <br/>**name**: Required. Specify the ID of the graph.<br/>**query**: Optional. The default is `g.V()`. To index the edges, set the query to `g.E()`. |
+| **dataChangeDetectionPolicy** | Incremental progress will be enabled by default using `_ts` as the high water mark column. |
+|**dataDeletionDetectionPolicy** | Optional. See [Indexing Deleted Documents](#DataDeletionDetectionPolicy) section.|
+
+### Step 3 - Create a target search index
+
+[Create a target Azure Cognitive Search index](/rest/api/searchservice/create-index) if you don't have one already. The following example creates an index with rid, id, and label fields:
+
+```http
+ POST https://[service name].search.windows.net/indexes?api-version=2020-06-30-Preview
+ Content-Type: application/json
+ api-key: [Search service admin key]
+
+ {
+ "name": "mysearchindex",
+ "fields": [
+ {
+ "name": "rid",
+ "type": "Edm.String",
+ "facetable": false,
+ "filterable": false,
+ "key": true,
+ "retrievable": true,
+ "searchable": true,
+ "sortable": false,
+ "analyzer": "standard.lucene",
+ "indexAnalyzer": null,
+ "searchAnalyzer": null,
+ "synonymMaps": [],
+ "fields": []
+ },{
+ "name": "id",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "retrievable": true,
+ "sortable": false,
+ "facetable": false,
+ "key": false,
+ "indexAnalyzer": null,
+ "searchAnalyzer": null,
+ "analyzer": "standard.lucene",
+ "synonymMaps": []
+ }, {
+ "name": "label",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "retrievable": true,
+ "sortable": false,
+ "facetable": false,
+ "key": false,
+ "indexAnalyzer": null,
+ "searchAnalyzer": null,
+ "analyzer": "standard.lucene",
+ "synonymMaps": []
+ }]
+ }
+```
+
+Ensure that the schema of your target index is compatible with your graph.
+
+For partitioned collections, the default document key is Azure Cosmos DB's `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names cannot start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, `_rid` values should be Base64-encoded if you want to use them as your document key.
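+
+For illustration, a minimal sketch (the `_rid` value shown is a made-up example) of producing a search-safe key by URL-safe Base64 encoding a `_rid` value client-side. Alternatively, a field mapping that uses the built-in `base64Encode` mapping function can perform the encoding during indexing.
+
+```python
+# A sketch: URL-safe Base64 encode a Cosmos DB _rid so it only contains
+# characters that are valid in an Azure Cognitive Search document key.
+# The _rid value below is a made-up example.
+import base64
+
+rid = "d9RzAJRFKgw="          # example _rid from a Cosmos DB document (placeholder)
+key = base64.urlsafe_b64encode(rid.encode("utf-8")).decode("utf-8")
+print(key)                    # "ZDlSekFKUkZLZ3c9"
+```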
+
+### Mapping between JSON Data Types and Azure Cognitive Search Data Types
+| JSON data type | Compatible target index field types |
+| | |
+| Bool |Edm.Boolean, Edm.String |
+| Numbers that look like integers |Edm.Int32, Edm.Int64, Edm.String |
+| Numbers that look like floating-points |Edm.Double, Edm.String |
+| String |Edm.String |
+| Arrays of primitive types, for example ["a", "b", "c"] |Collection(Edm.String) |
+| Strings that look like dates |Edm.DateTimeOffset, Edm.String |
+| GeoJSON objects, for example { "type": "Point", "coordinates": [long, lat] } |Edm.GeographyPoint |
+| Other JSON objects |N/A |
+
+### Step 4 - Configure and run the indexer
+
+Once the index and data source have been created, you're ready to create the indexer:
+
+```http
+ POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+
+ {
+ "name": "mycosmosdbgremlinindexer",
+ "description": "My Cosmos DB Gremlin API indexer",
+ "dataSourceName": "mycosmosdbgremlindatasource",
+ "targetIndexName": "mysearchindex"
+ }
+```
+
+This indexer will start running after it's created and only run once. You can add the optional schedule parameter to the request to set your indexer to run on a schedule. For more information about defining indexer schedules, see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
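+
+For example, a sketch (the service name and admin key are placeholders) of creating the same indexer with a schedule so that it runs every two hours instead of only once:
+
+```python
+# A sketch: create the indexer with an optional schedule. The service name and
+# admin key are placeholders; the interval is an ISO 8601 duration.
+import requests
+
+service = "<your-search-service>"
+api_key = "<search-admin-key>"
+
+indexer = {
+    "name": "mycosmosdbgremlinindexer",
+    "description": "My Cosmos DB Gremlin API indexer",
+    "dataSourceName": "mycosmosdbgremlindatasource",
+    "targetIndexName": "mysearchindex",
+    "schedule": {"interval": "PT2H"}   # run every 2 hours (minimum is PT5M)
+}
+
+resp = requests.post(
+    f"https://{service}.search.windows.net/indexers?api-version=2020-06-30",
+    headers={"api-key": api_key, "Content-Type": "application/json"},
+    json=indexer,
+)
+print(resp.status_code)   # 201 indicates the indexer was created
+```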
+
+For more details on the Create Indexer API, check out [Create Indexer](https://docs.microsoft.com/rest/api/searchservice/create-indexer).
+
+<a name="DataDeletionDetectionPolicy"></a>
+
+## Indexing deleted documents
+
+When graph data is deleted, you might want to delete its corresponding document from the search index as well. The purpose of a data deletion detection policy is to efficiently identify deleted data items and delete the full document from the index. The data deletion detection policy isn't meant to delete partial document information. Currently, the only supported policy is the `Soft Delete` policy (deletion is marked with a flag of some sort), which is specified as follows:
+
+```http
+ {
+ "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
+ "softDeleteColumnName" : "the property that specifies whether a document was deleted",
+ "softDeleteMarkerValue" : "the value that identifies a document as deleted"
+ }
+```
+
+The following example creates a data source with a soft-deletion policy:
+
+```http
+ POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
+ Content-Type: application/json
+ api-key: [Search service admin key]
+
+ {
+ "name": "mycosmosdbgremlindatasource",
+ "type": "cosmosdb",
+ "credentials": {
+ "connectionString": "AccountEndpoint=https://myCosmosDbEndpoint.documents.azure.com;AccountKey=myCosmosDbAuthKey;ApiKind=Gremlin;Database=myCosmosDbDatabaseId"
+ },
+ "container": { "name": "myCollection" },
+ "dataChangeDetectionPolicy": {
+ "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
+        "highWaterMarkColumnName": "_ts"
+ },
+ "dataDeletionDetectionPolicy": {
+ "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
+ "softDeleteColumnName": "isDeleted",
+ "softDeleteMarkerValue": "true"
+ }
+ }
+```
+
+<a name="MappingGraphData"></a>
+
+## Mapping graph data to a search index
+
+The Cosmos DB Gremlin API indexer will automatically map a couple of pieces of graph data for you:
+
+1. The indexer will map `_rid` to an `rid` field in the index if it exists. Note that if you would like to use the `rid` value as a key in your index, you should Base64-encode it, since `_rid` can contain characters that are invalid in Azure Cognitive Search document keys.
+
+1. The indexer will map `_id` to an `id` field in the index if it exists.
+
+1. When querying your Cosmos DB database using the Gremlin API, you may notice that the JSON output for each property has an `id` and a `value`. The Azure Cognitive Search Cosmos DB indexer automatically maps the property's `value` into a field in your search index that has the same name as the property, if such a field exists. In the following example, 450 would be mapped to a `pages` field in the search index.
+
+```http
+ {
+ "id": "Cookbook",
+ "label": "book",
+ "type": "vertex",
+ "properties": {
+ "pages": [
+ {
+ "id": "48cf6285-a145-42c8-a0aa-d39079277b71",
+ "value": "450"
+ }
+ ]
+ }
+ }
+```
+
+You may find that you need to use [Output Field Mappings](cognitive-search-output-field-mapping.md) in order to map your query output to the fields in your index. You'll likely want to use Output Field Mappings instead of [Field Mappings](search-indexer-field-mappings.md), because the custom query output tends to contain complex data.
+
+For example, let's say that your query produces this output:
+
+```json
+ [
+ {
+ "vertex": {
+ "id": "Cookbook",
+ "label": "book",
+ "type": "vertex",
+ "properties": {
+ "pages": [
+ {
+ "id": "48cf6085-a211-42d8-a8ea-d38642987a71",
+ "value": "450"
+ }
+ ],
+ }
+ },
+ "written_by": [
+ {
+ "yearStarted": "2017"
+ }
+ ]
+ }
+ ]
+```
+
+If you would like to map the value of `pages` in the JSON above to a `totalpages` field in your index, you can add the following [Output Field Mapping](cognitive-search-output-field-mapping.md) to your indexer definition:
+
+```json
+ ... // rest of indexer definition
+ "outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/vertex/pages",
+ "targetFieldName": "totalpages"
+ }
+ ]
+```
+
+Notice how the Output Field Mapping starts with `/document` and does not include a reference to the properties key in the JSON. This is because the indexer puts each document under the `/document` node when ingesting the graph data, and it also lets you reference the value of `pages` by simply referencing `pages`, instead of having to reference the first object in the `pages` array.
+
+## Next steps
+
+* To learn more about Azure Cosmos DB Gremlin API, see the [Introduction to Azure Cosmos DB: Gremlin API](https://docs.microsoft.com/azure/cosmos-db/graph-introduction).
+* To learn more about Azure Cognitive Search, see the [Search service page](https://azure.microsoft.com/services/search/).
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb.md
Title: Search over Azure Cosmos DB data
+ Title: Search over Azure Cosmos DB data using SQL, MongoDB, or Cassandra API
description: Import data from Azure Cosmos DB into a searchable index in Azure Cognitive Search. Indexers automate data ingestion for selected data sources like Azure Cosmos DB.
Last updated 07/11/2020
-# How to index Cosmos DB data using an indexer in Azure Cognitive Search
+# How to index data available through Cosmos DB SQL, MongoDB, or Cassandra API using an indexer in Azure Cognitive Search
> [!IMPORTANT] > SQL API is generally available.
-> MongoDB API, Gremlin API, and Cassandra API support are currently in public preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> MongoDB API and Cassandra API support are currently in public preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> You can request access to the previews by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview). > [REST API preview versions](search-api-preview.md) provide these features. There is currently limited portal support, and no .NET SDK support. > [!WARNING] > Only Cosmos DB collections with an [indexing policy](../cosmos-db/index-policy.md) set to [Consistent](../cosmos-db/index-policy.md#indexing-mode) are supported by Azure Cognitive Search. Indexing collections with a Lazy indexing policy is not recommended and may result in missing data. Collections with indexing disabled are not supported.
-This article shows you how to configure an Azure Cosmos DB [indexer](search-indexer-overview.md) to extract content and make it searchable in Azure Cognitive Search. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB.
+This article shows you how to configure an Azure Cosmos DB [indexer](search-indexer-overview.md) to extract content and make it searchable in Azure Cognitive Search. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB.
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist and contain data.
The Cosmos DB indexer in Azure Cognitive Search can crawl [Azure Cosmos DB items
+ For [MongoDB API (preview)](../cosmos-db/mongodb-introduction.md), you can use either the [portal](#cosmos-indexer-portal) or the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source and indexer.
-+ For [Cassandra API (preview)](../cosmos-db/cassandra-introduction.md) and [Gremlin API (preview)](../cosmos-db/graph-introduction.md), you can only use the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source and indexer.
++ For [Cassandra API (preview)](../cosmos-db/cassandra-introduction.md), you can only use the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source and indexer. > [!Note]
The easiest method for indexing Azure Cosmos DB items is to use a wizard in the
We recommend using the same region or location for both Azure Cognitive Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
-### 1 - Prepare source data
+### Step 1 - Prepare source data
-You should have a Cosmos DB account, an Azure Cosmos DB database mapped to the SQL API, MongoDB API (preview), or Gremlin API (preview), and content in the database.
+You should have a Cosmos DB account, an Azure Cosmos DB database mapped to the SQL API or MongoDB API (preview), and content in the database.
Make sure your Cosmos DB database contains data. The [Import data wizard](search-import-data-portal.md) reads metadata and performs data sampling to infer an index schema, but it also loads data from Cosmos DB. If the data is missing, the wizard stops with this error "Error detecting index schema from data source: Could not build a prototype index because datasource 'emptycollection' returned no data".
-### 2 - Start Import data wizard
+### Step 2 - Start Import data wizard
You can [start the wizard](search-import-data-portal.md) from the command bar in the Azure Cognitive Search service page, or if you're connecting to Cosmos DB SQL API you can click **Add Azure Cognitive Search** in the **Settings** section of your Cosmos DB account's left navigation pane. ![Import data command in portal](./medi2.png "Start the Import data wizard")
-### 3 - Set the data source
+### Step 3 - Set the data source
In the **data source** page, the source must be **Cosmos DB**, with the following specifications:
In the **data source** page, the source must be **Cosmos DB**, with the followin
+ **Cosmos DB account** should be in one of the following formats: 1. The primary or secondary connection string from Cosmos DB with the following format: `AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;`. + For version 3.2 and version 3.6 **MongoDB collections** use the following format for the Cosmos DB account in the Azure portal: `AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;ApiKind=MongoDb`
- + For **Gremlin graphs and Cassandra tables**, sign up for the [gated indexer preview](https://aka.ms/azure-cognitive-search/indexer-preview) to get access to the preview and information about how to format the credentials.
+ + For **Cassandra tables**, sign up for the [gated indexer preview](https://aka.ms/azure-cognitive-search/indexer-preview) to get access to the preview and information about how to format the credentials.
1. A managed identity connection string with the following format that does not include an account key: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)`. To use this connection string format, follow the instructions for [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md). + **Database** is an existing database from the account.
In the **data source** page, the source must be **Cosmos DB**, with the followin
![Cosmos DB data source definition](media/search-howto-index-cosmosdb/cosmosdb-datasource.png "Cosmos DB data source definition")
-### 4 - Skip the "Enrich content" page in the wizard
+### Step 4 - Skip the "Enrich content" page in the wizard
-Adding cognitive skills (or enrichment) is not an import requirement. Unless you have a specific need to [add AI enrichment](cognitive-search-concept-intro.md) to your indexing pipeline, you should skip this step.
+Adding cognitive skills (or enrichment) is not an import requirement. Unless you have a specific need to [add AI enrichment](cognitive-search-concept-intro.md) to your indexing pipeline, you can skip this step.
To skip the step, click the blue buttons at the bottom of the page for "Next" and "Skip".
-### 5 - Set index attributes
+### Step 5 - Set index attributes
In the **Index** page, you should see a list of fields with a data type and a series of checkboxes for setting index attributes. The wizard can generate a fields list based on metadata and by sampling the source data.
Take a moment to review your selections. Once you run the wizard, physical data
![Cosmos DB index definition](media/search-howto-index-cosmosdb/cosmosdb-index-schema.png "Cosmos DB index definition")
-### 6 - Create indexer
+### Step 6 - Create indexer
Fully specified, the wizard creates three distinct objects in your search service. A data source object and index object are saved as named resources in your Azure Cognitive Search service. The last step creates an indexer object. Naming the indexer allows it to exist as a standalone resource, which you can schedule and manage independently of the index and data source object, created in the same wizard sequence.
When indexing is complete, you can use [Search explorer](search-explorer.md) to
## Use REST APIs
-You can use the REST API to index Azure Cosmos DB data, following a three-part workflow common to all indexers in Azure Cognitive Search: create a data source, create an index, create an indexer. Data extraction from Cosmos DB occurs when you submit the Create Indexer request. After this request is finished, you will have a queryable index.
+You can use the REST API to index Azure Cosmos DB data, following a three-part workflow common to all indexers in Azure Cognitive Search: create a data source, create an index, create an indexer. In the process below, data extraction from Cosmos DB starts when you submit the Create Indexer request.
> [!NOTE]
-> For indexing data from Cosmos DB Gremlin API or Cosmos DB Cassandra API you must first request access to the gated previews by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview). Once your request is processed, you will receive instructions for how to use the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source.
+> For indexing data from Cosmos DB Cassandra API you must first request access to the gated previews by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview). Once your request is processed, you will receive instructions for how to use the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source.
Earlier in this article it is mentioned that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations. For Cosmos DB indexing, by default all documents are automatically indexed except with the Cassandra API. If you turn off automatic indexing, documents can be accessed only through their self-links or by queries using the document ID. Azure Cognitive Search indexing requires Cosmos DB automatic indexing to be turned on in the collection that will be indexed by Azure Cognitive Search. When signing up for the Cosmos DB Cassandra API indexer preview, you'll be given instructions on how to set up Cosmos DB indexing. > [!WARNING] > Azure Cosmos DB is the next generation of DocumentDB. Previously with API version **2017-11-11** you could use the `documentdb` syntax. This meant that you could specify your data source type as `cosmosdb` or `documentdb`. Starting with API version **2019-05-06**, both the Azure Cognitive Search APIs and the portal only support the `cosmosdb` syntax as instructed in this article. This means that the data source type must be `cosmosdb` if you would like to connect to a Cosmos DB endpoint.
-### 1 - Assemble inputs for the request
+### Step 1 - Assemble inputs for the request
For each request, you must provide the service name and admin key for Azure Cognitive Search (in the POST header), and the connection string for your Cosmos DB account. You can use [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) to send HTTP requests to Azure Cognitive Search.
-Copy the following four values into Notepad so that you can paste them into a request:
+Copy the following three values for use with your request:
+ Azure Cognitive Search service name
+ Azure Cognitive Search admin key
You can find these values in the portal:
1. In the portal pages for Azure Cognitive Search, copy the search service URL from the Overview page.
-2. In the left navigation pane, click **Keys** and then copy either the primary or secondary key (they are equivalent).
+2. In the left navigation pane, click **Keys** and then copy either the primary or secondary key.
3. Switch to the portal pages for your Cosmos DB account. In the left navigation pane, under **Settings**, click **Keys**. This page provides a URI, two sets of connection strings, and two sets of keys. Copy one of the connection strings to Notepad.
-### 2 - Create a data source
+### Step 2 - Create a data source
A **data source** specifies the data to index, credentials, and policies for identifying changes in the data (such as modified or deleted documents inside your collection). The data source is defined as an independent resource so that it can be used by multiple indexers.
The body of the request contains the data source definition, which should includ
||-| | **name** | Required. Choose any name to represent your data source object. | |**type**| Required. Must be `cosmosdb`. |
-|**credentials** | Required. Must either follow the Cosmos DB connection string format or a managed identity connection string format.<br/><br/>For **SQL collections**, connection strings can follow either of the below formats: <li>`AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>`<li>A managed identity connection string with the following format that does not include an account key: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;`. To use this connection string format, follow the instructions for [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).<br/><br/>For version 3.2 and version 3.6 **MongoDB collections** use either of the following formats for the connection string: <li>`AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>;ApiKind=MongoDb`<li>A managed identity connection string with the following format that does not include an account key: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;ApiKind=MongoDb;`. To use this connection string format, follow the instructions for [Setting up an indexer connection to a Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).<br/><br/>For **Gremlin graphs and Cassandra tables**, sign up for the [gated indexer preview](https://aka.ms/azure-cognitive-search/indexer-preview) to get access to the preview and information about how to format the credentials.<br/><br/>Avoid port numbers in the endpoint url. If you include the port number, Azure Cognitive Search will be unable to index your Azure Cosmos DB database.|
-| **container** | Contains the following elements: <br/>**name**: Required. Specify the ID of the database collection to be indexed.<br/>**query**: Optional. You can specify a query to flatten an arbitrary JSON document into a flat schema that Azure Cognitive Search can index.<br/>For the MongoDB API, Gremlin API, and Cassandra API, queries are not supported. |
+|**credentials** | Required. Must be a Cosmos DB connection string.<br/><br/>For **SQL collections**, connection strings are in this format: `AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>`<br/><br/>For version 3.2 and version 3.6 **MongoDB collections** use the following format for the connection string: `AccountEndpoint=https://<Cosmos DB account name>.documents.azure.com;AccountKey=<Cosmos DB auth key>;Database=<Cosmos DB database id>;ApiKind=MongoDb`<br/><br/>For **Cassandra tables**, sign up for the [gated indexer preview](https://aka.ms/azure-cognitive-search/indexer-preview) to get access to the preview and information about how to format the credentials.<br/><br/>Avoid port numbers in the endpoint url. If you include the port number, Azure Cognitive Search will be unable to index your Azure Cosmos DB database.|
+| **container** | Contains the following elements: <br/>**name**: Required. Specify the ID of the database collection to be indexed.<br/>**query**: Optional. You can specify a query to flatten an arbitrary JSON document into a flat schema that Azure Cognitive Search can index.<br/>For the MongoDB API and the Cassandra API, queries are not supported. |
| **dataChangeDetectionPolicy** | Recommended. See [Indexing Changed Documents](#DataChangeDetectionPolicy) section.| |**dataDeletionDetectionPolicy** | Optional. See [Indexing Deleted Documents](#DataDeletionDetectionPolicy) section.|
The body of the request contains the data source definition, which should includ
You can specify a SQL query to flatten nested properties or arrays, project JSON properties, and filter the data to be indexed. > [!WARNING]
-> Custom queries are not supported for **MongoDB API**, **Gremlin API**, and **Cassandra API**: `container.query` parameter must be set to null or omitted. If you need to use a custom query, please let us know on [User Voice](https://feedback.azure.com/forums/263029-azure-search).
+> Custom queries are not supported for **MongoDB API** and **Cassandra API**: `container.query` parameter must be set to null or omitted. If you need to use a custom query, please let us know on [User Voice](https://feedback.azure.com/forums/263029-azure-search).
Example document:
Array flattening query:
SELECT c.id, c.userId, tag, c._ts FROM c JOIN tag IN c.tags WHERE c._ts >= @HighWaterMark ORDER BY c._ts ```
-### 3 - Create a target search index
+### Step 3 - Create a target search index
[Create a target Azure Cognitive Search index](/rest/api/searchservice/create-index) if you don't have one already. The following example creates an index with an ID and description field:
SELECT c.id, c.userId, tag, c._ts FROM c JOIN tag IN c.tags WHERE c._ts >= @High
"name": "description", "type": "Edm.String", "filterable": false,
+ "searchable": true,
"sortable": false, "facetable": false, "suggestions": true
Ensure that the schema of your target index is compatible with the schema of the
| GeoJSON objects, for example { "type": "Point", "coordinates": [long, lat] } |Edm.GeographyPoint | | Other JSON objects |N/A |
-### 4 - Configure and run the indexer
+### Step 4 - Configure and run the indexer
Once the index and data source have been created, you're ready to create the indexer:
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-private.md
If the `properties.provisioningState` of the resource is `Succeeded` and `proper
- If you've created the indexer without setting the `executionEnvironment` property and it runs successfully, Azure Cognitive Search has decided that its execution environment is the search service-specific *private* environment. This can change depending on the resources consumed by the indexer, the load on the search service, and other factors, and the indexer can fail later. To remedy the issue, we highly recommend that you explicitly set the `executionEnvironment` property to `private` so that it won't fail in the future (see the sketch after this list).
+- If you're viewing your data source's networking page in the Azure portal and you select a private endpoint that you created for your Azure Cognitive Search service to access this data source, you may receive a *No Access* error. This is expected. You can change the status of the connection request via the target service's portal page, but to further manage the shared private link resource you need to view it on your search service's networking page in the Azure portal.
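+
+A minimal sketch of pinning an indexer to the private execution environment via the `parameters.configuration.executionEnvironment` setting. This is not the article's own sample: the service name, indexer name, data source, index, and API version shown here are placeholders/assumptions (use the preview REST API version this article references).
+
+```python
+# A sketch: update (PUT) an indexer definition so it always runs in the search
+# service's private execution environment. All names and keys are placeholders.
+import requests
+
+service = "<your-search-service>"
+indexer_name = "<your-indexer-name>"
+api_key = "<search-admin-key>"
+
+indexer = {
+    "name": indexer_name,
+    "dataSourceName": "<your-data-source>",
+    "targetIndexName": "<your-index>",
+    "parameters": {
+        "configuration": {
+            "executionEnvironment": "private"   # force the private environment
+        }
+    }
+}
+
+resp = requests.put(
+    f"https://{service}.search.windows.net/indexers/{indexer_name}?api-version=2020-06-30-Preview",
+    headers={"api-key": api_key, "Content-Type": "application/json"},
+    json=indexer,
+)
+print(resp.status_code)   # 200/201 indicates the indexer was updated or created
+```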
+ [Quotas and limits](search-limits-quotas-capacity.md) determine how many shared private link resources can be created and depend on the SKU of the search service. ## Next steps
security-center Features Paas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/features-paas.md
Title: Azure Security Center features for supported Azure PaaS resources. description: This page shows the availability of Azure Security Center features for the supported Azure PaaS resources.- - Previously updated : 03/01/2020 Last updated : 04/25/2021
The table below shows the availability of Azure Security Center features for the
|Azure Cache for Redis|✔|-|-|
|Azure Cloud Services|✔|-|-|
|Azure Cognitive Search|✔|-|-|
-|Azure Container Registry|-|-|✔|
-|Azure Cosmos DB*|-|✔|-|
+|Azure Container Registry|✔|✔|✔|
+|Azure Cosmos DB*|✔|✔|-|
|Azure Data Lake Analytics|✔|-|-|
|Azure Data Lake Storage|✔|✔|-|
|Azure Database for MySQL*|-|✔|-|
security-center Security Center File Integrity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-file-integrity-monitoring.md
Title: File integrity monitoring in Azure Security Center | Microsoft Docs
+ Title: File integrity monitoring in Azure Security Center
description: Learn how to configure file integrity monitoring (FIM) in Azure Security Center using this walkthrough.- - - Previously updated : 09/22/2020 Last updated : 04/25/2021
FIM uses the Azure Change Tracking solution to track and identify changes in you
> If you remove the **Change Tracking** resource, you will also disable the file integrity monitoring feature in Security Center. ## Which files should I monitor?
-When choosing which files to monitor, consider which files are critical for your system and applications. Monitor files that you don't expect to change without planning. If you choose files that are frequently changed by applications or operating system (such as log files and text files) it'll create a lot of noise, making it difficult to identify an attack.
+
+When choosing which files to monitor, consider the files that are critical for your system and applications. Monitor files that you don't expect to change without planning. If you choose files that are frequently changed by applications or the operating system (such as log files and text files), it'll create a lot of noise, making it difficult to identify an attack.
Security Center provides the following list of recommended items to monitor based on known attack patterns.
The **Servers** tab lists the machines reporting to this workspace. For each mac
- Total changes that occurred during the selected period of time - A breakdown of total changes as file changes or registry changes
-**Log Search** opens when you enter a machine name in the search field or select a machine listed under the Computers tab. Log Search displays all the changes made during the selected time period for the machine. You can expand a change for more information.
+When you select a machine, the query appears along with the results that identify the changes made during the selected time period for the machine. You can expand a change for more information.
-![Log Search][8]
The **Changes** tab (shown below) lists all changes for the workspace during the selected time period. For each entity that was changed, the dashboard lists the: -- Computer that the change occurred on
+- Machine that the change occurred on
- Type of change (registry or file) - Category of change (modified, added, removed) - Date and time of change
-![Changes for the workspace][9]
**Change details** opens when you enter a change in the search field or select an entity listed under the **Changes** tab.
-![Change details][10]
## Edit monitored entities
-1. Return to the **file integrity monitoring dashboard** and select **Settings**.
-
- ![Settings][11]
-
- **Workspace Configuration** opens displaying three tabs: **Windows Registry**, **Windows Files**, and **Linux Files**. Each tab lists the entities that you can edit in that category. For each entity listed, Security Center identifies if FIM is enabled (true) or not enabled (false). Editing the entity lets you enable or disable FIM.
+1. From the **File integrity monitoring dashboard** for a workspace, select **Settings** from the toolbar.
- ![Workspace configuration][12]
+ :::image type="content" source="./media/security-center-file-integrity-monitoring/file-integrity-monitoring-dashboard-settings.png" alt-text="Accessing the file integrity monitoring settings for a workspace" lightbox="./media/security-center-file-integrity-monitoring/file-integrity-monitoring-dashboard-settings.png":::
-2. Select an identity protection. In this example, we selected an item under Windows Registry. **Edit for Change Tracking** opens.
+ **Workspace Configuration** opens with tabs for each type of element that can be monitored:
- ![Edit or change tracking][13]
+ - Windows registry
+ - Windows files
+ - Linux Files
+ - File content
+ - Windows services
-Under **Edit for Change Tracking** you can:
+ Each tab lists the entities that you can edit in that category. For each entity listed, Security Center identifies whether FIM is enabled (true) or not enabled (false). Editing the entity lets you enable or disable FIM.
-- Enable (True) or disable (False) file integrity monitoring-- Provide or change the entity name-- Provide or change the value or path-- Delete the entity, discard the change, or save the change
+ :::image type="content" source="./media/security-center-file-integrity-monitoring/file-integrity-monitoring-workspace-configuration.png" alt-text="Workspace configuration for file integrity monitoring in Azure Security Center":::
-## Add a new entity to monitor
-1. Return to the **File integrity monitoring dashboard** and select **Settings** at the top. **Workspace Configuration** opens.
-2. Under **Workspace Configuration**, select the tab for the type of entity that you want to add: Windows Registry, Windows Files, or Linux Files. In this example, we selected **Linux Files**.
-
- ![Add a new item to monitor][14]
+1. Select an entry from one of the tabs and edit any of the available fields in the **Edit for Change Tracking** pane. Options include:
-3. Select **Add**. **Add for Change Tracking** opens.
+ - Enable (True) or disable (False) file integrity monitoring
+ - Provide or change the entity name
+ - Provide or change the value or path
+ - Delete the entity
- ![Enter requested information][15]
+1. Discard or save your changes.
-4. On the **Add** page, type the requested information and select **Save**.
-## Disable monitored entities
-1. Return to the **File integrity monitoring** dashboard.
-2. Select a workspace where FIM is currently enabled. A workspace is enabled for FIM if it is missing the Enable button or Upgrade Plan button.
+## Add a new entity to monitor
- ![Select a workspace where FIM is enabled][16]
+1. From the **File integrity monitoring dashboard** for a workspace, select **Settings** from the toolbar.
-3. Under file integrity monitoring, select **Settings**.
+ The **Workspace Configuration** opens.
- ![Select settings][17]
+1. On the **Workspace Configuration** page:
-4. Under **Workspace Configuration**, select a group where **Enabled** is set to true.
+ 1. Select the tab for the type of entity that you want to add: Windows registry, Windows files, Linux Files, file content, or Windows services.
+ 1. Select **Add**.
- ![Workspace Configuration][18]
+ In this example, we selected **Linux Files**.
-5. Under **Edit for Change Tracking** window set **Enabled** to False.
+ :::image type="content" source="./media/security-center-file-integrity-monitoring/file-integrity-monitoring-add-element.png" alt-text="Adding an element to monitor in Azure Security Center's file integrity monitoring" lightbox="./media/security-center-file-integrity-monitoring/file-integrity-monitoring-add-element.png":::
- ![Set Enabled to false][19]
+1. Select **Add**. **Add for Change Tracking** opens.
-6. Select **Save**.
+1. Enter the necessary information and select **Save**.
## Folder and path monitoring using wildcards
Use wildcards to simplify tracking across directories. The following rules apply
## Disable FIM You can disable FIM. FIM uses the Azure Change Tracking solution to track and identify changes in your environment. By disabling FIM, you remove the Change Tracking solution from selected workspace.
-1. To disable FIM, return to the **File integrity monitoring** dashboard.
-2. Select a workspace.
-3. Under **File integrity monitoring**, select **Disable**.
+To disable FIM:
+
+1. From the **File integrity monitoring dashboard** for a workspace, select **Disable**.
- ![Disable FIM][20]
+ :::image type="content" source="./media/security-center-file-integrity-monitoring/disable-file-integrity-monitoring.png" alt-text="Disable file integrity monitoring from the settings page":::
-4. Select **Remove** to disable.
+1. Select **Remove**.
## Next steps In this article, you learned to use file integrity monitoring (FIM) in Security Center. To learn more about Security Center, see the following pages:
In this article, you learned to use file integrity monitoring (FIM) in Security
* [Azure Security blog](/archive/blogs/azuresecurity/)--Get the latest Azure security news and information. <!--Image references-->
-[1]: ./media/security-center-file-integrity-monitoring/security-center-dashboard.png
[3]: ./media/security-center-file-integrity-monitoring/enable.png
-[4]: ./media/security-center-file-integrity-monitoring/upgrade-plan.png
-[5]: ./media/security-center-file-integrity-monitoring/enable-fim.png
-[7]: ./media/security-center-file-integrity-monitoring/filter.png
-[8]: ./media/security-center-file-integrity-monitoring/log-search.png
-[9]: ./media/security-center-file-integrity-monitoring/changes-tab.png
-[10]: ./media/security-center-file-integrity-monitoring/change-details.png
-[11]: ./media/security-center-file-integrity-monitoring/fim-dashboard-settings.png
-[12]: ./media/security-center-file-integrity-monitoring/workspace-config.png
-[13]: ./media/security-center-file-integrity-monitoring/edit.png
-[14]: ./media/security-center-file-integrity-monitoring/add.png
-[15]: ./media/security-center-file-integrity-monitoring/add-item.png
-[16]: ./media/security-center-file-integrity-monitoring/fim-dashboard-disable.png
-[17]: ./media/security-center-file-integrity-monitoring/fim-dashboard-settings-disabled.png
-[18]: ./media/security-center-file-integrity-monitoring/workspace-config-disable.png
-[19]: ./media/security-center-file-integrity-monitoring/edit-disable.png
-[20]: ./media/security-center-file-integrity-monitoring/disable-fim.png
+[4]: ./media/security-center-file-integrity-monitoring/upgrade-plan.png
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-data-sources.md
The following data connection methods are supported by Azure Sentinel:
- [Azure Web Application Firewall (WAF)](connect-azure-waf.md) (formerly Microsoft WAF) - [Cloud App Security](connect-cloud-app-security.md) - [Domain name server](connect-dns.md)
- - [Microsoft 365 Defender](connect-microsoft-365-defender.md) - includes M365D incidents and MDE raw data
+ - [Microsoft 365 Defender](connect-microsoft-365-defender.md) - includes M365D incidents and Defender for Endpoint raw data
- [Microsoft Defender for Endpoint](connect-microsoft-defender-advanced-threat-protection.md) (formerly Microsoft Defender Advanced Threat Protection) - [Microsoft Defender for Identity](connect-azure-atp.md) (formerly Azure Advanced Threat Protection) - [Microsoft Defender for Office 365](connect-office-365-advanced-threat-protection.md) (formerly Office 365 Advanced Threat Protection)
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-microsoft-365-defender.md
Azure Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all M365D incidents and alerts into Azure Sentinel, and keeps the incidents synchronized between both portals. M365D incidents include all their alerts, entities, and other relevant information, and they are enriched by and group together alerts from M365D's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Cloud App Security**.
-The connector also lets you stream **advanced hunting** events from Microsoft Defender for Endpoint into Azure Sentinel, allowing you to copy MDE advanced hunting queries into Azure Sentinel, enrich Sentinel alerts with MDE raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
+The connector also lets you stream **advanced hunting** events from Microsoft Defender for Endpoint into Azure Sentinel, allowing you to copy Defender for Endpoint advanced hunting queries into Azure Sentinel, enrich Sentinel alerts with Defender for Endpoint raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
For more information about incident integration and advanced hunting event collection, see [Microsoft 365 Defender integration with Azure Sentinel](microsoft-365-defender-sentinel-integration.md).
For more information about incident integration and advanced hunting event colle
> [!NOTE] > When you enable the Microsoft 365 Defender connector, all of the M365D components' connectors (the ones mentioned at the beginning of this article) are automatically connected in the background. In order to disconnect one of the components' connectors, you must first disconnect the Microsoft 365 Defender connector.
-1. To query M365 Defender incident data, use the following statement in the query window:
+1. To query Microsoft 365 Defender incident data, use the following statement in the query window:
```kusto SecurityIncident | where ProviderName == "Microsoft 365 Defender"
For more information about incident integration and advanced hunting event colle
The data graph in the connector page indicates that you are ingesting data. You'll notice that it shows one line each for incidents, alerts, and events, and the events line is an aggregation of event volume across all enabled tables. Once you have enabled the connector, you can use the following KQL queries to generate more specific graphs.
-Use the following KQL query for a graph of the incoming M365 Defender incidents:
+Use the following KQL query for a graph of the incoming Microsoft 365 Defender incidents:
```kusto let Now = now();
sentinel Connect Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-windows-virtual-desktop.md
Windows Virtual Desktop data in Azure Sentinel includes the following types:
|Data |Description | ||| |**Windows event logs** | Windows event logs from the WVD environment are streamed into an Azure Sentinel-enabled Log Analytics workspace in the same manner as Windows event logs from other Windows machines, outside of the WVD environment. <br><br>Install the Log Analytics agent onto your Windows machine and configure the Windows event logs to be sent to the Log Analytics workspace.<br><br>For more information, see:<br>- [Install Log Analytics agent on Windows computers](/azure/azure-monitor/agents/agent-windows)<br>- [Collect Windows event log data sources with Log Analytics agent](/azure/azure-monitor/agents/data-sources-windows-events)<br>- [Connect Windows security events](connect-windows-security-events.md) |
-|**Microsoft Defender for Endpoint (MDE) alerts** | To configure MDE for Windows Virtual Desktop, use the same procedure as you would for any other Windows endpoint. <br><br>For more information, see: <br>- [Set up Microsoft Defender for Endpoint deployment](/windows/security/threat-protection/microsoft-defender-atp/production-deployment)<br>- [Connect data from Microsoft 365 Defender to Azure Sentinel](connect-microsoft-365-defender.md) |
+|**Microsoft Defender for Endpoint alerts** | To configure Defender for Endpoint for Windows Virtual Desktop, use the same procedure as you would for any other Windows endpoint. <br><br>For more information, see: <br>- [Set up Microsoft Defender for Endpoint deployment](/windows/security/threat-protection/microsoft-defender-atp/production-deployment)<br>- [Connect data from Microsoft 365 Defender to Azure Sentinel](connect-microsoft-365-defender.md) |
|**Windows Virtual Desktop diagnostics** | Windows Virtual Desktop diagnostics is a feature of the Windows Virtual Desktop PaaS service, which logs information whenever someone assigned Windows Virtual Desktop role uses the service. <br><br>Each log contains information about which Windows Virtual Desktop role was involved in the activity, any error messages that appear during the session, tenant information, and user information. <br><br>The diagnostics feature creates activity logs for both user and administrative actions. <br><br>For more information, see [Use Log Analytics for the diagnostics feature in Windows Virtual Desktop](/azure/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019). | | | |
sentinel Entities Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/entities-reference.md
Strong identifiers of a mailbox entity:
*Entity name: MailCluster* > [!NOTE]
-> **MDO** = **Microsoft Defender for Office 365**, formerly known as Office 365 Advanced Threat Protection (O365 ATP).
+> **Microsoft Defender for Office 365** was formerly known as Office 365 Advanced Threat Protection (O365 ATP).
| Field | Type | Description | | -- | - | -- |
Strong identifiers of a mailbox entity:
| IsVolumeAnomaly | Bool? | Determines whether this is a volume anomaly mail cluster. | | Source | String | The source of the mail cluster (default is 'O365 ATP'). | | ClusterSourceIdentifier | String | The network message ID of the mail that is the source of this mail cluster. |
-| ClusterSourceType | String | The source type of the mail cluster. This maps to the MailClusterSourceType setting from MDO (see note above). |
-| ClusterQueryStartTime | DateTime? | Cluster start time - used as start time for cluster counts query. Usually dates to the End time minus DaysToLookBack setting from MDO (see note above). |
+| ClusterSourceType | String | The source type of the mail cluster. This maps to the MailClusterSourceType setting from Microsoft Defender for Office 365 (see note above). |
+| ClusterQueryStartTime | DateTime? | Cluster start time - used as start time for cluster counts query. Usually dates to the End time minus DaysToLookBack setting from Microsoft Defender for Office 365 (see note above). |
| ClusterQueryEndTime | DateTime? | Cluster end time - used as end time for cluster counts query. Usually the mail data's received time. |
-| ClusterGroup | String | Corresponds to the Kusto query key used on MDO (see note above). |
+| ClusterGroup | String | Corresponds to the Kusto query key used on Microsoft Defender for Office 365 (see note above). |
| Strong identifiers of a mail cluster entity:
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Title: Microsoft 365 Defender integration with Azure Sentinel | Microsoft Docs
-description: Learn how Microsoft 365 Defender integration with Azure Sentinel gives you the ability to use Azure Sentinel as your universal incidents queue while preserving M365D's strengths to assist in investigating M365 security incidents, and also how to ingest Defender components' advanced hunting data into Azure Sentinel.
+description: Learn how using Microsoft 365 Defender together with Azure Sentinel lets you use Azure Sentinel as your universal incidents queue while seamlessly applying Microsoft 365 Defender's strengths to help investigate Microsoft 365 security incidents. Also, learn how to ingest Defender components' advanced hunting data into Azure Sentinel.
documentationcenter: na
ms.devlang: na
na Previously updated : 03/02/2021 Last updated : 04/21/2021
## Incident integration
-Azure Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all M365D incidents into Azure Sentinel and keep them synchronized between both portals. Incidents from M365D (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Azure Sentinel. Once in Sentinel, Incidents will remain bi-directionally synced with M365D, allowing you to take advantage of the benefits of both portals in your incident investigation.
+Azure Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all Microsoft 365 Defender incidents into Azure Sentinel and keep them synchronized between both portals. Incidents from Microsoft 365 Defender (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Azure Sentinel. Once in Sentinel, Incidents will remain bi-directionally synced with Microsoft 365 Defender, allowing you to take advantage of the benefits of both portals in your incident investigation.
-This integration gives Microsoft 365 security incidents the visibility to be managed from within Azure Sentinel, as part of the primary incident queue across the entire organization, so you can see – and correlate – M365 incidents together with those from all of your other cloud and on-premises systems. At the same time, it allows you to take advantage of the unique strengths and capabilities of M365D for in-depth investigations and an M365-specific experience across the M365 ecosystem. M365 Defender enriches and groups alerts from multiple M365 products, both reducing the size of the SOC's incident queue and shortening the time to resolve. The component services that are part of the M365 Defender stack are:
+This integration gives Microsoft 365 security incidents the visibility to be managed from within Azure Sentinel, as part of the primary incident queue across the entire organization, so you can see – and correlate – Microsoft 365 incidents together with those from all of your other cloud and on-premises systems. At the same time, it allows you to take advantage of the unique strengths and capabilities of Microsoft 365 Defender for in-depth investigations and a Microsoft 365-specific experience across the Microsoft 365 ecosystem. Microsoft 365 Defender enriches and groups alerts from multiple Microsoft 365 products, both reducing the size of the SOC's incident queue and shortening the time to resolve. The component services that are part of the Microsoft 365 Defender stack are:
-- **Microsoft Defender for Endpoint** (MDE, formerly MDATP)-- **Microsoft Defender for Identity** (MDI, formerly AATP)-- **Microsoft Defender for Office 365** (MDO, formerly O365ATP)-- **Microsoft Cloud App Security** (MCAS)
+- **Microsoft Defender for Endpoint** (formerly Microsoft Defender ATP)
+- **Microsoft Defender for Identity** (formerly Azure ATP)
+- **Microsoft Defender for Office 365** (formerly Office 365 ATP)
+- **Microsoft Cloud App Security**
-In addition to collecting alerts from these components, M365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Azure Sentinel.
+In addition to collecting alerts from these components, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Azure Sentinel.
### Common use cases and scenarios -- One-click connect of M365 Defender incidents, including all alerts and entities from M365 Defender components, into Azure Sentinel.
+- One-click connect of Microsoft 365 Defender incidents, including all alerts and entities from Microsoft 365 Defender components, into Azure Sentinel.
-- Bi-directional sync between Sentinel and M365D incidents on status, owner and closing reason.
+- Bi-directional sync between Sentinel and Microsoft 365 Defender incidents on status, owner, and closing reason.
-- Leverage M365 Defender alert grouping and enrichment capabilities in Azure Sentinel, thus reducing time to resolve.
+- Application of Microsoft 365 Defender alert grouping and enrichment capabilities in Azure Sentinel, thus reducing time to resolve.
-- In-context deep link between an Azure Sentinel incident and its parallel M365 Defender incident, to facilitate investigations across both portals.
+- In-context deep link between an Azure Sentinel incident and its parallel Microsoft 365 Defender incident, to facilitate investigations across both portals.
### Connecting to Microsoft 365 Defender
-Once you have enabled the Microsoft 365 Defender data connector to [collect incidents and alerts](connect-microsoft-365-defender.md), M365D incidents will appear in the Azure Sentinel incidents queue, with **Microsoft 365 Defender** in the **Product name** field, shortly after they are generated in M365 Defender.
-- It can take up to 10 minutes from the time an incident is generated in M365 Defender to the time it appears in Azure Sentinel.
+Once you have enabled the Microsoft 365 Defender data connector to [collect incidents and alerts](connect-microsoft-365-defender.md), Microsoft 365 Defender incidents will appear in the Azure Sentinel incidents queue, with **Microsoft 365 Defender** in the **Product name** field, shortly after they are generated in Microsoft 365 Defender.
+- It can take up to 10 minutes from the time an incident is generated in Microsoft 365 Defender to the time it appears in Azure Sentinel.
- Incidents will be ingested and synchronized at no extra cost.
-Once the M365 Defender integration is connected, all the component alert connectors (MDE, MDI, MDO, MCAS) will be automatically connected in the background if they weren't already. If any component licenses were purchased after M365 Defender was connected, the alerts and incidents from the new product will still flow to Azure Sentinel with no additional configuration or charge.
+Once the Microsoft 365 Defender integration is connected, all the component alert connectors (Defender for Endpoint, Defender for Identity, Defender for Office 365, Cloud App Security) will be automatically connected in the background if they weren't already. If any component licenses were purchased after Microsoft 365 Defender was connected, the alerts and incidents from the new product will still flow to Azure Sentinel with no additional configuration or charge.
-### M365 Defender incidents and Microsoft incident creation rules
+### Microsoft 365 Defender incidents and Microsoft incident creation rules
-- Incidents generated by M365 Defender, on the basis of alerts coming from M365 security products, are created using custom M365 logic.
+- Incidents generated by Microsoft 365 Defender, based on alerts coming from Microsoft 365 security products, are created using custom Microsoft 365 Defender logic.
- Microsoft incident-creation rules in Azure Sentinel also create incidents from the same alerts, using (a different) custom Azure Sentinel logic. -- Using both mechanisms together is completely supported, and this configuration can be used to facilitate the transition to the new M365 Defender incident creation logic. This will, however, create **duplicate incidents** for the same alerts.
+- Using both mechanisms together is completely supported, and can be used to facilitate the transition to the new Microsoft 365 Defender incident creation logic. Doing so will, however, create **duplicate incidents** for the same alerts.
-- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for M365 products (MDE, MDI, and MDO - see MCAS below) when connecting M365 Defender. This can be done by marking the relevant check box in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to M365 Defender incident integration.
+- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 products (Defender for Endpoint, Defender for Identity, and Defender for Office 365 - see Cloud App Security below) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration.
-- For Microsoft Cloud App Security (MCAS) alerts, not all alert types are currently onboarded to M365 Defender. To make sure you are still getting incidents for all MCAS alerts, you must keep or create **Microsoft incident creation rules** for the alert types *not onboarded* to M365D.
+- For Microsoft Cloud App Security alerts, not all alert types are currently onboarded to Microsoft 365 Defender. To make sure you are still getting incidents for all Cloud App Security alerts, you must keep or create **Microsoft incident creation rules** for the [alert types *not onboarded* to Microsoft 365 Defender](microsoft-cloud-app-security-alerts-not-imported-microsoft-365-defender.md).
-### Working with M365 Defender incidents in Azure Sentinel and bi-directional sync
+### Working with Microsoft 365 Defender incidents in Azure Sentinel and bi-directional sync
-M365 Defender incidents will appear in the Azure Sentinel incidents queue with the product name **Microsoft 365 Defender**, and with similar details and functionality to any other Sentinel incidents. Each incident contains a link back to the parallel incident in the M365 Defender portal.
+Microsoft 365 Defender incidents will appear in the Azure Sentinel incidents queue with the product name **Microsoft 365 Defender**, and with similar details and functionality to any other Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft 365 Defender portal.
-As the incident evolves in M365 Defender, and more alerts or entities are added to it, the Azure Sentinel incident will update accordingly.
+As the incident evolves in Microsoft 365 Defender, and more alerts or entities are added to it, the Azure Sentinel incident will update accordingly.
-Changes made to the status, closing reason, or assignment of an M365 incident, in either M365D or Azure Sentinel, will likewise update accordingly in the other's incidents queue. The synchronization will take place in both portals immediately after the change to the incident is applied, with no delay. A refresh might be required to see the latest changes.
+Changes made to the status, closing reason, or assignment of a Microsoft 365 incident, in either Microsoft 365 Defender or Azure Sentinel, will likewise update accordingly in the other's incidents queue. The synchronization will take place in both portals immediately after the change to the incident is applied, with no delay. A refresh might be required to see the latest changes.
-In M365 Defender, all alerts from one incident can be transferred to another, resulting in the incidents being merged. When this happens, the Azure Sentinel incidents will reflect the changes. One incident will contain all the alerts from both original incidents, and the other incident will be automatically closed, with a tag of "redirected" added.
+In Microsoft 365 Defender, all alerts from one incident can be transferred to another, resulting in the incidents being merged. When this merge happens, the Azure Sentinel incidents will reflect the changes. One incident will contain all the alerts from both original incidents, and the other incident will be automatically closed, with a tag of "redirected" added.
> [!NOTE]
-> Incidents in Azure Sentinel can contain a maximum of 150 alerts. M365D incidents can have more than this. If an M365D incident with more than 150 alerts is synchronized to Azure Sentinel, the Sentinel incident will show as having "150+" alerts and will provide a link to the parallel incident in M365D where you will see the full set of alerts.
+> Incidents in Azure Sentinel can contain a maximum of 150 alerts. Microsoft 365 Defender incidents can have more than this. If a Microsoft 365 Defender incident with more than 150 alerts is synchronized to Azure Sentinel, the Sentinel incident will show as having "150+" alerts and will provide a link to the parallel incident in Microsoft 365 Defender where you will see the full set of alerts.
## Advanced hunting event collection
The Microsoft 365 Defender connector also lets you stream **advanced hunting** e
- Easily copy your existing Microsoft Defender for Endpoint advanced hunting queries into Azure Sentinel. -- Use the raw event logs to provide additional insights for your alerts, hunting, and investigation, and correlate these events with others from additional data sources in Azure Sentinel.
+- Use the raw event logs to provide further insights for your alerts, hunting, and investigation, and correlate these events with events from other data sources in Azure Sentinel.
- Store the logs with increased retention, beyond Microsoft Defender for Endpoint or Microsoft 365 Defender's default retention of 30 days. You can do so by configuring the retention of your workspace or by configuring per-table retention in Log Analytics.
sentinel Microsoft Cloud App Security Alerts Not Imported Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/microsoft-cloud-app-security-alerts-not-imported-microsoft-365-defender.md
+
+ Title: Microsoft Cloud App Security alerts not imported into Azure Sentinel through Microsoft 365 Defender integration | Microsoft Docs
+description: This article lists the alerts from Microsoft Cloud App Security that must be ingested directly into Azure Sentinel, since they are not collected by Microsoft 365 Defender.
+
+cloud: na
+documentationcenter: na
+++
+ms.assetid:
+++
+ na
+ms.devlang: na
+ Last updated : 04/21/2021++++
+# Microsoft Cloud App Security alerts not imported into Azure Sentinel through Microsoft 365 Defender integration
+
+Like the other Microsoft Defender components (Defender for Endpoint, Defender for Identity, and Defender for Office 365), Microsoft Cloud App Security generates alerts that are collected by Microsoft 365 Defender. Microsoft 365 Defender in turn produces incidents that are ingested by and [synchronized with Azure Sentinel](microsoft-365-defender-sentinel-integration.md#microsoft-365-defender-incidents-and-microsoft-incident-creation-rules) when the Microsoft 365 Defender connector is enabled.
+
+Unlike the other three components, **not all types of** Cloud App Security alerts are onboarded to Microsoft 365 Defender. If you want incidents for all Cloud App Security alerts in Azure Sentinel, adjust your Microsoft incident creation analytics rules accordingly: alerts that are ingested directly into Sentinel should continue to generate incidents, while alerts that are onboarded to Microsoft 365 Defender should not, so that you don't end up with duplicate incidents.
+
+## Cloud App Security alerts not onboarded to Microsoft 365 Defender
+
+The following alerts are not onboarded to Microsoft 365 Defender, and the Microsoft incident creation rules that apply to these alerts should remain configured to generate incidents.
+
+| Cloud App Security alert display name | Cloud App Security alert name |
+|-|-|
+| **Access policy alert** | `ALERT_CABINET_INLINE_EVENT_MATCH` |
+| **Activity creation from Discovered Traffic log exceeded daily limit** | `ALERT_DISCOVERY_TRAFFIC_LOG_EXCEEDED_LIMIT` |
+| **Activity policy alert** | `ALERT_CABINET_EVENT_MATCH_AUDIT` |
+| **Anomalous exfiltration alert** | `ALERT_EXFILTRATION_DISCOVERY_ANOMALY_DETECTION` |
+| **Compromised account** | `ALERT_COMPROMISED_ACCOUNT` |
+| **Discovered app security breach alert** | `ALERT_MANAGEMENT_DISCOVERY_BREACHED_APP` |
+| **Inactive account** | `ALERT_ZOMBIE_USER` |
+| **Investigation Priority Score Increased** | `ALERT_UEBA_INVESTIGATION_PRIORITY_INCREASE` |
+| **Malicious OAuth app consent** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MALICIOUS_OAUTH_APP_CONSENT` |
+| **Misleading OAuth app name** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MISLEADING_APP_NAME` |
+| **Misleading publisher name for an OAuth app** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_MISLEADING_PUBLISHER_NAME` |
+| **New app discovered** | `ALERT_CABINET_DISCOVERY_NEW_SERVICE` |
+| **Non-secure redirect URL is used by an OAuth app** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_NON_SECURE_REDIRECT_URL` |
+| **OAuth app policy alert** | `ALERT_CABINET_APP_PERMISSION` |
+| **Suspicious activity alert** | `ALERT_SUSPICIOUS_ACTIVITY` |
+| **Suspicious cloud use alert** | `ALERT_DISCOVERY_ANOMALY_DETECTION` |
+| **Suspicious OAuth app name** | `ALERT_CABINET_APP_PERMISSION_ANOMALY_SUSPICIOUS_APP_NAME` |
+| **System alert app connector error** | `ALERT_MANAGEMENT_DISCONNECTED_API` |
+| **System alert Cloud Discovery automatic log upload error** | `ALERT_MANAGEMENT_LOG_COLLECTOR_LOW_RATE` |
+| **System alert Cloud Discovery log-processing error** | `ALERT_MANAGEMENT_LOG_COLLECTOR_CONSTANTLY_FAILED_PARSING` |
+| **System alert ICAP connector error** | `ALERT_MANAGEMENT_DLP_CONNECTOR_ERROR` |
+| **System alert SIEM agent error** | `ALERT_MANAGEMENT_DISCONNECTED_SIEM` |
+| **System alert SIEM agent notifications** | `ALERT_MANAGEMENT_NOTIFICATIONS_SIEM` |
+| **Unusual region for cloud resource** | `MCAS_ALERT_ANUBIS_DETECTION_UNCOMMON_CLOUD_REGION` |
+|
+
+## Next steps
+
+- [Connect Microsoft 365 Defender](connect-microsoft-365-defender.md) to Azure Sentinel.
+- Learn more about [Azure Sentinel](overview.md), [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender), and [Cloud App Security](/cloud-app-security/what-is-cloud-app-security).
sentinel Store Logs In Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/store-logs-in-azure-data-explorer.md
ms.devlang: na
na Previously updated : 04/21/2021 Last updated : 04/25/2021
Azure Sentinel provides full SIEM and SOAR capabilities, quick deployment and co
If you only need to access specific tables occasionally, such as for periodic investigations or audits, you may consider that retaining your data in Azure Sentinel is no longer cost-effective. At this point, we recommend storing data in ADX, which costs less, but still enables you to explore using the same KQL queries that you run in Azure Sentinel.
-You can access the data in ADX directly from Azure Sentinel using the [Log Analytics ADX proxy feature](//azure/azure-monitor/logs/azure-monitor-data-explorer-proxy). To do so, use cross cluster queries in your log search or workbooks.
+You can access the data in ADX directly from Azure Sentinel using the [Log Analytics ADX proxy feature](//azure/azure-monitor/logs/azure-monitor-data-explorer-proxy). To do so, use cross cluster queries in your log search or workbooks.
> [!IMPORTANT] > Core SIEM capabilities, including Analytic rules, UEBA, and the investigation graph, do not support data stored in ADX.
You can access the data in ADX directly from Azure Sentinel using the [Log Analy
> [!NOTE] > Integrating with ADX can also enable you to have control and granularity in your data. For more information, see [Design considerations](#design-considerations).
->
+>
## Send data directly to Azure Sentinel and ADX in parallel You may want to retain any data *with security value* in Azure Sentinel to use in detections, incident investigations, threat hunting, UEBA, and so on. Keeping this data in Azure Sentinel mainly benefits Security Operations Center (SOC) users, where typically, 3-12 months of storage are enough.
For more information about implementing this architecture option, see [Azure Dat
## Export data from Log Analytics into ADX
-Instead of sending your data directly to ADX, you can choose to export your data from Log Analytics into ADX via an ADX event hub or Azure Data Factory.
+Instead of sending your data directly to ADX, you can choose to export your data from Log Analytics into ADX via an Azure Event Hub or Azure Data Factory.
### Data export architecture
-The following image shows a sample flow of exported data through the Azure Monitor ingestion pipeline. Your data is directed to Log Analytics by default, but you can also configure it to export to an Azure Storage Account or event hub.
+The following image shows a sample flow of exported data through the Azure Monitor ingestion pipeline. Your data is directed to Log Analytics by default, but you can also configure it to export to an Azure Storage Account or Event Hub.
:::image type="content" source="media/store-logs-in-adx/export-data-from-azure-monitor.png" alt-text="Export data from Azure Monitor - architecture.":::
When configuring data for export, note the following considerations:
|Consideration | Details | ||| |**Scope of data exported** | Once export is configured for a specific table, all data sent to that table is exported, with no exception. Exporting a filtered subset of your data, or limiting the export to specific events, is not supported. |
-|**Location requirements** | Both the Azure Monitor / Azure Sentinel workspace, and the destination location (an Azure Storage Account or event hub) must be located in the same geographical region. |
+|**Location requirements** | Both the Azure Monitor / Azure Sentinel workspace, and the destination location (an Azure Storage Account or Event Hub) must be located in the same geographical region. |
|**Supported tables** | Not all tables are supported for export, such as custom log tables, which are not supported. <br><br>For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export) and the [list of supported tables](/azure/azure-monitor/logs/logs-data-export#supported-tables). | | | |
When configuring data for export, note the following considerations:
Use one of the following procedures to export data from Azure Sentinel into ADX: -- **Via an ADX event hub**. Export data from Log Analytics into an event hub, where you can ingest it into ADX. This method stores some data (the first X months) in both Azure Sentinel and ADX.
+- **Via an Azure Event Hub**. Export data from Log Analytics into an Event Hub, where you can ingest it into ADX. This method stores some data (the first X months) in both Azure Sentinel and ADX.
- **Via Azure Storage and Azure Data Factory**. Export your data from Log Analytics into Azure Blob Storage, and then use Azure Data Factory to run a periodic copy job that further exports the data into ADX. This method enables you to copy data with Azure Data Factory only when it nears its retention limit in Azure Sentinel / Log Analytics, avoiding duplication.
-### [ADX event hub](#tab/adx-event-hub)
+### [Azure Event Hub](#tab/adx-event-hub)
-This section describes how to export Azure Sentinel data from Log Analytics into an event hub, where you can ingest it into ADX. Similar to [sending data directly to Azure Sentinel and ADX in parallel](#send-data-directly-to-azure-sentinel-and-adx-in-parallel), this method includes some data duplication as the data is streamed into ADX as it arrives in Log Analytics.
+This section describes how to export Azure Sentinel data from Log Analytics into an Event Hub, where you can ingest it into ADX. Similar to [sending data directly to Azure Sentinel and ADX in parallel](#send-data-directly-to-azure-sentinel-and-adx-in-parallel), this method includes some data duplication as the data is streamed into ADX as it arrives in Log Analytics.
-The following image shows a sample flow of exported data into an event hub, from where it's ingested into ADX.
+The following image shows a sample flow of exported data into an Event Hub, from where it's ingested into ADX.
The architecture shown in the previous image provides the full Azure Sentinel SIEM experience, including incident management, visual investigations, threat hunting, advanced visualizations, UEBA, and more, for data that must be accessed frequently, every *X* months. At the same time, this architecture also enables you to query long-term data by accessing it directly in ADX, or via Azure Sentinel thanks to the ADX proxy feature. Queries to long-term data storage in ADX can be ported without any changes from Azure Sentinel to ADX.
+> [!NOTE]
+> When exporting multiple data tables into ADX via Event Hub, keep in mind that Log Analytics data export has limits on the maximum number of Event Hubs per namespace. For more information about data export, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal).
+>
+> For most customers, we recommend using the Event Hub Standard tier. Depending on the number of tables you need to export and the amount of traffic to those tables, you may need to use the Event Hub Dedicated tier. For more information, see the [Event Hub documentation](/azure/event-hubs/event-hubs-quotas).
+>
+ > [!TIP] > For more information about this procedure, see [Tutorial: Ingest and query monitoring data in Azure Data Explorer](/azure/data-explorer/ingest-data-no-code). >
-**To export data into ADX via an event hub**:
+**To export data into ADX via an Event Hub**:
-1. **Configure the Log Analytics data export to an event hub**. For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/platform/logs-data-export).
+1. **Configure the Log Analytics data export to an Event Hub**. For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/platform/logs-data-export).
1. **Create an ADX cluster and database**. For more information, see:
The architecture shown in the previous image provides the full Azure Sentinel SI
For more information, see [Ingest and query monitoring data in Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
-1. <a name="mapping"></a>**Create table mapping**. Map the JSON tables to define how records land in the raw events table as they come in from an event hub. For more information, see [Create the update policy for metric and log data](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
+1. <a name="mapping"></a>**Create table mapping**. Map the JSON tables to define how records land in the raw events table as they come in from an Event Hub. For more information, see [Create the update policy for metric and log data](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
1. **Create an update policy and attach it to the raw records table**. In this step, create a function, called an update policy, and attach it to the destination table so that the data is transformed at ingestion time.
The architecture shown in the previous image provides the full Azure Sentinel SI
> This step is required only when you want to have data tables in ADX with the same schema and format as in Azure Sentinel. >
- For more information, see [Connect an event hub to Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs).
+ For more information, see [Connect an Event Hub to Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs).
-1. **Create a data connection between the event hub and the raw data table in ADX**. Configure ADX with details of how to export the data into the event hub.
+1. **Create a data connection between the Event Hub and the raw data table in ADX**. Configure ADX with details of how to export the data into the Event Hub.
Use the instructions in the [Azure Data Explorer documentation](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs) and specify the following details:
The following image shows a sample flow of exported data into an Azure Storage,
**To export data into ADX via an Azure Storage and Azure Data Factory**:
-1. **Configure the Log Analytics data export to an event hub**. For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal#enable-data-export).
+1. **Configure the Log Analytics data export to an Event Hub**. For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal#enable-data-export).
1. **Create an ADX cluster and database**. For more information, see:
The following image shows a sample flow of exported data into an Azure Storage,
For more information, see [Ingest and query monitoring data in Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
-1. <a name="mapping"></a>**Create table mapping**. Map the JSON tables to define how records land in the raw events table as they come in from an event hub. For more information, see [Create the update policy for metric and log data](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
+1. <a name="mapping"></a>**Create table mapping**. Map the JSON tables to define how records land in the raw events table as they come in from an Event Hub. For more information, see [Create the update policy for metric and log data](/azure/data-explorer/ingest-data-no-code?tabs=diagnostic-metrics).
1. **Create an update policy and attach it to the raw records table**. In this step, create a function, called an update policy, and attach it to the destination table so that the data is transformed at ingestion time.
The following image shows a sample flow of exported data into an Azure Storage,
> This step is required only when you want to have data tables in ADX with the same schema and format as in Azure Sentinel. >
- For more information, see [Connect an event hub to Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs).
+ For more information, see [Connect an Event Hub to Azure Data Explorer](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs).
-1. **Create a data connection between the event hub and the raw data table in ADX**. Configure ADX with details of how to export the data into the event hub.
+1. **Create a data connection between the Event Hub and the raw data table in ADX**. Configure ADX with details of how to export the data into the Event Hub.
Use the instructions in the [Azure Data Explorer documentation](/azure/data-explorer/ingest-data-no-code?tabs=activity-logs) and specify the following details:
When storing your Azure Sentinel data in ADX, consider the following elements:
|**Retention** | In ADX, you can configure when data is removed from a database or an individual table, which is also an important part of limiting storage costs. <br><br> For more information, see [Retention policy](/azure/data-explorer/kusto/management/retentionpolicy). | |**Security** | Several ADX settings can help you protect your data, such as identity management, encryption, and so on. Specifically for role-based access control (RBAC), ADX can be configured to restrict access to databases, tables, or even rows within a table. For more information, see [Security in Azure Data Explorer](/azure/data-explorer/security) and [Row level security](/azure/data-explorer/kusto/management/rowlevelsecuritypolicy).| |**Data sharing** | ADX allows you to make pieces of data available to other parties, such as partners or vendors, and even buy data from other parties. For more information, see [Use Azure Data Share to share data with Azure Data Explorer](/azure/data-explorer/data-share). |
-| **Other cost components** | Consider the other cost components for the following methods: <br><br>**Exporting data via an ADX event hub**: <br>- Log Analytics data export costs, charged per exported GBs. <br>- Event hub costs, charged by throughput unit. <br><br>**Export data via Azure Storage and Azure Data Factory**: <br>- Log Analytics data export, charged per exported GBs. <br>- Azure Storage, charged by GBs stored. <br>- Azure Data Factory, charged per copy of activities run.
+| **Other cost components** | Consider the other cost components for the following methods: <br><br>**Exporting data via an Azure Event Hub**: <br>- Log Analytics data export costs, charged per exported GBs. <br>- Event hub costs, charged by throughput unit. <br><br>**Export data via Azure Storage and Azure Data Factory**: <br>- Log Analytics data export, charged per exported GBs. <br>- Azure Storage, charged by GBs stored. <br>- Azure Data Factory, charged per copy of activities run.
| | | ## Next steps
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/duplicate-detection.md
In scenarios where client code is unable to resubmit a message with the same *Me
Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Auto Forward https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/enable-auto-forward.md
To **create a subscription for a topic with auto forwarding enabled**, set `forw
} ```
+## .NET
+
+### Azure.Messaging.ServiceBus (latest)
+You can enable the auto forwarding feature by setting [CreateQueueOptions.ForwardTo](/dotnet/api/azure.messaging.servicebus.administration.createqueueoptions.forwardto) or [CreateSubscriptionOptions.ForwardTo](/dotnet/api/azure.messaging.servicebus.administration.createsubscriptionoptions.forwardto), and then by using the [CreateQueueAsync](/dotnet/api/azure.messaging.servicebus.administration.servicebusadministrationclient.createqueueasync#Azure_Messaging_ServiceBus_Administration_ServiceBusAdministrationClient_CreateQueueAsync_Azure_Messaging_ServiceBus_Administration_CreateQueueOptions_System_Threading_CancellationToken_) or [CreateSubscriptionAsync](/dotnet/api/azure.messaging.servicebus.administration.servicebusadministrationclient.createsubscriptionasync#Azure_Messaging_ServiceBus_Administration_ServiceBusAdministrationClient_CreateSubscriptionAsync_Azure_Messaging_ServiceBus_Administration_CreateSubscriptionOptions_System_Threading_CancellationToken_) methods that take `CreateQueueOptions` or `CreateSubscriptionOptions` parameters.
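For reference, the call pattern looks roughly like the following minimal C# sketch; the connection string, queue name, and destination topic name are placeholders, not values from the article above.

```csharp
using Azure.Messaging.ServiceBus.Administration;

// Placeholder connection string and entity names, for illustration only.
var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create the source queue with auto forwarding to an existing destination topic.
var queueOptions = new CreateQueueOptions("source-queue")
{
    ForwardTo = "destination-topic"
};
await adminClient.CreateQueueAsync(queueOptions);
```

The same pattern applies to subscriptions: set `CreateSubscriptionOptions.ForwardTo` and call `CreateSubscriptionAsync`.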
+
+### Microsoft.Azure.ServiceBus (legacy)
+You can enable autoforwarding by setting the [QueueDescription.ForwardTo](/dotnet/api/microsoft.servicebus.messaging.queuedescription) or [SubscriptionDescription.ForwardTo](/dotnet/api/microsoft.servicebus.messaging.subscriptiondescription) for the source, as in the following example:
+
+```csharp
+SubscriptionDescription srcSubscription = new SubscriptionDescription (srcTopic, srcSubscriptionName);
+srcSubscription.ForwardTo = destTopic;
+namespaceManager.CreateSubscription(srcSubscription);
+```
+
+## Java
+
+### azure-messaging-servicebus (latest)
+You can enable the auto forwarding feature by using the [CreateQueueOptions.setForwardTo(String forwardTo)](/java/api/com.azure.messaging.servicebus.administration.models.createqueueoptions.setforwardto) method or the [CreateSubscriptionOptions.setForwardTo(String forwardTo)](/java/api/com.azure.messaging.servicebus.administration.models.createsubscriptionoptions.setforwardto), and then by using the [createQueue](/java/api/com.azure.messaging.servicebus.administration.servicebusadministrationclient.createqueue#com_azure_messaging_servicebus_administration_ServiceBusAdministrationClient_createQueue_java_lang_String_com_azure_messaging_servicebus_administration_models_CreateQueueOptions_) method or the [createSubscription](/java/api/com.azure.messaging.servicebus.administration.servicebusadministrationclient.createsubscription#com_azure_messaging_servicebus_administration_ServiceBusAdministrationClient_createSubscription_java_lang_String_java_lang_String_com_azure_messaging_servicebus_administration_models_CreateSubscriptionOptions_) method that take `CreateQueueOptions` or `CreateSubscriptionOptions` parameters.
+
+### azure-servicebus (legacy)
+You can enable autoforwarding by using the [QueueDescription.setForwardTo(String forwardTo)](/java/api/com.microsoft.azure.servicebus.management.queuedescription.setforwardto#com_microsoft_azure_servicebus_management_QueueDescription_setForwardTo_java_lang_String_) or [SubscriptionDescription.setForwardTo(String forwardTo)](/java/api/com.microsoft.azure.servicebus.management.subscriptiondescription.setforwardto) for the source.
+ ## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Dead Letter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/enable-dead-letter.md
To **create a subscription for a topic with dead lettering on message expiration
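The same setting can also be applied from code. Here's a minimal C# sketch using the `Azure.Messaging.ServiceBus` administration client; the connection string and entity names are placeholders.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create a subscription that moves expired messages to its dead-letter subqueue.
var subscriptionOptions = new CreateSubscriptionOptions("<topic-name>", "<subscription-name>")
{
    DeadLetteringOnMessageExpiration = true
};
await adminClient.CreateSubscriptionAsync(subscriptionOptions);
```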
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/enable-duplicate-detection.md
To **create a topic with duplicate detection enabled**, set `requiresDuplicateDe
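For comparison, the equivalent setting in code is `RequiresDuplicateDetection` on the create options. A minimal C# sketch with placeholder names follows; the 10-minute history window is an illustrative value, not taken from the article.

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create a topic with duplicate detection over a 10-minute history window.
var topicOptions = new CreateTopicOptions("<topic-name>")
{
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};
await adminClient.CreateTopicAsync(topicOptions);
```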
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/enable-message-sessions.md
To **create a subscription for a topic with message sessions enabled**, set `req
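In code, the corresponding flag is `RequiresSession` on the create options. A minimal C# sketch with placeholder names:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create a subscription that requires session-aware receivers.
var subscriptionOptions = new CreateSubscriptionOptions("<topic-name>", "<subscription-name>")
{
    RequiresSession = true
};
await adminClient.CreateSubscriptionAsync(subscriptionOptions);
```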
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Enable Partitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/enable-partitions.md
To **create a topic with partitioning enabled**, set `enablePartitioning`
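The matching flag in code is `EnablePartitioning` on the create options. A minimal C# sketch with placeholder names:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create a partitioned topic.
var topicOptions = new CreateTopicOptions("<topic-name>")
{
    EnablePartitioning = true
};
await adminClient.CreateTopicAsync(topicOptions);
```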
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Message Browsing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-browsing.md
When called repeatedly, the peek operation enumerates all messages in the queue
You can also pass a SequenceNumber to a peek operation. It will be used to determine where to start peeking from. You can make subsequent calls to the peek operation without specifying the parameter to enumerate further. ## Next steps
-Try the samples in the language of your choice to explore the peek or message browsing feature:
+Try the samples in the language of your choice to explore Azure Service Bus features.
-- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/) - **Peek at a message** sample-- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - **receive_peek.py** sample-- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - **browseMessages.js** sample
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/) - **Peek at a message** sample
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - **receive_peek.py** sample
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - **browseMessages.js** sample
- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/) - **browseMessages.ts** sample-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) - See peek methods on receiver classes in the [reference documentation](/dotnet/api/azure.messaging.servicebus). Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - **Message Browsing (Peek)** sample -- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse) - **Message Browse** sample.
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - **Message Browsing (Peek)** sample
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - **Message Browse** sample.
+
service-bus-messaging Message Counters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-counters.md
The returned `MessageCountDetails` object has the following properties: `ActiveM
Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-deferral.md
To retrieve a deferred message, its owner is responsible for remembering the seq
## Next steps Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) - See the **Settling Messages** sample.
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - see the **receive_deferred_message_queue.py** sample. - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - see the **advanced/deferral.js** sample. - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/) - see the **advanced/deferral.ts** sample. -- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) - See the **Settling Messages** sample. Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Deferral** sample. -- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Deferral** sample.
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-sessions.md
You can enable message sessions while creating a queue using Azure portal, Power
Try the samples in the language of your choice to explore Azure Service Bus features. -- [Azure Service Bus client library samples for Java](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/) - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-- [Azure.Messaging.ServiceBus samples for .NET](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) Find samples for the older .NET and Java client libraries below:-- [Microsoft.Azure.ServiceBus samples for .NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [azure-servicebus samples for Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)
[1]: ./media/message-sessions/sessions.png
service-bus-messaging Service Bus Async Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-async-messaging.md
Title: Service Bus asynchronous messaging | Microsoft Docs description: Learn how Azure Service Bus supports asynchronism via a store and forward mechanism with queues, topics, and subscriptions. Previously updated : 06/23/2020 Last updated : 04/23/2021 # Asynchronous messaging patterns and high availability
There are several ways to handle message and entity issues, and there are guidel
Service Bus contains a number of mitigations for these issues. The following sections discuss each issue and their respective mitigations. ### Throttling
-With Service Bus, throttling enables cooperative message rate management. Each individual Service Bus node houses many entities. Each of those entities makes demands on the system in terms of CPU, memory, storage, and other facets. When any of these facets detects usage that exceeds defined thresholds, Service Bus can deny a given request. The caller receives a [ServerBusyException][ServerBusyException] and retries after 10 seconds.
+With Service Bus, throttling enables cooperative message rate management. Each individual Service Bus node houses many entities. Each of those entities makes demands on the system in terms of CPU, memory, storage, and other facets. When any of these facets detects usage that exceeds defined thresholds, Service Bus can deny a given request. The caller receives a server busy exception and retries after 10 seconds.
As a mitigation, the code must read the error and halt any retries of the message for at least 10 seconds. Since the error can happen across pieces of the customer application, it is expected that each piece independently executes the retry logic. The code can reduce the probability of being throttled by enabling partitioning on a queue or topic.
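For illustration only, here's a minimal sketch of that back-off using the current Azure.Messaging.ServiceBus .NET client; the sender, message, and the 10-second wait are assumptions, and the client's built-in retry policy may already cover part of this for you.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

static async Task SendWithBackoffAsync(ServiceBusSender sender, ServiceBusMessage message)
{
    while (true)
    {
        try
        {
            await sender.SendMessageAsync(message);
            return;
        }
        catch (ServiceBusException ex) when (ex.Reason == ServiceBusFailureReason.ServiceBusy)
        {
            // Throttled: stop retrying this message for at least 10 seconds, then try again.
            await Task.Delay(TimeSpan.FromSeconds(10));
        }
    }
}
```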
Other components within Azure can occasionally have service issues. For example,
### Service Bus failure on a single subsystem With any application, circumstances can cause an internal component of Service Bus to become inconsistent. When Service Bus detects this, it collects data from the application to aid in diagnosing what happened. Once the data is collected, the application is restarted in an attempt to return it to a consistent state. This process happens fairly quickly, and results in an entity appearing to be unavailable for up to a few minutes, though typical down times are much shorter.
-In these cases, the client application generates a [System.TimeoutException][System.TimeoutException] or [MessagingException][MessagingException] exception. Service Bus contains a mitigation for this issue in the form of automated client retry logic. Once the retry period is exhausted and the message is not delivered, you can explore using other mentioned in the article on [handling outages and disasters][handling outages and disasters].
+In these cases, the client application generates a timeout exception or a messaging exception. Service Bus contains a mitigation for this issue in the form of automated client retry logic. Once the retry period is exhausted and the message is not delivered, you can explore the other options mentioned in the article on [handling outages and disasters][handling outages and disasters].
## Next steps Now that you've learned the basics of asynchronous messaging in Service Bus, read more details about [handling outages and disasters][handling outages and disasters].
-[ServerBusyException]: /dotnet/api/microsoft.servicebus.messaging.serverbusyexception
-[System.TimeoutException]: /dotnet/api/system.timeoutexception
-[MessagingException]: /dotnet/api/microsoft.servicebus.messaging.messagingexception
[Best practices for insulating applications against Service Bus outages and disasters]: service-bus-outages-disasters.md
-[Microsoft.ServiceBus.Messaging.MessagingFactory]: /dotnet/api/microsoft.servicebus.messaging.messagingfactory
-[MessageReceiver]: /dotnet/api/microsoft.servicebus.messaging.messagereceiver
-[QueueClient]: /dotnet/api/microsoft.servicebus.messaging.queueclient
-[TopicClient]: /dotnet/api/microsoft.servicebus.messaging.topicclient
-[Microsoft.ServiceBus.Messaging.PairedNamespaceOptions]: /dotnet/api/microsoft.servicebus.messaging.pairednamespaceoptions
-[MessagingFactory]: /dotnet/api/microsoft.servicebus.messaging.messagingfactory
-[SendAvailabilityPairedNamespaceOptions]: /dotnet/api/microsoft.servicebus.messaging.sendavailabilitypairednamespaceoptions
-[NamespaceManager]: /dotnet/api/microsoft.servicebus.namespacemanager
-[PairNamespaceAsync]: /dotnet/api/microsoft.servicebus.messaging.messagingfactory
-[EnableSyphon]: /dotnet/api/microsoft.servicebus.messaging.sendavailabilitypairednamespaceoptions
-[System.TimeSpan.Zero]: /dotnet/api/system.timespan.zero
-[IsTransient]: /dotnet/api/microsoft.servicebus.messaging.messagingexception
-[UnauthorizedAccessException]: /dotnet/api/system.unauthorizedaccessexception
-[BacklogQueueCount]: /dotnet/api/microsoft.servicebus.messaging.sendavailabilitypairednamespaceoptions
[handling outages and disasters]: service-bus-outages-disasters.md
service-bus-messaging Service Bus Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-authentication-and-authorization.md
To access an entity, the client requires a SAS token generated using a specific
SAS authentication support for Service Bus is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a shared access authorization rule. All APIs that accept a connection string as a parameter include support for SAS connection strings.
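As a rough sketch (not part of the article's own samples), a SAS connection string with hypothetical namespace, rule, key, and queue names can be passed directly to the current .NET client:

```csharp
using Azure.Messaging.ServiceBus;

// Hypothetical namespace, rule name, key, and queue name.
string connectionString =
    "Endpoint=sb://<your-namespace>.servicebus.windows.net/;" +
    "SharedAccessKeyName=RootManageSharedAccessKey;" +
    "SharedAccessKey=<key>";

// The client accepts the SAS connection string directly.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("<queue-name>");
await sender.SendMessageAsync(new ServiceBusMessage("Hello"));
```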
-> [!IMPORTANT]
-> If you are using Azure Active Directory Access Control (also known as Access Control Service or ACS) with Service Bus, note that the support for this method is now limited and you should [migrate your application to use SAS](service-bus-migrate-acs-sas.md) or use OAuth 2.0 authentication with Azure AD (recommended).For more information about deprecation of ACS, see [this blog post](/archive/blogs/servicebus/upcoming-changes-to-acs-enabled-namespaces).
## Next steps For more information about authenticating with Azure AD, see the following articles:
service-bus-messaging Service Bus Auto Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-auto-forwarding.md
Title: Auto-forwarding Azure Service Bus messaging entities description: This article describes how to chain an Azure Service Bus queue or subscription to another queue or topic. Previously updated : 01/20/2021 Last updated : 04/23/2021
The Service Bus *autoforwarding* feature enables you to chain a queue or subscri
> [!NOTE] > The basic tier of Service Bus doesn't support the autoforwarding feature. The standard and premium tiers support the feature. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
-## Using autoforwarding
-
-You can enable autoforwarding by setting the [QueueDescription.ForwardTo][QueueDescription.ForwardTo] or [SubscriptionDescription.ForwardTo][SubscriptionDescription.ForwardTo] properties on the [QueueDescription][QueueDescription] or [SubscriptionDescription][SubscriptionDescription] objects for the source, as in the following example:
-
-```csharp
-SubscriptionDescription srcSubscription = new SubscriptionDescription (srcTopic, srcSubscriptionName);
-srcSubscription.ForwardTo = destTopic;
-namespaceManager.CreateSubscription(srcSubscription));
-```
- The destination entity must exist at the time the source entity is created. If the destination entity does not exist, Service Bus returns an exception when asked to create the source entity.
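As a minimal sketch only, the same kind of forwarding rule can be created with the current Azure.Messaging.ServiceBus.Administration client; the connection string and the topic, subscription, and destination names below are placeholders:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// The destination entity (here a topic named destTopic) must already exist.
var options = new CreateSubscriptionOptions("srcTopic", "srcSubscription")
{
    ForwardTo = "destTopic"
};
await adminClient.CreateSubscriptionAsync(options);
```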
+## Scenarios
+
+### Scale out an individual topic
You can use autoforwarding to scale out an individual topic. Service Bus limits the [number of subscriptions on a given topic](service-bus-quotas.md) to 2,000. You can accommodate additional subscriptions by creating second-level topics. Even if you are not bound by the Service Bus limitation on the number of subscriptions, adding a second level of topics can improve the overall throughput of your topic. ![Diagram of an autoforwarding scenario showing a message processed through an Orders Topic that can branch to any of three second-level Orders Topics.][0]
+### Decouple message senders from receivers
You can also use autoforwarding to decouple message senders from receivers. For example, consider an ERP system that consists of three modules: order processing, inventory management, and customer relations management. Each of these modules generates messages that are enqueued into a corresponding topic. Alice and Bob are sales representatives that are interested in all messages that relate to their customers. To receive those messages, Alice and Bob each create a personal queue and a subscription on each of the ERP topics that automatically forward all messages to their queue. ![Diagram of an autoforwarding scenario showing three processing modules sending messages through three corresponding topics to two separate queues.][1]
To create a subscription that is chained to another queue or topic, the creator
Don't create a chain that exceeds 4 hops. Messages that exceed 4 hops are dead-lettered. ## Next steps
+To learn how to enable or disable autoforwarding in different ways (Azure portal, PowerShell, CLI, Azure Resource Manager template, and so on), see [Enable auto forwarding for queues and subscriptions](enable-auto-forward.md).
-For detailed information about autoforwarding, see the following reference topics:
-
-* [ForwardTo][QueueDescription.ForwardTo]
-* [QueueDescription][QueueDescription]
-* [SubscriptionDescription][SubscriptionDescription]
-
-To learn more about Service Bus performance improvements, see
-
-* [Best Practices for performance improvements using Service Bus Messaging](service-bus-performance-improvements.md)
-* [Partitioned messaging entities][Partitioned messaging entities].
-[QueueDescription.ForwardTo]: /dotnet/api/microsoft.servicebus.messaging.queuedescription.forwardto#Microsoft_ServiceBus_Messaging_QueueDescription_ForwardTo
-[SubscriptionDescription.ForwardTo]: /dotnet/api/microsoft.servicebus.messaging.subscriptiondescription.forwardto#Microsoft_ServiceBus_Messaging_SubscriptionDescription_ForwardTo
-[QueueDescription]: /dotnet/api/microsoft.servicebus.messaging.queuedescription
-[SubscriptionDescription]: /dotnet/api/microsoft.servicebus.messaging.queuedescription
[0]: ./media/service-bus-auto-forwarding/IC628631.gif [1]: ./media/service-bus-auto-forwarding/IC628632.gif [Partitioned messaging entities]: service-bus-partitioning.md
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-managed-service-identity.md
Title: Managed identities for Azure resources with Service Bus description: This article describes how to use managed identities to access Azure Service Bus entities (queues, topics, and subscriptions). Previously updated : 01/21/2021 Last updated : 04/23/2021 # Authenticate a managed identity with Azure Active Directory to access Azure Service Bus resources
Once the application is created, follow these steps:
Once you've enabled this setting, a new service identity is created in your Azure Active Directory (Azure AD) and configured into the App Service host.
-> [!NOTE]
-> When you use a managed identity, the connection string should be in the format: `Endpoint=sb://<NAMESPACE NAME>.servicebus.windows.net/;Authentication=ManagedIdentity`.
-
-Now, assign this service identity to a role in the required scope in your Service Bus resources.
- ### To Assign Azure roles using the Azure portal
-To assign a role to a Service Bus namespace, navigate to the namespace in the Azure portal. Display the Access Control (IAM) settings for the resource, and follow these instructions to manage role assignments:
+Now, assign the service identity to a role in the required scope in your Service Bus resources. To assign a role to a Service Bus namespace, navigate to the namespace in the Azure portal. Display the Access Control (IAM) settings for the resource, and follow these instructions to manage role assignments:
> [!NOTE] > The following steps assign a service identity role to your Service Bus namespaces. You can follow the same steps to assign a role at other supported scopes (resource group and subscription).
Now, modify the default page of the ASP.NET application you created. You can use
The Default.aspx page is your landing page. The code can be found in the Default.aspx.cs file. The result is a minimal web application with a few entry fields, and with **send** and **receive** buttons that connect to Service Bus to either send or receive messages.
-Note how the [MessagingFactory](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) object is initialized. Instead of using the Shared Access Token (SAS) token provider, the code creates a token provider for the managed identity with the `var msiTokenProvider = TokenProvider.CreateManagedIdentityTokenProvider();` call. As such, there are no secrets to retain and use. The flow of the managed identity context to Service Bus and the authorization handshake are automatically handled by the token provider. It is a simpler model than using SAS.
+Note how the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object is initialized by using a constructor that takes a `TokenCredential`. `DefaultAzureCredential` derives from `TokenCredential` and can be passed here. As such, there are no secrets to retain and use. The flow of the managed identity context to Service Bus and the authorization handshake are automatically handled by the token credential. It is a simpler model than using SAS.
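A minimal sketch of that initialization, assuming the Azure.Identity package and placeholder namespace and queue names (this isn't the sample application's exact code):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// DefaultAzureCredential picks up the App Service managed identity at run time.
await using var client = new ServiceBusClient(
    "<your-namespace>.servicebus.windows.net",
    new DefaultAzureCredential());

ServiceBusSender sender = client.CreateSender("<queue-name>");
await sender.SendMessageAsync(new ServiceBusMessage("Sent with a managed identity"));
```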
After you make these changes, publish and run the application. You can obtain the correct publishing data easily by downloading and then importing a publishing profile in Visual Studio:
service-bus-messaging Service Bus Migrate Acs Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-migrate-acs-sas.md
- Title: Azure Service Bus - Migrate to Shared Access Signature authorization
-description: Learn about migrating from Azure Active Directory Access Control Service to Shared Access Signature authorization.
- Previously updated : 06/23/2020--
-# Service Bus - Migrate from Azure Active Directory Access Control Service to Shared Access Signature authorization
-
-Service Bus applications have previously had a choice of using two different authorization models: the [Shared Access Signature (SAS)](service-bus-sas.md) token model provided directly by Service Bus, and a federated model where the management of authorization rules is managed inside by the [Azure Active Directory](../active-directory/index.yml) Access Control Service (ACS), and tokens obtained from ACS are passed to Service Bus for authorizing access to the desired features.
-
-The ACS authorization model has long been superseded by [SAS authorization](service-bus-authentication-and-authorization.md) as the preferred model, and all documentation, guidance, and samples exclusively use SAS today. Moreover, it is no longer possible to create new Service Bus namespaces that are paired with ACS.
-
-SAS has the advantage in that it is not immediately dependent on another service, but can be used directly from a client without any intermediaries by giving the client access to the SAS rule name and rule key. SAS can also be easily integrated with an approach in which a client has to first pass an authorization check with another service and then is issued a token. The latter approach is similar to the ACS usage pattern, but enables issuing access tokens based on application-specific conditions that are difficult to express in ACS.
-
-For all existing applications that are dependent on ACS, we urge customers to migrate their applications to rely on SAS instead.
-
-## Migration scenarios
-
-ACS and Service Bus are integrated through the shared knowledge of a *signing key*. The signing key is used by an ACS namespace to sign authorization tokens, and it's used by Service Bus to verify that the token has been issued by the paired ACS namespace. The ACS namespace holds service identities and authorization rules. The authorization rules define which service identity or which token issued by an external identity provider gets which type of access to a part of the Service Bus namespace graph, in the form of a longest-prefix match.
-
-For example, an ACS rule might grant the **Send** claim on the path prefix `/` to a service identity, which means that a token issued by ACS based on that rule grants the client rights to send to all entities in the namespace. If the path prefix is `/abc`, the identity is restricted to sending to entities named `abc` or organized beneath that prefix. It is assumed that readers of this migration guidance are already familiar with these concepts.
-
-The migration scenarios fall into three broad categories:
-
-1. **Unchanged defaults**. Some customers use a [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) object, passing the automatically generated **owner** service identity and its secret key for the ACS namespace, paired with the Service Bus namespace, and do not add new rules.
-
-2. **Custom service identities with simple rules**. Some customers add new service identities and grant each new service identity **Send**, **Listen**, and **Manage** permissions for one specific entity.
-
-3. **Custom service identities with complex rules**. Very few customers have complex rule sets in which externally issued tokens are mapped to rights on Relay, or where a single service identity is assigned differentiated rights on several namespace paths through multiple rules.
-
-For assistance with the migration of complex rule sets, you can contact [Azure support](https://azure.microsoft.com/support/options/). The other two scenarios enable straightforward migration.
-
-### Unchanged defaults
-
-If your application has not changed ACS defaults, you can replace all [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) usage with a [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) object, and use the namespace preconfigured **RootManageSharedAccessKey** instead of the ACS **owner** account. Note that even with the ACS **owner** account, this configuration was (and still is) not generally recommended, because this account/rule provides full management authority over the namespace, including permission to delete any entities.
-
-### Simple rules
-
-If the application uses custom service identities with simple rules, the migration is straightforward in the case where an ACS service identity was created to provide access control on a specific queue. This scenario is often the case in SaaS-style solutions where each queue is used as a bridge to a tenant site or branch office, and the service identity is created for that particular site. In this case, the respective service identity can be migrated to a Shared Access Signature rule, directly on the queue. The service identity name can become the SAS rule name and the service identity key can become the SAS rule key. The rights of the SAS rule are then configured equivalent to the respectively applicable ACS rule for the entity.
-
-You can make this new and additional configuration of SAS in-place on any existing namespace that is federated with ACS, and the migration away from ACS is subsequently performed by using [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) instead of [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider). The namespace does not need to be unlinked from ACS.
-
-### Complex rules
-
-SAS rules are not meant to be accounts, but are named signing keys associated with rights. As such, scenarios in which the application creates many service identities and grants them access rights for several entities or the whole namespace still require a token-issuing intermediary. You can obtain guidance for such an intermediary by [contacting support](https://azure.microsoft.com/support/options/).
-
-## Next steps
-
-To learn more about Service Bus authentication, see the following topics:
-
-* [Service Bus authentication and authorization](service-bus-authentication-and-authorization.md)
-* [Service Bus authentication with Shared Access Signatures](service-bus-sas.md)
service-bus-messaging Service Bus Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-prefetch.md
Title: Azure Service Bus prefetch messages | Microsoft Docs description: Improve performance by prefetching Azure Service Bus messages. Messages are readily available for local retrieval before the application requests for them. Previously updated : 06/23/2020 Last updated : 04/23/2021 # Prefetch Azure Service Bus messages
+When you enable the *Prefetch* feature for any of the official Service Bus clients, the receiver acquires more messages than the application initially asked for, up to the specified prefetch count. Note that the JavaScript and TypeScript clients don't support this feature yet.
-When *Prefetch* is enabled in any of the official Service Bus clients, the receiver quietly acquires more messages, up to the [PrefetchCount](/dotnet/api/microsoft.azure.servicebus.queueclient.prefetchcount#Microsoft_Azure_ServiceBus_QueueClient_PrefetchCount) limit, beyond what the application initially asked for.
+As messages are returned to the application, the client acquires further messages in the background, to fill the prefetch buffer.
-A single initial [Receive](/dotnet/api/microsoft.servicebus.messaging.queueclient.receive) or [ReceiveAsync](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.receiveasync) call therefore acquires a message for immediate consumption that is returned as soon as available. The client then acquires further messages in the background, to fill the prefetch buffer.
+## Enabling Prefetch
+To enable the Prefetch feature, set the prefetch count of the queue or subscription client to a number greater than zero. Setting the value to zero turns off prefetch.
-## Enable prefetch
+# [.NET](#tab/dotnet)
+If you are using the latest Azure.Messaging.ServiceBus library, you can set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects.
-With .NET, you enable the Prefetch feature by setting the [PrefetchCount](/dotnet/api/microsoft.azure.servicebus.queueclient.prefetchcount#Microsoft_Azure_ServiceBus_QueueClient_PrefetchCount) property of a **MessageReceiver**, **QueueClient**, or **SubscriptionClient** to a number greater than zero. Setting the value to zero turns off prefetch.
+If you are using the older .NET client library for Service Bus (Microsoft.Azure.ServiceBus), you can set the prefetch count property on the [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver), [QueueClient](/dotnet/api/microsoft.azure.servicebus.queueclient.prefetchcount#Microsoft_Azure_ServiceBus_QueueClient_PrefetchCount), or the [SubscriptionClient](/dotnet/api/microsoft.azure.servicebus.subscriptionclient.prefetchcount).
+
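A minimal sketch of setting the prefetch count with the current .NET client; the connection string, queue name, and count of 20 are placeholders:

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

// Keep up to 20 messages prefetched into the local buffer.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>",
    new ServiceBusReceiverOptions { PrefetchCount = 20 });
```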
+# [Java](#tab/java)
+If you are using the latest azure-messaging-servicebus library, you can set the prefetch count on the receiver and processor client builders, for example through the `prefetchCount` setting on `ServiceBusClientBuilder.ServiceBusReceiverClientBuilder` and `ServiceBusClientBuilder.ServiceBusProcessorClientBuilder`.
-You can easily add this setting to the receive-side of the [QueuesGettingStarted](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/QueuesGettingStarted) or [ReceiveLoop](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/ReceiveLoop) samples' settings to see the effect in those contexts.
+If you are using the older Java client library for Service Bus (azure-servicebus), you can set the prefetch count property on the [MessageReceiver](/java/api/com.microsoft.azure.servicebus.imessagereceiver.setprefetchcount#com_microsoft_azure_servicebus_IMessageReceiver_setPrefetchCount_int_), [QueueClient](/java/api/com.microsoft.azure.servicebus.queueclient.setprefetchcount#com_microsoft_azure_servicebus_QueueClient_setPrefetchCount_int_) or the [SubscriptionClient](/java/api/com.microsoft.azure.servicebus.subscriptionclient.setprefetchcount#com_microsoft_azure_servicebus_SubscriptionClient_setPrefetchCount_int_).
+
+# [Python](#tab/python)
-While messages are available in the prefetch buffer, any subsequent **Receive**/**ReceiveAsync** calls are immediately fulfilled from the buffer, and the buffer is replenished in the background as space becomes available. If there are no messages available for delivery, the receive operation empties the buffer and then waits or blocks, as expected.
+You can set **prefetch_count** on the [azure.servicebus.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.servicebusreceiver) or [azure.servicebus.aio.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.aio.servicebusreceiver).
-Prefetch also works in the same way with the [OnMessage](/dotnet/api/microsoft.servicebus.messaging.queueclient.onmessage) and [OnMessageAsync](/dotnet/api/microsoft.servicebus.messaging.queueclient.onmessageasync) APIs.
++
+> [!NOTE]
+> The JavaScript SDK doesn't support the **Prefetch** feature yet.
-## If it is faster, why is Prefetch not the default option?
+While messages are available in the prefetch buffer, any subsequent receive calls are immediately fulfilled from the buffer. The buffer is replenished in the background as space becomes available. If there are no messages available for delivery, the receive operation empties the buffer and then waits or blocks, as expected.
-Prefetch speeds up the message flow by having a message readily available for local retrieval when and before the application asks for one. This throughput gain is the result of a trade-off that the application author must make explicitly:
+## Why is Prefetch not the default option?
+Prefetch speeds up the message flow by having a message readily available for local retrieval before the application asks for one. This throughput gain is the result of a trade-off that the application author must make explicitly:
-With the [ReceiveAndDelete](/dotnet/api/microsoft.servicebus.messaging.receivemode) receive mode, all messages that are acquired into the prefetch buffer are no longer available in the queue, and only reside in the in-memory prefetch buffer until they are received into the application through the **Receive**/**ReceiveAsync** or **OnMessage**/**OnMessageAsync** APIs. If the application terminates before the messages are received into the application, those messages are irrecoverably lost.
+With the [receive-and-delete](message-transfers-locks-settlement.md#receiveanddelete) mode, all messages that are acquired into the prefetch buffer are no longer available in the queue. The messages stay only in the in-memory prefetch buffer until they're received into the application. If the application ends before the messages are received into the application, those messages are irrecoverable (lost).
-In the [PeekLock](/dotnet/api/microsoft.servicebus.messaging.receivemode#Microsoft_ServiceBus_Messaging_ReceiveMode_PeekLock) receive mode, messages fetched into the Prefetch buffer are acquired into the buffer in a locked state, and have the timeout clock for the lock ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while residing in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle.
+In the [peek-lock](message-transfers-locks-settlement.md#peeklock) receive mode, messages fetched into the prefetch buffer are acquired into the buffer in a locked state. They have the timeout clock for the lock ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while staying in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle.
-The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it cannot complete it due to a lock expiration. The application can check the [LockedUntilUtc](/dotnet/api/microsoft.azure.servicebus.message.systempropertiescollection.lockeduntilutc) property (which is subject to clock skew between the broker and local machine clock). If the message lock has expired, the application must ignore the message; no API call on or with the message should be made. If the message is not expired but expiration is imminent, the lock can be renewed and extended by another default lock period by calling [message.RenewLock()](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync#Microsoft_Azure_ServiceBus_Core_MessageReceiver_RenewLockAsync_System_String_)
+The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it can't complete the message because of a lock expiration. The application can check the `LockedUntilUtc` property (which is subject to clock skew between the broker and local machine clock). If the message lock has expired, the application must ignore the message and shouldn't make any API call on the message. If the message isn't expired but expiration is imminent, the lock can be renewed and extended by another default lock period.
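A sketch of that check with the current .NET client, assuming a receiver in peek-lock mode; the 10-second margin is an arbitrary example value, not guidance from this article:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

static async Task HandleLockAsync(ServiceBusReceiver receiver, ServiceBusReceivedMessage message)
{
    if (message.LockedUntil <= DateTimeOffset.UtcNow)
    {
        // The lock has already expired: ignore the message and make no API call on it.
        return;
    }

    if (message.LockedUntil - DateTimeOffset.UtcNow < TimeSpan.FromSeconds(10))
    {
        // Expiration is imminent: extend the lock by another default lock duration.
        await receiver.RenewMessageLockAsync(message);
    }

    // ...process the message, then settle (complete or abandon) it...
}
```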
-If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. That might cause it to be fetched into the prefetch buffer; placed at the end. If the prefetch buffer cannot usually be worked through during the message expiration, this causes messages to be repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded.
+If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. It might cause the message to be fetched into the prefetch buffer and placed at the end. If the prefetch buffer can't usually be worked through during the message expiration, messages are repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded.
-If you need a high degree of reliability for message processing, and processing takes significant work and time, it is recommended that you use the prefetch feature conservatively, or not at all.
+If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all.
If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits.
-The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. At the same time, the lock timeout ought not to be so long that messages can exceed their maximum [TimeToLive](/dotnet/api/microsoft.azure.servicebus.message.timetolive#Microsoft_Azure_ServiceBus_Message_TimeToLive) when they are accidentally dropped, thus requiring their lock to expire before being redelivered.
+The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. At the same time, the lock timeout shouldn't be so long that messages can exceed their maximum time to live when they're accidentally dropped, and so requiring their lock to expire before being redelivered.
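As a rough, purely illustrative calculation: with a prefetch count of 20 and an expected processing time of about 2 seconds per message, the lock duration should exceed (20 + 1) × 2 = 42 seconds, while still staying comfortably below the messages' time to live.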
## Next steps
-To learn more about Service Bus messaging, see the following topics:
+Try the samples in the language of your choice to explore Azure Service Bus features.
+
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
-* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
-* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+Find samples for the older .NET and Java client libraries below:
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Prefetch** sample.
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - See the **Prefetch** sample.
storage File Sync Deprovision Server Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-deprovision-server-endpoint.md
+
+ Title: Deprovision your Azure File Sync server endpoint | Microsoft Docs
+description: Guidance on how to deprovision your Azure File Sync server endpoint based on your use case
+++ Last updated : 4/23/2021++++
+# Deprovision your Azure File Sync server endpoint
+
+Before you deprovision your server endpoint, there are a few steps you should take to maintain data integrity and availability. This article covers several methods of deprovisioning and the appropriate guidance, ordered by scenario. Follow the steps for the use case that best applies to you.
+
+If it's okay to permanently lose the data that you're currently syncing, you can skip these steps and deprovision your server endpoint directly.
+
+> [!Important]
+> Don't try to resolve sync issues by deprovisioning a server endpoint. For troubleshooting help, see [Troubleshooting Azure File Sync](./file-sync-troubleshoot.md). Permanent data loss may occur if you delete your server endpoint without getting either the server or the cloud side fully in sync with the other.
+
+## Scenario 1: You intend to delete your server endpoint and stop using your local server/VM
+
+The goal here is to ensure that your data is up-to-date in your cloud endpoint. To have your complete set of files up-to-date on your server endpoint instead, see [Scenario 2: You intend to delete your server endpoint and stop using this specific Azure file share](#scenario-2-you-intend-to-delete-your-server-endpoint-and-stop-using-this-specific-azure-file-share).
+
+Some use cases that fall in this category include:
+- Migrating to an Azure file share
+- Going serverless
+- Discontinuing use of a specific server endpoint path while keeping the rest of the sync group intact
+
+For this scenario, there are three steps to take before deleting your server endpoint: remove user access, initiate a special VSS upload session, and wait for a final sync session to complete.
+
+### Remove user access to your server endpoint
+
+Before you deprovision your server endpoint, you need to ensure that all changes from the server can sync to the cloud. The first step in allowing the cloud to catch up is to remove the opportunity for additional changes to files and folders on the server endpoint.
+
+Removing access means downtime. To reduce downtime, you can also consider redirecting user access to your cloud endpoint.
+
+Note the date and time you removed user access for your own records, and then move on to the next section.
+
+### Initiate a special Volume Snapshot Service (VSS) upload session
+
+Each day, Azure File Sync creates a temporary VSS snapshot on the server to sync files with open handles. To ensure that your final sync session uploads the latest data and to reduce per-item errors, initiate a special VSS upload session. This also triggers a sync upload session that begins once the snapshot is taken.
+
+To do so, open **Task Scheduler** on your local server, navigate to **Microsoft\StorageSync**, right-click the **VssSyncScheduledTask** task and select **Run**.
+
+> [!Important]
+> Write down the date and time you complete this step. You will need it in the next section.
+
+![A screenshot of scheduling a VSS upload session.](media/storage-sync-deprovision-server-endpoint/vss-task-scheduler.png)
+
+### Wait for a final sync upload session to complete
+
+To ensure that the latest data is in the cloud, you need to wait for the final sync upload session to complete.
+
+To check the status of the sync session, open the **Event Viewer** on your local server. Navigate to the telemetry event log **(Applications and Services\Microsoft\FileSync\Agent)**. Ensure that you see a 9102 event with 'sync direction' = upload, 'HResult' = 0, and 'PerItemErrorCount' = 0 that occurred after you manually initiated a VSS upload session.
+
+![A screenshot of checking if a final sync session has completed.](media/storage-sync-deprovision-server-endpoint/event-viewer.png)
+
+If 'PerItemErrorCount' is greater than 0, files are failing to sync. Use the **FileSyncErrorsReport.ps1** script to see which files are failing to sync. This PowerShell script is typically located at this path on a server with an Azure File Sync agent installed: **C:\Program Files\Azure\StorageSyncAgent\FileSyncErrorsReport.ps1**
+
+If these files aren't important, then you can delete your server endpoint. If these files are important, fix their errors and wait for another 9102 event with 'sync direction' = upload, 'HResult' = 0 and 'PerItemErrorCount' = 0 to occur before deleting your server endpoint.
+
+## Scenario 2: You intend to delete your server endpoint and stop using this specific Azure file share
+
+The goal here is to ensure your data is up-to-date on your local server/VM. To have your complete set of files up-to-date in your cloud endpoint instead, see [Scenario 1: You intend to delete your server endpoint and stop using your local server/VM](#scenario-1-you-intend-to-delete-your-server-endpoint-and-stop-using-your-local-servervm).
+
+For this scenario, there are four steps to take before deleting your server endpoint: disable cloud tiering, recall tiered files, initiate cloud change detection, and wait for a final sync session to complete.
+
+### Disable cloud tiering
+Navigate to the cloud tiering section in **Server Endpoint Properties** for the server endpoint you would like to deprovision and disable cloud tiering.
+
+### Recall all tiered files
+Even after cloud tiering is disabled, you need to recall all tiered files to be sure that every file is stored locally.
+
+Before you recall any files, make sure that you have enough free space locally to store all your files. Your free space needs to be approximately the size of your Azure file share in the cloud minus the cached size on your server.
+
+Use the **Invoke-StorageSyncFileRecall** PowerShell cmdlet and specify the **SyncGroupName** parameter to recall all files.
+```powershell
+Invoke-StorageSyncFileRecall -SyncGroupName "samplesyncgroupname"
+```
+Once this cmdlet has finished running, you can move onto the next section.
+
+### Initiate cloud change detection
+Initiating change detection in the cloud ensures that your latest changes have been synced.
+
+You can initiate change detection with the Invoke-AzStorageSyncChangeDetection cmdlet:
+
+```powershell
+Invoke-AzStorageSyncChangeDetection -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncServiceName" -SyncGroupName "mySyncGroupName" -Path "Data","Reporting\Templates"
+```
+
+This step may take a while to complete.
+
+> [!Important]
+> Once the cloud change detection scan has completed, note the date and time it completed. You will need it in the following section.
+
+### Wait for a final sync session to complete
+To ensure that your data is up-to-date on your local server, you need to wait for a final sync upload session to complete.
+
+To check this, go to **Event Viewer** on your local server. Navigate to the telemetry event log **(Applications and Services\Microsoft\FileSync\Agent)**. Ensure that you see a 9102 event with 'sync direction' = download, 'HResult' = 0, and 'PerItemErrorCount' = 0 that occurred after the date/time cloud change detection finished.
+
+![A screenshot of checking if a final sync session has completed.](media/storage-sync-deprovision-server-endpoint/event-viewer.png)
+
+If 'PerItemErrorCount' is greater than 0, files are failing to sync. Use the **FileSyncErrorsReport.ps1** script to see which files are failing to sync. This PowerShell script is typically located at this path on a server with an Azure File Sync agent installed: **C:\Program Files\Azure\StorageSyncAgent\FileSyncErrorsReport.ps1**
+
+If these files aren't important, then you can delete your server endpoint. If these files are important, fix their errors and wait for another 9102 event with 'sync direction' = download, 'HResult' = 0 and 'PerItemErrorCount' = 0 to occur before deleting your server endpoint.
+
+## Next steps
+* [Modify Azure File Sync topology](./file-sync-modify-sync-topology.md)
+++++++
storage File Sync Modify Sync Topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-modify-sync-topology.md
+
+ Title: Modify your Azure File Sync topology | Microsoft Docs
+description: Guidance on how to modify your Azure File Sync sync topology
+++ Last updated : 4/23/2021++++
+# Modify your Azure File Sync topology
+
+This article covers the most common ways customers modify their Azure File Sync topology, along with our recommendations for how to do so. Before you modify your Azure File Sync topology, follow the best practices below to avoid errors and potential data loss.
+
+## Migrate a server endpoint to a different Azure File Sync Storage Sync Service
+
+Once you ensure that your data is up-to-date on your local server, deprovision your server endpoint. For guidance on how to do this, see [Deprovision your Azure File Sync server endpoint](./file-sync-deprovision-server-endpoint.md#scenario-2-you-intend-to-delete-your-server-endpoint-and-stop-using-this-specific-azure-file-share). Then reprovision in the desired sync group and Storage Sync Service.
+
+If you would like to migrate all server endpoints associated with a server to a different sync group or Storage Sync Service, see [Deprovision all server endpoints associated with a registered server](#deprovision-all-server-endpoints-associated-with-a-registered-server).
+
+## Change the granularity of a server endpoint
+
+After you confirm your data is up-to-date on your local server (see [Deprovision your Azure File Sync server endpoint](./file-sync-deprovision-server-endpoint.md#scenario-2-you-intend-to-delete-your-server-endpoint-and-stop-using-this-specific-azure-file-share)), deprovision your server endpoint. Then reprovision at the desired granularity.
+
+## Deprovision Azure File Sync topology
+
+Azure File Sync resources must be deprovisioned in a specific order: server endpoints, sync group, and then Storage Sync Service. While the entire flow is documented below, you may stop at any level you desire.
+
+First, navigate to the Storage Sync Service resource in the Azure portal and select a sync group in the Storage Sync Service. Follow the steps in [Deprovision your Azure File Sync server endpoint](./file-sync-deprovision-server-endpoint.md) to ensure data integrity and availability when deleting server endpoints. In order to deprovision your sync group or Storage Sync Service, all server endpoints must be deleted. If you only aim to delete specific server endpoints, you can stop here.
+
+Once you delete all the server endpoints in the sync group, delete the cloud endpoint.
+
+Then, delete the sync group.
+
+Repeat these steps for all the sync groups in the Storage Sync Service you would like to delete. Once all the sync groups in that Storage Sync Service have been deleted, delete the Storage Sync Service resource.
+
+## Rename a server endpoint path or sync group
+
+Currently, this is not supported.
+
+If you are currently using the D drive and are planning on migrating to the cloud, see [Make the D: drive of a VM a data disk - Azure Virtual Machines](/azure/virtual-machines/windows/change-drive-letter).
+
+## Deprovision all server endpoints associated with a registered server
+
+To ensure that your data is safe and fully updated before deprovisioning, see [Deprovision your Azure File Sync server endpoint](./file-sync-deprovision-server-endpoint.md).
+
+Navigate to your Storage Sync Service resource, and go to the Registered Servers tab. Select the server you would like to unregister and select **Unregister server**. This promptly deprovisions all server endpoints associated with that server.
+
+## Next steps
+* [Deprovision your Azure File Sync server endpoint](./file-sync-deprovision-server-endpoint.md)
+++
stream-analytics Cluster Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/cluster-overview.md
Your Stream Analytics clusters are charged based on the chosen SU capacity. Clus
### Which inputs and outputs can I privately connect to from my Stream Analytics cluster?
-Stream Analytics supports various input and output types. You can [create private endpoints](private-endpoints.md) in your cluster that allow jobs to access the input and output resources. Currently Azure SQL Database, Azure Storage, Azure Data Lake Storage Gen2, Azure Event Hub, Azure IoT Hubs, Azure Function and Azure Service Bus are supported services for which you can create managed private endpoints.
+Stream Analytics supports various input and output types. You can [create private endpoints](private-endpoints.md) in your cluster that allow jobs to access the input and output resources. Currently, Azure SQL Database, Azure Synapse Analytics, Azure Storage, Azure Data Lake Storage Gen2, Azure Event Hubs, Azure IoT Hub, Azure Functions, and Azure Service Bus are the supported services for which you can create managed private endpoints.
## Next steps
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Microsoft Spark Utilities (MSSparkUtils) is a builtin package to help you easily
### Configure access to Azure Data Lake Storage Gen2
-Synapse notebooks use Azure active directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
+Synapse notebooks use Azure Active Directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
-Synapse pipelines use workspace identity (MSI) to access the storage accounts. To use MSSparkUtils in your pipeline activities, your workspace identity needs to be **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
+Synapse pipelines use the workspace's managed service identity (MSI) to access the storage accounts. To use MSSparkUtils in your pipeline activities, your workspace identity needs to be **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
Follow these steps to make sure your Azure AD and workspace MSI have access to the ADLS Gen2 account: 1. Open the [Azure portal](https://portal.azure.com/) and the storage account you want to access. You can navigate to the specific container you want to access.
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or 20H2 **11C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2011C.iso) - [Windows 10, version 2004 or 20H2 **1C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso) - [Windows 10, version 2004 or 20H2 **2C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso)
- - [Windows 10, version 2004 or 20H2 **3C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2103C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Here's what changed in August 2020:
- Azure Advisor is now a part of Windows Virtual Desktop. When you access Windows Virtual Desktop through the Azure portal, you can see recommendations for optimizing your Windows Virtual Desktop environment. Learn more at [Azure Advisor](azure-advisor.md). -- Azure CLI now supports Windows Virtual Desktop (`az desktopvirtualization`) to help you automate your Windows Virtual Desktop deployments. Check out [desktopvirtualization](/cli/azure/) for a list of extension commands.
+- Azure CLI now supports Windows Virtual Desktop (`az desktopvirtualization`) to help you automate your Windows Virtual Desktop deployments. Check out [desktopvirtualization](/cli/azure/desktopvirtualization) for a list of extension commands.
- We've updated our deployment templates to make them fully compatible with the Windows Virtual Desktop Azure Resource Manager interfaces. You can find the templates on [GitHub](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates).