Updates from: 10/17/2022 01:04:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
There are several moving parts across AWS and Azure, which are required to be configured.
1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS OIDC Account Setup** page, select **Next**.
-### 3. Set up an AWS master account (Optional)
+### 3. Set up the AWS master account connection (Optional)
1. If your organization has Service Control Policies (SCPs) that govern some or all of the member accounts, set up the master account connection in the **Permissions Management Onboarding - AWS Master Account Details** page. Setting up the master account connection allows Permissions Management to auto-detect and onboard any AWS member accounts that have the correct Permissions Management role.
- - In the **Permissions Management Onboarding - AWS Master Account Details** page, enter the **Master Account ID** and **Master Account Role**.
+1. In the **Permissions Management Onboarding - AWS Master Account Details** page, enter the **Master Account ID** and **Master Account Role**.
1. Open another browser window and sign in to the AWS console for your master account.
There are several moving parts across AWS and Azure, which are required to be configured.
1. Return to Permissions Management, and in **Permissions Management Onboarding - AWS Master Account Details**, select **Next**.
-### 4. Set up an AWS Central logging account (Optional but recommended)
+### 4. Set up the AWS Central logging account connection (Optional but recommended)
1. If your organization has a central logging account where logs from some or all of your AWS accounts are stored, in the **Permissions Management Onboarding - AWS Central Logging Account Details** page, set up the logging account connection.
active-directory Slack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-provisioning-tutorial.md
Title: 'Tutorial: User provisioning for Slack - Azure AD'
description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Slack.
+documentationcenter: ''
+
+writer: Thwimmer
+
+ms.assetid: 7fa2a1b1-7ed3-4c51-ae17-f5d4ee88488c
+ms.devlang: na
Last updated 05/06/2020

# Tutorial: Configure Slack for automatic user provisioning
active-directory Smartsheet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
Title: 'Tutorial: Configure Smartsheet for automatic user provisioning with Azure Active Directory | Microsoft Docs'
description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Smartsheet.
+documentationcenter: ''
-writer: twimmers
-
+writer: Thwimmer
+
+ms.assetid: 9d391bd3-b0d3-4c7d-af8a-70bc0a538706
+ms.devlang: na
Last updated 06/07/2019

# Tutorial: Configure Smartsheet for automatic user provisioning
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
* [A Smartsheet tenant](https://www.smartsheet.com/pricing).
* A user account on a Smartsheet Enterprise or Enterprise Premier plan with System Administrator permissions.
+* **System Admins** and **IT Administrators** can set up Active Directory with Smartsheet.
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Smartsheet](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Smartsheet](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Smartsheet to support provisioning with Azure AD

Before configuring Smartsheet for automatic user provisioning with Azure AD, you will need to enable SCIM provisioning on Smartsheet.
-1. Sign in as a **SysAdmin** in the **[Smartsheet portal](https://app.smartsheet.com/b/home)** and navigate to **Account Admin**.
+1. Sign in as a **System Admin** in the **[Smartsheet portal](https://app.smartsheet.com/b/home)** and navigate to **Account > Admin Center**.
- ![Smartsheet Account Admin](media/smartsheet-provisioning-tutorial/smartsheet-accountadmin.png)
+ ![Screenshot of Smartsheet Account Admin](media/smartsheet-provisioning-tutorial/smartsheet-admin-center.png)
-2. Go to **Security Controls > User Auto Provisioning > Edit**.
+1. On the Admin Center page, select the **Menu** option to expand the Menu panel.
- ![Smartsheet Security Controls](media/smartsheet-provisioning-tutorial/smartsheet-securitycontrols.png)
+ ![Screenshot of Smartsheet Security Controls](media/smartsheet-provisioning-tutorial/smartsheet-menu.png)
-3. Add and validate the email domains for the users that you plan to provision from Azure AD to Smartsheet. Choose **Not Enabled** to ensure that all provisioning actions only originate from Azure AD, and to also ensure that your Smartsheet user list is in sync with Azure AD assignments.
+1. Navigate to **Menu > Settings > Domains & User Auto-Provisioning**.
- ![Smartsheet User Provisioning](media/smartsheet-provisioning-tutorial/smartsheet-userprovisioning.png)
+ ![Screenshot of Smartsheet domain](media/smartsheet-provisioning-tutorial/smartsheet-domain.png)
-4. Once validation is complete, you will have to activate the domain.
+1. To add a new domain, select **Add Domain** and follow the instructions. Once the domain is added, make sure that it's verified.
- ![Smartsheet Activate Domain](media/smartsheet-provisioning-tutorial/smartsheet-activatedomain.png)
+1. Generate the **Secret Token** required to configure automatic user provisioning with Azure AD by going to the **[Smartsheet portal](https://app.smartsheet.com/b/home)** and then navigating to **Account > Apps and Integrations**.
-5. Generate the **Secret Token** required to configure automatic user provisioning with Azure AD by navigating to **Apps and Integrations**.
-
- ![Screenshot of the Smartsheet Admin page with the user avatar and the Apps & Integrations option called out.](media/smartsheet-provisioning-tutorial/Smartsheet05.png)
-
-6. Choose **API Access**. Click **Generate new access token**.
+1. Choose **API Access**. Click **Generate new access token**.
![Screenshot of the Personal Settings dialog box with the API Access and Generate new access token options called out.](media/smartsheet-provisioning-tutorial/Smartsheet06.png)
-7. Define the name of the API Access Token. Click **OK**.
+1. Define the name of the API Access Token. Click **OK**.
![Screenshot of the Step 1 of 2: Generate API Access Token with the OK option called out.](media/smartsheet-provisioning-tutorial/Smartsheet07.png)
-8. Copy the API Access Token and save it as this will be the only time you can view it. This is required in the **Secret Token** field in Azure AD.
+1. Copy the API Access Token and save it; this is the only time you can view it. The token is required in the **Secret Token** field in Azure AD.
![Smartsheet token](media/smartsheet-provisioning-tutorial/Smartsheet08.png)
This section guides you through the steps to configure the Azure AD provisioning service.
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
-2. In the applications list, select **Smartsheet**.
+1. In the applications list, select **Smartsheet**.
- ![The Smartsheet link in the Applications list](common/all-applications.png)
+ ![Screenshot of The Smartsheet link in the Applications list.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL** of `https://scim.smartsheet.com/v2` and **Access Token** value retrieved earlier from Smartsheet in **Secret Token** respectively. Click **Test Connection** to ensure Azure AD can connect to Smartsheet. If the connection fails, ensure your Smartsheet account has SysAdmin permissions and try again.
+1. Under the **Admin Credentials** section, enter `https://scim.smartsheet.com/v2` in **SCIM 2.0 base URL**, and enter the access token retrieved earlier from Smartsheet in **Secret Token**. Click **Test Connection** to ensure that Azure AD can connect to Smartsheet. If the connection fails, ensure that your Smartsheet account has System Admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
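As a rough illustration of what the **Test Connection** step exercises, the sketch below builds a SCIM 2.0 GET request against the Smartsheet base URL from this step. This is not the provisioning service's actual code: the `/ServiceProviderConfig` resource comes from the SCIM 2.0 standard (RFC 7644), whether Smartsheet exposes it is an assumption here, and the token value is a placeholder.

```python
# Illustrative sketch only: build (but don't send) the kind of SCIM 2.0
# request a connectivity test might issue against Smartsheet.
SCIM_BASE_URL = "https://scim.smartsheet.com/v2"  # from this tutorial step

def build_scim_request(secret_token: str, resource: str = "ServiceProviderConfig"):
    """Return the URL and headers for a SCIM 2.0 GET request.

    SCIM 2.0 clients authenticate with an OAuth bearer token; here that
    would be the Smartsheet access token entered in the Secret Token field.
    """
    url = f"{SCIM_BASE_URL}/{resource}"
    headers = {
        "Authorization": f"Bearer {secret_token}",  # placeholder token
        "Accept": "application/scim+json",
    }
    return url, headers

url, headers = build_scim_request("example-token")
print(url)
```

Sending this request with an HTTP client and checking for a 200 response would approximate what **Test Connection** verifies.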
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** checkbox.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
-7. Click **Save**.
+1. Click **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Smartsheet**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Smartsheet**.
-9. Review the user attributes that are synchronized from Azure AD to Smartsheet in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smartsheet for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Smartsheet in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smartsheet for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning service.
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|
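To illustrate the role of the **Matching** properties described above, here's a minimal, hypothetical sketch (not the actual provisioning service) of how a matching attribute such as `userName` decides whether an account is updated or created:

```python
def partition_by_match(azure_users, smartsheet_users, matching_attr="userName"):
    """Split Azure AD users into those that match an existing Smartsheet
    account (candidates for update) and those with no match (candidates
    for create). Hypothetical helper for illustration only; the real
    matching is performed by the Azure AD provisioning service.
    """
    existing = {u[matching_attr] for u in smartsheet_users}
    to_update = [u for u in azure_users if u[matching_attr] in existing]
    to_create = [u for u in azure_users if u[matching_attr] not in existing]
    return to_update, to_create

updates, creates = partition_by_match(
    [{"userName": "alice@contoso.com"}, {"userName": "bob@contoso.com"}],
    [{"userName": "alice@contoso.com"}],
)
print(len(updates), len(creates))  # → 1 1
```

This is why the matching attribute must uniquely identify a user on both sides; a non-unique matching property would pair the wrong accounts.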
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-11. To enable the Azure AD provisioning service for Smartsheet, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Smartsheet, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of provisioning status toggled on.](common/provisioning-toggle-on.png)
-12. Define the users and/or groups that you would like to provision to Smartsheet by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Smartsheet by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of provisioning scope.](common/provisioning-scope.png)
-13. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of saving provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.

## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector limitations
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
Title: 'Quickstart: Direct web traffic using the portal'
description: In this quickstart, you learn how to use the Azure portal to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool.
-Previously updated : 06/10/2022
+Last updated : 10/13/2022
In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You'll assign listeners to ports, create rules, and add resources to a backend pool. For simplicity, this quickstart uses a basic setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
+![Quickstart setup](./media/quick-create-portal/application-gateway-qs-resources.png)
-For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
+For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or [Azure CLI](quick-create-cli.md).
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
## Create an application gateway
-You'll create the application gateway using the tabs on the **Create an application gateway** page.
+You'll create the application gateway using the tabs on the **Create application gateway** page.
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
-2. Select **Networking** and then select **Application Gateway** in the **Featured** list.
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+2. Under **Categories**, select **Networking** and then select **Application Gateway** in the **Popular Azure services** list.
### Basics tab
You'll create the application gateway using the tabs on the **Create an application gateway** page.
![Create new application gateway: Basics](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png)
-2. For Azure to communicate between the resources that you create, it needs a virtual network. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers.
+2. For Azure to communicate between the resources that you create, a virtual network is needed. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers.
> [!NOTE] > [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
You'll create the application gateway using the tabs on the **Create an application gateway** page.
- **Name**: Enter *myVNet* for the name of the virtual network.
- - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *Default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
+ - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
- **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
You'll create the application gateway using the tabs on the **Create an application gateway** page.
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
On the **Configuration** tab, you'll connect the frontend and backend pool you created using a routing rule.
1. Select **Add a routing rule** in the **Routing rules** column.
-2. In the **Add a routing rule** window that opens, enter *myRoutingRule* for the **Rule name**.
+2. In the **Add a routing rule** window that opens, enter the following values for **Rule name** and **Priority**:
+
+ - **Rule name**: Enter *myRoutingRule* for the name of the rule.
+   - **Priority**: The priority value should be between 1 and 20000, where 1 represents the highest priority and 20000 the lowest. For the purposes of this quickstart, enter *100* for the priority.
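The priority constraint above can be sketched as a simple check. This is illustrative only; the portal performs its own validation, and the function name here is hypothetical:

```python
def validate_rule_priority(priority: int) -> int:
    """Check an Application Gateway routing-rule priority.

    Valid values are 1-20000; 1 is the highest priority and 20000 the
    lowest, per the quickstart step above.
    """
    if not 1 <= priority <= 20000:
        raise ValueError(f"Priority {priority} must be between 1 and 20000")
    return priority

print(validate_rule_priority(100))  # → 100
```

Lower numbers win when multiple rules could match, which is why the quickstart's single rule can safely use a mid-range value like 100.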
3. A routing rule requires a listener. On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
On the **Configuration** tab, you'll connect the frontend and backend pool you created using a routing rule.
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**.
-5. For the **HTTP setting**, select **Add new** to add a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **Backend setting**, select **Add new** to add a new Backend setting. The Backend setting will determine the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *myBackendSetting* for the **Backend settings name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add Backend setting** window, then select **Add** to return to the **Add a routing rule** window.
- ![Create new application gateway: HTTP setting](./media/application-gateway-create-gateway-portal/application-gateway-create-httpsetting.png)
+ ![Create new application gateway: HTTP setting](./media/application-gateway-create-gateway-portal/application-gateway-create-backendsetting.png)
6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab.
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
In the event that a zone is down, there's no action required by you to recover from the zone failure.
## Supported regions with availability zones
-See [Regions and Availability Zones in Azure](/global-infrastructure/geographies/#geographies) for the Azure regions that have availability zones.
+See [Regions and Availability Zones in Azure](../availability-zones/az-overview.md) for the Azure regions that have availability zones.
Automation accounts currently support the following regions in preview:

- China North 3
There is no change to the [Service Level Agreement](https://azure.microsoft.com/
## Next steps

-- Learn more about [regions that support availability zones](/azure/availability-zones/az-region.md).
+- Learn more about [regions that support availability zones](../availability-zones/az-overview.md).
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
description: Define network settings and enable network isolation for Azure Monitor Agent.
Previously updated : 9/16/2022
Last updated : 10/14/2022
For your data collection endpoints, ensure the **Accept access from public netwo
:::image type="content" source="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" lightbox="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" alt-text="Screenshot that shows configuring data collection endpoint network isolation.":::
- Associate the data collection endpoints to the target resources by editing the data collection rule in the Azure portal. On the **Resources** tab, select **Enable Data Collection Endpoints**. Select a DCE for each virtual machine. See [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md).
+### Associate DCEs to target machines
+Associate the data collection endpoints to the target resources by editing the data collection rule in the Azure portal. On the **Resources** tab, select **Enable Data Collection Endpoints**. Select a DCE for each virtual machine. See [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md).
:::image type="content" source="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows configuring data collection endpoints for an agent.":::
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na
Previously updated : 10/10/2022
Last updated : 10/14/2022

# Guidelines for Azure NetApp Files network planning
The following table describes the network topologies supported by each network f
| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway and VNet peering with gateway transit | Yes | Yes |
| Connectivity over Active/Passive VPN gateways | Yes | Yes |
| Connectivity over Active/Active VPN gateways | Yes | No |
-| Connectivity over Active/Active Zone Redundant gateways | Yes | Yes |
+| Connectivity over Active/Active Zone Redundant gateways | No | No |
+| Connectivity over Active/Passive Zone Redundant gateways | Yes | Yes |
| Connectivity over Virtual WAN (VWAN) | No | No |

\* This option will incur a charge on ingress and egress traffic that uses a virtual network peering connection. For more information, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). For more general information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
batch Low Priority Vms Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md
Title: Migrate low-priority VMs to spot VMs in Batch
-description: Learn how to migrate Azure Batch low-priority VMs to Azure Spot Virtual Machines and plan for feature end of support.
+description: Learn how to migrate Azure Batch low-priority VMs to Spot VMs and plan for feature end of support.
Previously updated : 08/10/2022
Last updated : 10/14/2022
-# Migrate Batch low-priority VMs to Azure Spot Virtual Machines
+# Migrate Batch low-priority VMs to Spot VMs
-The Azure Batch feature low-priority virtual machines (VMs) is being retired on *September 30, 2025*. Learn how to migrate your Batch low-priority VMs to Azure Spot Virtual Machines.
+The ability to allocate low-priority compute nodes in Azure Batch pools is being retired on *September 30, 2025*. Learn how to migrate your Batch pools with low-priority compute nodes to compute nodes based on Spot instances.
## About the feature
-Currently, in Azure Batch, you can use a low-priority VM or a spot VM. Both types of VMs are Azure computing instances that are allocated from spare capacity and offered at a highly discounted rate compared to dedicated, on-demand VMs.
+Currently, as part of a Batch pool configuration, you can specify a target number of low-priority compute nodes in Batch accounts that use Batch managed pool allocation. In Batch accounts that use user subscription pool allocation, you can specify a target number of spot compute nodes. In both cases, these compute resources are allocated from spare capacity and offered at a discount compared to dedicated, on-demand VMs.
-You can use low-priority VMs to take advantage of unused capacity in Azure. The amount of unused capacity that's available varies depending on factors like VM size, the region, and the time of day. At any time, when Microsoft needs the capacity back, we evict low-priority VMs. Therefore, the low-priority feature is excellent for flexible workloads like large processing jobs, dev and test environments, demos, and proofs of concept. It's easy to deploy low-priority VMs by using a virtual machine scale set.
+The amount of unused capacity that's available varies depending on factors such as VM family, VM size, region, and time of day. Unlike dedicated capacity, these low-priority or spot VMs can be reclaimed at any time by Azure. Therefore, low-priority and spot VMs are typically viable for Batch workloads that are amenable to interruption or don't require strict completion timeframes to potentially lower costs.
## Feature end of support
-Low-priority VMs are a deprecated preview feature and won't be generally available. Spot VMs offered through the Azure Spot Virtual Machines service are the official, preemptible offering from the Azure compute platform. Spot Virtual Machines is generally available. On September 30, 2025, we'll retire the low-priority VMs feature. After that date, existing low-priority pools in Batch might no longer work and you can't provision new low-priority VMs.
+Only low-priority compute nodes in Batch are being retired. Spot compute nodes continue to be supported, are generally available, and aren't affected by this deprecation. On September 30, 2025, we'll retire low-priority compute nodes. After that date, existing low-priority pools in Batch may no longer be usable, attempts to scale back to target low-priority node counts will fail, and you'll no longer be able to provision new pools with low-priority compute nodes.
-## Alternative: Use Azure Spot Virtual Machines
+## Alternative: Use Azure Spot-based compute nodes in Batch pools
-As of May 2020, Azure offers spot VMs in Batch in addition to low-priority VMs. Like low-priority VMs, you can use the spot VM option to purchase spare capacity at a deeply discounted price in exchange for the possibility that the VM will be evicted. Unlike low-priority VMs, you can use the spot VM option for single VMs and scale sets. Virtual machine scale sets scale up to meet demand. When used with a spot VM, a virtual machine scale set allocates only when capacity is available.
+In December 2021, Azure Batch began offering Spot-based compute nodes. Like low-priority VMs, you can use spot instances to obtain spare capacity at a discounted price in exchange for the possibility that the VM will be preempted. If a preemption occurs, the spot compute node is evicted and any work that wasn't appropriately checkpointed is lost. Checkpointing is optional and is up to the Batch end user to implement. A running Batch task that's interrupted due to preemption is automatically requeued for execution by a different compute node. Additionally, Azure Batch automatically attempts to scale back to the target Spot node count specified on the pool.
-A spot VM in Batch can be evicted when Azure needs the capacity or when the cost goes above your set maximum price. You also can choose to receive a 30-second eviction notice and attempt to redeploy.
+See the [detailed breakdown](batch-spot-vms.md) of the differences between the low-priority and spot offerings in Batch.
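The preemption-and-requeue behavior described above can be sketched as a toy simulation. Assumptions: a simple FIFO queue and a fixed set of preempted attempts; the real Batch scheduler, node allocation, and retry policy are considerably more involved.

```python
from collections import deque

def run_tasks(tasks, preempted_attempts):
    """Toy simulation of Batch requeueing tasks interrupted by spot
    preemption: a preempted (task, attempt) pair goes back on the queue
    and is retried later, so every task eventually completes.
    """
    queue = deque((task, 1) for task in tasks)
    completed = []
    while queue:
        task, attempt = queue.popleft()
        if (task, attempt) in preempted_attempts:
            queue.append((task, attempt + 1))  # node evicted; requeue task
        else:
            completed.append(task)
    return completed

# taskA is preempted on its first attempt, so taskB finishes first.
print(run_tasks(["taskA", "taskB"], {("taskA", 1)}))  # → ['taskB', 'taskA']
```

The point of the sketch is that preemption reorders and delays completion rather than losing tasks, which is why interruption-tolerant workloads are the natural fit for spot nodes.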
-Spot VM pricing is variable and based on the capacity of a VM size or SKU in an Azure region. Prices change slowly to provide stabilization. The price will never go above pay-as-you-go rates.
+## Migrate a Batch pool with low-priority compute nodes or create a Batch pool with Spot instances
-For VM eviction policy, you can choose from two options:
--- **Stop/Deallocate** (default): When a VM is evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This option is ideal when you store state on disks.--- **Delete**: When a VM is evicted, the VM and underlying disks are deleted.-
-Although the two purchasing options are similar, be aware of a few key differences:
-
-| Factor | Low-priority VMs | Spot VMs |
-||||
-| Availability | Azure Batch | Single VMs, virtual machine scale sets |
-| Pricing | Fixed pricing | Variable pricing with ability to set maximum price |
-| Eviction or preemption | Preempted when Azure needs the capacity. Tasks on preempted node VMs are requeued and run again. | Evicted when Azure needs the capacity or if the price exceeds your maximum. If evicted for price and afterward the price goes below your maximum, the VM isn't automatically restarted. |
-
-## Migrate a low-priority VM pool or create a spot VM pool
-
-To include spot VMs when you scale in user subscription mode:
+1. Ensure that you're using a [user subscription pool allocation mode Batch account](batch-account-create-portal.md).
1. In the Azure portal, select the Batch account and view an existing pool or create a new pool.
To include spot VMs when you scale in user subscription mode:
1. Select **Save**.
-You can't use spot VMs in Batch managed mode. Instead, switch to user subscription mode and re-create the Batch account, pool, and jobs.
-## FAQs
-
-- How do I create a new Batch account, job, or pool?
+- How do I create a user subscription pool allocation Batch account?
+
+ See the [quickstart](./batch-account-create-portal.md) to create a new Batch account in user subscription pool allocation mode.
- See the [quickstart](./batch-account-create-portal.md) to create a new Batch account, job, or pool.
+- Are Spot VMs available in Batch managed pool allocation accounts?
-- Are spot VMs available in Batch managed mode?
+ No. Spot VMs are available only in user subscription pool allocation Batch accounts.
+
+- Are spot instances available for `CloudServiceConfiguration` Pools?
- No. In Batch accounts, spot VMs are available only in user subscription mode.
+ No. Spot instances are only available for `VirtualMachineConfiguration` pools. `CloudServiceConfiguration` pools will be [retired](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/) before low-priority pools. We recommend that you migrate to `VirtualMachineConfiguration` pools and user subscription pool allocation Batch accounts before then.
-- What is the pricing and eviction policy of spot VMs? Can I view pricing history and eviction rates?
+- What is the pricing and eviction policy of spot instances? Can I view pricing history and eviction rates?
Yes. In the Azure portal, you can see historical pricing and eviction rates per size in a region. For more information about using spot VMs, see [Spot Virtual Machines](../virtual-machines/spot-vms.md).
-## Next steps
+- Can I transfer my quotas between Batch accounts?
-Use the [CLI](../virtual-machines/linux/spot-cli.md), [Azure portal](../virtual-machines/spot-portal.md), [ARM template](../virtual-machines/linux/spot-template.md), or [PowerShell](../virtual-machines/windows/spot-powershell.md) to deploy Azure Spot Virtual Machines.
+ Currently, you can't transfer quotas between Batch accounts.
+
+## Next steps
-You can also deploy a [scale set that has Azure Spot Virtual Machines instances](../virtual-machine-scale-sets/use-spot.md).
+See the [Batch Spot compute instance guide](batch-spot-vms.md) for further details on the differences between offerings, limitations, and deployment examples.
container-registry Container Registry Enable Conditional Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-enable-conditional-access-policy.md
The Conditional Access policy applies after the first-factor authentication to t
The following steps will help create a Conditional Access policy for Azure Container Registry (ACR).
-1. Disable authentication-as-arm in ACR - Azure CLI.
-2. Disable authentication-as-arm in the ACR - Azure portal.
-3. Create and configure Conditional Access policy for Azure Container Registry.
+ 1. Disable authentication-as-arm in ACR - Azure CLI.
+ 2. Disable authentication-as-arm in the ACR - Azure portal.
+ 3. Create and configure Conditional Access policy for Azure Container Registry.
## Prerequisites
Disabling `azureADAuthenticationAsArmPolicy` will force the registry to use ACR
1. Run the command to show the current configuration of the registry's policy for authentication using ARM tokens with the registry. If the status is `enabled`, then both ACRs and ARM audience tokens can be used for authentication. If the status is `disabled` it means only ACR's audience tokens can be used for authentication.
- ```azurecli-interactive
- az acr config authentication-as-arm show -r <registry>
- ```
+ ```azurecli-interactive
+ az acr config authentication-as-arm show -r <registry>
+ ```
1. Run the command to update the status of the registry's policy.
- ```azurecli-interactive
- az acr config authentication-as-arm update -r <registry> --status [enabled/disabled]
- ```
+ ```azurecli-interactive
+ az acr config authentication-as-arm update -r <registry> --status [enabled/disabled]
+ ```
## Disable authentication-as-arm in the ACR - Azure portal
Disabling `authentication-as-arm` property by assigning a built-in policy will a
You can disable authentication-as-arm in ACR by following the steps below:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Refer to the ACR's built-in policy definitions in the [azure-container-registry-built-in-policy definition's](policy-reference.md).
-3. Assign a built-in policy to disable authentication-as-arm definition - Azure portal.
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+ 2. Refer to the ACR's built-in policy definitions in [azure-container-registry-built-in-policy definitions](policy-reference.md).
+ 3. Assign a built-in policy to disable authentication-as-arm definition - Azure portal.
### Assign a built-in policy definition to disable ARM audience token authentication - Azure portal

You can enable the registry's Conditional Access policy in the [Azure portal](https://portal.azure.com).
-1. Sign in to the [Azure portal](https://portal.azure.com).
+Azure Container Registry has two built-in policy definitions to disable authentication-as-arm:
+
>* `Container registries should have ARM audience token authentication disabled.` - This policy reports and blocks any non-compliant resources, and also sends a request to update them from non-compliant to compliant.
>* `Configure container registries to disable ARM audience token authentication.` - This policy offers remediation and updates non-compliant resources to compliant.
+
-1. Navigate to your **Azure Container Registry** > **Resource Group** > **Settings** > **Policies** .
-
- :::image type="content" source="media/container-registry-enable-conditional-policy/01-azure-policies.png" alt-text="Screenshot showing how to navigate Azure policies.":::
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Azure Policy**, On the **Assignments**, select **Assign policy**.
+ 1. Navigate to your **Azure Container Registry** > **Resource Group** > **Settings** > **Policies**.
- :::image type="content" source="media/container-registry-enable-conditional-policy/02-Assign-policy.png" alt-text="Screenshot showing how to assign a policy.":::
+ :::image type="content" source="media/container-registry-enable-conditional-policy/01-azure-policies.png" alt-text="Screenshot showing how to navigate Azure policies.":::
-1. Under the **Assign policy** , use filters to search and find the **Scope**, **Policy definition**, **Assignment name**.
+ 1. Navigate to **Azure Policy**. On **Assignments**, select **Assign policy**.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/02-Assign-policy.png" alt-text="Screenshot showing how to assign a policy.":::
- :::image type="content" source="media/container-registry-enable-conditional-policy/03-Assign-policy-tab.png" alt-text="Screenshot of the assign policy tab.":::
+ 1. Under **Assign policy**, use filters to search for and select the **Scope**, **Policy definition**, and **Assignment name**.
-1. Select **Scope** to filter and search for the **Subscription** and **ResourceGroup** and choose **Select**.
-
- :::image type="content" source="media/container-registry-enable-conditional-policy/04-select-scope.png" alt-text="Screenshot of the Scope tab.":::
+ :::image type="content" source="media/container-registry-enable-conditional-policy/03-Assign-policy-tab.png" alt-text="Screenshot of the assign policy tab.":::
-1. Select **Policy definition** to filter and search the built-in policy definitions for the Conditional Access policy.
+ 1. Select **Scope** to filter and search for the **Subscription** and **ResourceGroup** and choose **Select**.
- :::image type="content" source="media/container-registry-enable-conditional-policy/05-built-in-policy-definitions.png" alt-text="Screenshot of built-in-policy-definitions.":::
-
-Azure Container Registry has two built-in policy definitions to disable authentication-as-arm, as below:
+ :::image type="content" source="media/container-registry-enable-conditional-policy/04-select-scope.png" alt-text="Screenshot of the Scope tab.":::
->* `Container registries should have ARM audience token authentication disabled.` - This policy will report, block any non-compliant resources, and also sends a request to update non-compliant to compliant.
->* `Configure container registries to disable ARM audience token authentication.` - This policy offers remediation and updates non-compliant to compliant resources.
+ 1. Select **Policy definition** to filter and search the built-in policy definitions for the Conditional Access policy.
+
+ :::image type="content" source="media/container-registry-enable-conditional-policy/05-built-in-policy-definitions.png" alt-text="Screenshot of built-in-policy-definitions.":::
-1. Use filters to select and confirm **Scope**, **Policy definition**, and **Assignment name**.
+ 1. Use filters to select and confirm **Scope**, **Policy definition**, and **Assignment name**.
-1. Use the filters to limit compliance states or to search for policies.
+ 1. Use the filters to limit compliance states or to search for policies.
-1. Confirm your settings and set policy enforcement as **enabled**.
+ 1. Confirm your settings and set policy enforcement as **enabled**.
-1. Select **Review+Create**.
+ 1. Select **Review+Create**.
- :::image type="content" source="media/container-registry-enable-conditional-policy/06-enable-policy.png" alt-text="Screenshot showing how to activate a Conditional Access policy.":::
+ :::image type="content" source="media/container-registry-enable-conditional-policy/06-enable-policy.png" alt-text="Screenshot showing how to activate a Conditional Access policy.":::
## Create and configure a Conditional Access policy - Azure portal
ACR supports Conditional Access policy for Active Directory users only. It curre
Create a Conditional Access policy and assign your test group of users as follows:
-1. Sign in to the [Azure portal](https://portal.azure.com) by using an account with *global administrator* permissions.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) by using an account with *global administrator* permissions.
-1. Search for and select **Azure Active Directory**. Then select **Security** from the menu on the left-hand side.
+ 1. Search for and select **Azure Active Directory**. Then select **Security** from the menu on the left-hand side.
-1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**.
-
- :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select 'New policy' and then select 'Create new policy'." source="media/container-registry-enable-conditional-policy/01-create-conditional-access.png":::
+ 1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select 'New policy' and then select 'Create new policy'." source="media/container-registry-enable-conditional-policy/01-create-conditional-access.png":::
-1. Enter a name for the policy, such as *demo*.
+ 1. Enter a name for the policy, such as *demo*.
-1. Under **Assignments**, select the current value under **Users or workload identities**.
-
- :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select the current value under 'Users or workload identities'." source="media/container-registry-enable-conditional-policy/02-conditional-access-users-and-groups.png":::
+ 1. Under **Assignments**, select the current value under **Users or workload identities**.
+
+ :::image type="content" alt-text="A screenshot of the Conditional Access page, where you select the current value under 'Users or workload identities'." source="media/container-registry-enable-conditional-policy/02-conditional-access-users-and-groups.png":::
-1. Under **What does this policy apply to?**, verify and select **Users and groups**.
+ 1. Under **What does this policy apply to?**, verify and select **Users and groups**.
-1. Under **Include**, choose **Select users and groups**, and then select **All users**.
-
- :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users." source="media/container-registry-enable-conditional-policy/03-conditional-access-users-groups-select-users.png":::
+ 1. Under **Include**, choose **Select users and groups**, and then select **All users**.
+
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify users." source="media/container-registry-enable-conditional-policy/03-conditional-access-users-groups-select-users.png":::
-1. Under **Exclude**, choose **Select users and groups**, to exclude any choice of selection.
+ 1. Under **Exclude**, choose **Select users and groups** to exclude any users or groups you select.
-1. Under **Cloud apps or actions**, choose **Cloud apps**.
+ 1. Under **Cloud apps or actions**, choose **Cloud apps**.
-1. Under **Include**, choose **Select apps**.
+ 1. Under **Include**, choose **Select apps**.
- :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify cloud apps." source="media/container-registry-enable-conditional-policy/04-select-cloud-apps-select-apps.png":::
+ :::image type="content" alt-text="A screenshot of the page for creating a new policy, where you select options to specify cloud apps." source="media/container-registry-enable-conditional-policy/04-select-cloud-apps-select-apps.png":::
-1. Browse for and select apps to apply Conditional Access, in this case *Azure Container Registry*, then choose **Select**.
+ 1. Browse for and select apps to apply Conditional Access, in this case *Azure Container Registry*, then choose **Select**.
- :::image type="content" alt-text="A screenshot of the list of apps, with results filtered, and 'Azure Container Registry' selected." source="media/container-registry-enable-conditional-policy/05-select-azure-container-registry-app.png":::
+ :::image type="content" alt-text="A screenshot of the list of apps, with results filtered, and 'Azure Container Registry' selected." source="media/container-registry-enable-conditional-policy/05-select-azure-container-registry-app.png":::
-1. Under **Conditions** , configure control access level with options such as *User risk level*, *Sign-in risk level*, *Sign-in risk detections (Preview)*, *Device platforms*, *Locations*, *Client apps*, *Time (Preview)*, *Filter for devices*.
+ 1. Under **Conditions**, configure the access-control level with options such as *User risk level*, *Sign-in risk level*, *Sign-in risk detections (Preview)*, *Device platforms*, *Locations*, *Client apps*, *Time (Preview)*, and *Filter for devices*.
-1. Under **Grant**, filter and choose from options to enforce grant access or block access, during a sign-in event to the Azure portal. In this case grant access with *Require multifactor authentication*, then choose **Select**.
+ 1. Under **Grant**, choose whether to grant or block access during a sign-in event to the Azure portal. In this case, grant access with *Require multifactor authentication*, then choose **Select**.
- >[!TIP]
- > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](/azure/active-directory/authentication/tutorial-enable-azure-mfa#configure-the-conditions-for-multi-factor-authentication)
+ >[!TIP]
+ > To configure and grant multi-factor authentication, see [configure and conditions for multi-factor authentication.](/azure/active-directory/authentication/tutorial-enable-azure-mfa#configure-the-conditions-for-multi-factor-authentication)
-1. Under **Session**, filter and choose from options to enable any control on session level experience of the cloud apps.
+ 1. Under **Session**, choose from the options to control the session-level experience of the cloud apps.
-1. After selecting and confirming, Under **Enable policy**, select **On**.
+ 1. After selecting and confirming, under **Enable policy**, select **On**.
-1. To apply and activate the policy, Select **Create**.
+ 1. To apply and activate the policy, select **Create**.
- :::image type="content" alt-text="A screenshot showing how to activate the Conditional Access policy." source="media/container-registry-enable-conditional-policy/06-enable-conditional-access-policy.png":::
+ :::image type="content" alt-text="A screenshot showing how to activate the Conditional Access policy." source="media/container-registry-enable-conditional-policy/06-enable-conditional-access-policy.png":::
-We have now completed creating the Conditional Access policy for the Azure Container Registry.
+ You have now created the Conditional Access policy for the Azure Container Registry.
## Next steps
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Some points to consider:
* You can achieve longer retention of your operational data in the analytical store by setting ATTL >= TTTL at the container level.
* The analytical store can be made to mirror the transactional store by setting ATTL = TTTL.
* If you have ATTL bigger than TTTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
+* Currently we don't delete any data from analytical store. If you set your ATTL to any positive integer, the expired data won't be included in your queries and you won't be billed for it. But if you change ATTL back to `-1`, all the data will show up again and you'll start to be billed for the full data volume.
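The ATTL/TTTL rules above can be condensed into a small illustrative model (a sketch for reasoning about retention, not an SDK call); `-1` means "retain forever":

```python
def in_store(age_seconds, ttl):
    """True if a record of this age is retained under the given TTL.

    ttl == -1 means retain forever; otherwise ttl is in seconds.
    """
    return ttl == -1 or age_seconds < ttl

def where_is_record(age_seconds, tttl, attl):
    """Which stores still hold a record, given transactional (TTTL) and analytical (ATTL) TTLs."""
    places = []
    if in_store(age_seconds, tttl):
        places.append("transactional")
    if in_store(age_seconds, attl):
        places.append("analytical")
    return places
```

For example, with TTTL of one hour and ATTL of `-1`, a two-hour-old record exists only in the analytical store, matching the "read only in analytical store" case above.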
How to enable analytical store on a container:
Analytical store follows a consumption-based pricing model where you're charged
Analytical store pricing is separate from the transaction store pricing model. There's no concept of provisioned RUs in the analytical store. See [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for full details on the pricing model for analytical store.
-Data in the analytics store can only be accessed through Azure Synapse Link, which is done in the Azure Synapse Analytics runtimes: Azure Synapse Apache Spark pools and
-Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/) for full details on the pricing model to access data in analytical store.
+Data in the analytical store can only be accessed through Azure Synapse Link, which is done in the Azure Synapse Analytics runtimes: Azure Synapse Apache Spark pools and Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/) for full details on the pricing model to access data in analytical store.
-In order to get a high-level cost estimate to enable analytical store on an Azure Cosmos DB container, from the analytical store perspective, you can use the [Azure Cosmos DB Capacity planner](https://cosmos.azure.com/capacitycalculator/) and get an estimate of your analytical storage and write operations costs. Analytical read operations costs depends on the analytics workload characteristics but as a high-level estimate, scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, and results in a cost of $0.065.
+In order to get a high-level cost estimate to enable analytical store on an Azure Cosmos DB container, from the analytical store perspective, you can use the [Azure Cosmos DB Capacity planner](https://cosmos.azure.com/capacitycalculator/) and get an estimate of your analytical storage and write operations costs.
-> [!NOTE]
-> Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
+Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator because they're a function of your analytical workload. As a high-level estimate, a scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, which cost $0.065. If you use Azure Synapse serverless SQL pools to perform that 1 TB scan, it costs $5.00 according to the [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/), so the total cost of the scan would be $5.065.
+While the estimate above is for scanning 1 TB of data in analytical store, applying filters reduces the volume of data scanned, which in turn determines the exact number of analytical read operations under the consumption pricing model. A proof-of-concept of the analytical workload would provide a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
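As a worked version of the estimate in the last two paragraphs (figures copied from the text; actual rates vary by region and over time):

```python
# Cost model for the 1 TB analytical-store scan described above.
read_ops_per_tb = 130_000       # analytical read operations for a full 1 TB scan (from the text)
analytical_read_cost = 0.065    # USD for those read operations (from the text)
synapse_serverless_scan = 5.00  # USD per TB scanned by Synapse serverless SQL pools (from the text)

def scan_cost(tb_scanned):
    """Total scan cost: Cosmos DB analytical read operations plus the Synapse serverless scan."""
    return tb_scanned * (analytical_read_cost + synapse_serverless_scan)

print(round(scan_cost(1), 3))  # 5.065
```

Filters that cut the scanned volume reduce both terms proportionally in this simple model; a real workload's read-operation count depends on the data actually touched.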
## Next steps
cosmos-db Analytical Store Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-private-endpoints.md
In this article, you will learn how to set up managed private endpoints for Azur
> [!NOTE]
> If you are using Private DNS Zones for Azure Cosmos DB and wish to create a Synapse managed private endpoint to the analytical store sub-resource, you must first create a DNS zone for the analytical store (`privatelink.analytics.cosmos.azure.com`) linked to your Azure Cosmos DB's virtual network.
-> [!NOTE]
-> Synapse Link for API for Gremlin is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md).
-
## Enable a private endpoint for the analytical store

### Set up Azure Synapse Analytics workspace with a managed virtual network and data-exfiltration
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 09/28/2022
Last updated : 10/14/2022

# Supported database versions in Azure Cosmos DB for PostgreSQL
versions](https://www.postgresql.org/docs/release/):
### PostgreSQL version 14
-The current minor release is 14.4. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/14/release-14-1.html) to
+The current minor release is 14.5. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/14.5/) to
learn more about improvements and fixes in this minor release.

### PostgreSQL version 13
-The current minor release is 13.7. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/13/release-13-5.html) to
+The current minor release is 13.8. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/13.8/) to
learn more about improvements and fixes in this minor release.

### PostgreSQL version 12
-The current minor release is 12.11. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/12/release-12-9.html) to
+The current minor release is 12.12. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/12.12/) to
learn more about improvements and fixes in this minor release.

### PostgreSQL version 11
-The current minor release is 11.16. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/11/release-11-14.html) to
+The current minor release is 11.17. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/11.17/) to
learn more about improvements and fixes in this minor release.

### PostgreSQL version 10 and older

We don't support PostgreSQL version 10 and older for Azure Cosmos DB for PostgreSQL.
-## PostgreSQL Version syntax
+## PostgreSQL version syntax
Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major
the Azure preferred PostgreSQL version as part of periodic maintenance.
### Major version retirement policy
-The table below provides the retirement details for PostgreSQL major versions.
+The table below provides the retirement details for PostgreSQL major versions in Azure Cosmos DB for PostgreSQL.
The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
-| Version | What's New | Azure support start date | Retirement date (Azure)|
+| Version | What's New | Supported since | Retirement date (Azure)|
| - | - | | - |
-| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
-| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
-| [PostgreSQL 10](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 [Single Server, Flexible Server] |
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 (Flexible Server)| November 12, 2026
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | May 7, 2019 | November 9, 2023 |
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Apr 6, 2021 | November 14, 2024
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | November 13, 2025
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | November 12, 2026
### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 10/06/2022
Last updated : 10/14/2022

# Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
[**Monitoring experimental view**](#monitoring-experimental-view)

* [Simplified default monitoring view](#simplified-default-monitoring-view)
+ * [Error message relocation to Status column](#error-message-relocation-to-status-column)
### Dataflow data-first experimental view
Add columns by clicking **Add column** or remove columns by clicking the trashca
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-22.png" alt-text="Screenshot of the Add column button and trashcan icon to edit column view.":::
-
+#### Error message relocation to Status column
+
+Error messages have now been relocated to the **Status** column. This will allow you to easily view errors when you see a **Failed** pipeline run.
+
+Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
## Provide feedback

We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
The following table lists the differences between different types of TTL:
| Can be disabled | Y | Y | N |
| Reserved compute is configurable | N | Y | N |
+> [!NOTE]
+> You can't enable TTL in the default auto-resolve Azure integration runtime. To use TTL, create a new Azure integration runtime.
## Create a managed virtual network via Azure PowerShell
healthcare-apis Deploy 06 New Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-06-new-deploy.md
Previously updated : 08/30/2022
Last updated : 10/14/2022
For more information about authorizing access to Event Hubs resources, see [Auth
### Grant access to the FHIR service
-The process for granting your MedTech service system-assigned managed identity access to your FHIR service requires the same 13 steps that you used to grant access to your device message event hub. There are two exceptions. The first is that, instead of navigating to the Access Control (IAM) menu from within your event hub (as outlined in steps 1-4), you should navigate to the equivalent Access Control (IAM) menu from within your FHIR service. The second exception is that, in step 6, your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**.
+The process for granting your MedTech service system-assigned managed identity access to your **FHIR service** requires the same 13 steps that you used to grant access to your device message event hub. There are two exceptions. The first is that, instead of navigating to the **Access Control (IAM)** menu from within your event hub (as outlined in steps 1-4), you should navigate to the equivalent **Access Control (IAM)** menu from within your **FHIR service**. The second exception is that, in step 6, your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**.
The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized.
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
To update and run the provisioning sample with your device information:
| Parameter | Required | Description |
| :-- | :- | :-- |
- | `--s` or `--IdScope` | True | The ID Scope of the DPS instance |
- | `--i` or `--Id` | True | The registration ID when using individual enrollment, or the desired device ID when using group enrollment. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The device ID must comply with the [Device ID string requirements](../iot-hub/iot-hub-devguide-identity-registry.md#device-identity-properties). |
- | `--p` or `--PrimaryKey` | True | The primary key of the individual or group enrollment. |
- | `--e` or `--EnrollmentType` | False | The type of enrollment: `Individual` or `Group`. Defaults to `Individual` |
+ | `--i` or `--IdScope` | True | The ID Scope of the DPS instance |
+ | `--r` or `--RegistrationId` | True | The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
+ | `--p` or `--PrimaryKey` | True | The primary key of the individual enrollment or the derived device key of the group enrollment. See the [ComputeDerivedSymmetricKeySample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/Getting%20Started/ComputeDerivedSymmetricKeySample) for how to generate the derived key. |
 | `--g` or `--GlobalDeviceEndpoint` | False | The global endpoint for devices to connect to. Defaults to `global.azure-devices-provisioning.net` |
 | `--t` or `--TransportType` | False | The transport to use to communicate with the device provisioning instance. Defaults to `Mqtt`. Possible values include `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`, `Amqp`, `Amqp_WebSocket_Only`, `Amqp_Tcp_only`, and `Http1`.|
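For group enrollment, the derived device key passed to `--p` is an HMAC-SHA256 of the registration ID, keyed with the base64-decoded enrollment-group primary key. The linked `ComputeDerivedSymmetricKeySample` implements this in C#; here's a minimal sketch of the same derivation (the key and registration ID values are hypothetical placeholders):

```python
import base64
import hashlib
import hmac

def derive_device_key(registration_id: str, group_key_base64: str) -> str:
    """Derive a per-device symmetric key from a DPS enrollment-group key.

    Computes HMAC-SHA256 over the registration ID, keyed with the
    base64-decoded group primary key, and re-encodes the digest as base64.
    """
    key_bytes = base64.b64decode(group_key_base64)
    digest = hmac.new(key_bytes, registration_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical values -- use your own enrollment-group primary key and device ID.
group_key = base64.b64encode(b"0123456789abcdef0123456789abcdef").decode()
print(derive_device_key("symm-key-csharp-device-01", group_key))
```

The derivation is deterministic, so the same registration ID and group key always yield the same device key, which is what lets DPS validate the device without storing a per-device secret.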
To update and run the provisioning sample with your device information:
* Replace `<primarykey>` with the **Primary Key** that you copied from the device enrollment.

  ```cmd
- dotnet run --s <id-scope> --i <registration-id> --p <primarykey>
+ dotnet run --i <id-scope> --r <registration-id> --p <primarykey>
  ```

7. You should now see something similar to the following output. A "TestMessage" string is sent to the hub as a test message.

   ```output
- D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --i 0ne00000A0A --r symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
   Initializing the device provisioning client...
   Initialized for registration Id symm-key-csharp-device-01.
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a secret using Bicep | Microsoft Docs description: Quickstart showing how to create Azure key vaults, and add secrets to the vaults using Bicep. tags: azure-resource-manager Last updated 04/08/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
Two Azure resources are defined in the Bicep file:
```azurecli
az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters keyVaultName=<vault-name> objectID=<object-id>
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters keyVaultName=<vault-name> objectId=<object-id>
```

# [PowerShell](#tab/PowerShell)

```azurepowershell
New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -keyVaultName "<vault-name>" -objectID "<object-id>"
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -keyVaultName "<vault-name>" -objectId "<object-id>"
```
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 08/16/2022 Last updated : 10/15/2022
App settings in Azure Logic Apps work similarly to app settings in Azure Functio
| `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
-||||
<a name="manage-app-settings"></a>

## Manage app settings - local.settings.json
-To add, update, or delete app settings, select and review the following sections for Visual Studio Code, Azure portal, Azure CLI, or ARM (Bicep) template. For app settings specific to logic apps, review the [reference guide for available app settings - local.settings.json](#reference-local-settings-json).
+To add, update, or delete app settings, select and review the following sections for Azure portal, Visual Studio Code, Azure CLI, or ARM (Bicep) template. For app settings specific to logic apps, review the [reference guide for available app settings - local.settings.json](#reference-local-settings-json).
+
+### [Azure portal](#tab/azure-portal)
+
+To review the app settings for your single-tenant based logic app in the Azure portal, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/) search box, find and open your logic app.
+
+1. On your logic app menu, under **Settings**, select **Configuration**.
+
+1. On the **Configuration** page, on the **Application settings** tab, review the app settings for your logic app.
+
+ For more information about these settings, review the [reference guide for available app settings - local.settings.json](#reference-local-settings-json).
+
+1. To view all values, select **Show Values**. Or, to view a single value, select that value.
+
+To add a setting, follow these steps:
+
+1. On the **Application settings** tab, under **Application settings**, select **New application setting**.
+
+1. For **Name**, enter the *key* or name for your new setting.
+
+1. For **Value**, enter the value for your new setting.
+
+1. When you're ready to create your new *key-value* pair, select **OK**.
+
### [Visual Studio Code](#tab/visual-studio-code)
To add an app setting, follow these steps:
}
```
-### [Azure portal](#tab/azure-portal)
-
-To review the app settings for your single-tenant based logic app in the Azure portal, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com/) search box, find and open your logic app.
-
-1. On your logic app menu, under **Settings**, select **Configuration**.
-
-1. On the **Configuration** page, on the **Application settings** tab, review the app settings for your logic app.
-
- For more information about these settings, review the [reference guide for available app settings - local.settings.json](#reference-local-settings-json).
-
-1. To view all values, select **Show Values**. Or, to view a single value, select that value.
-
-To add a setting, follow these steps:
-
-1. On the **Application settings** tab, under **Application settings**, select **New application setting**.
-
-1. For **Name**, enter the *key* or name for your new setting.
-
-1. For **Value**, enter the value for your new setting.
-
-1. When you're ready to create your new *key-value* pair, select **OK**.
--

### [Azure CLI](#tab/azure-cli)

To review your current app settings using the Azure CLI, run the command `az logicapp config appsettings list`. Make sure that your command includes the `--name -n` and `--resource-group -g` parameters, for example:
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Jobs.BackgroundJobs.DispatchingWorkersPulseInterval` | `00:00:01` <br>(1 sec) | Sets the interval for job dispatchers to poll the job queue when the previous poll returns no jobs. Job dispatchers poll the queue immediately when the previous poll returns a job. |
-| `Jobs.BackgroundJobs.NumWorkersPerProcessorCount` | `192` dispatcher worker instances | Sets the number of *dispatcher worker instances* or *job dispatchers* to have per processor core. This value affects the number of workflow runs per core. |
-| `Jobs.BackgroundJobs.NumPartitionsInJobTriggersQueue` | `1` job queue | Sets the number of job queues monitored by job dispatchers for jobs to process. This value also affects the number of storage partitions where job queues exist. |
| `Jobs.BackgroundJobs.NumPartitionsInJobDefinitionsTable` | `4` job partitions | Sets the number of job partitions in the job definition table. This value controls how much execution throughput is affected by partition storage limits. |
+| `Jobs.BackgroundJobs.NumPartitionsInJobTriggersQueue` | `1` job queue | Sets the number of job queues monitored by job dispatchers for jobs to process. This value also affects the number of storage partitions where job queues exist. |
+| `Jobs.BackgroundJobs.NumWorkersPerProcessorCount` | `192` dispatcher worker instances | Sets the number of *dispatcher worker instances* or *job dispatchers* to have per processor core. This value affects the number of workflow runs per core. |
| `Jobs.StuckJobThreshold` | `00:60:00` <br>(60 minutes) | Sets the time duration before a job is declared as stuck. If you have an action that requires more than 60 minutes to run, you might need to increase this setting's default value and also the [`functionTimeout` property](../azure-functions/functions-scale.md#timeout) value in the same **host.json** file to the same value. |
-||||
+
+<a name="recurrence-triggers"></a>
+
+### Recurrence-based triggers
+
+| Setting | Default value | Description |
+|||-|
+| `Microsoft.Azure.Workflows.ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `1` KB | Sets the trigger state's maximum allowed size for recurrence-based triggers such as the built-in SFTP trigger. The trigger state persists data across multiple service provider recurrence-based triggers. <br><br>**Important**: Based on your storage size, avoid setting this value too high, which can adversely affect storage and performance. |
+
+<a name="trigger-concurrency"></a>
+
+### Trigger concurrency
+
+| Setting | Default value | Description |
+|||-|
+| `Runtime.Trigger.MaximumRunConcurrency` | `100` runs | Sets the maximum number of concurrent runs that a trigger can start. This value appears in the trigger's concurrency definition. |
+| `Runtime.Trigger.MaximumWaitingRuns` | `200` runs | Sets the maximum number of runs that can wait after concurrent runs meet the maximum. This value appears in the trigger's concurrency definition. |
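Like the other host settings in this article, trigger concurrency values go in the `extensions` object, under the `workflow` and `settings` objects, in your project's **host.json** file. A minimal sketch (the values shown are illustrative, not recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Trigger.MaximumRunConcurrency": "300",
        "Runtime.Trigger.MaximumWaitingRuns": "500"
      }
    }
  }
}
```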
<a name="run-duration-history"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
+| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the `Runtime.FlowRetentionThreshold` value. Otherwise, run histories can get deleted before the associated jobs are complete. |
| `Runtime.FlowRetentionThreshold` | `90.00:00:00` <br>(90 days) | Sets the amount of time to keep workflow run history after a run starts. |
-| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <p><p>**Important**: Make sure this value is less than or equal to the `Runtime.FlowRetentionThreshold` value. Otherwise, run histories can get deleted before the associated jobs are complete. |
-||||
-
+
<a name="run-actions"></a>

### Run actions
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Runtime.FlowRunRetryableActionJobCallback.ActionJobExecutionTimeout` | `00:10:00` <br>(10 minutes) | Sets the amount of time for a workflow action job to run before timing out and retrying. |
-||||
<a name="inputs-outputs"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
-| `Runtime.FlowRunActionJob.MaximumActionResultSize` | `209715200` bytes | Sets the maximum size in bytes that the combined inputs and outputs can have in an action. |
| `Runtime.ContentLink.MaximumContentSizeInBytes` | `104857600` bytes | Sets the maximum size in bytes that an input or output can have in a trigger or action. |
-||||
+| `Runtime.FlowRunActionJob.MaximumActionResultSize` | `209715200` bytes | Sets the maximum size in bytes that the combined inputs and outputs can have in an action. |
<a name="pagination"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Runtime.FlowRunRetryableActionJobCallback.MaximumPageCount` | `1000` pages | When pagination is supported and enabled on an operation, sets the maximum number of pages to return or process at runtime. |
-||||
<a name="chunking"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.FlowRunRetryableActionJobCallback.MaximumContentLengthInBytesForPartialContent` | `1073741824` bytes | When chunking is supported and enabled on an operation, sets the maximum size in bytes for downloaded or uploaded content. |
| `Runtime.FlowRunRetryableActionJobCallback.MaxChunkSizeInBytes` | `52428800` bytes | When chunking is supported and enabled on an operation, sets the maximum size in bytes for each content chunk. |
| `Runtime.FlowRunRetryableActionJobCallback.MaximumRequestCountForPartialContent` | `1000` requests | When chunking is supported and enabled on an operation, sets the maximum number of requests that an action execution can make to download content. |
-||||
-<a name="trigger-concurrency"></a>
+<a name="store-inline-or-blob"></a>
-### Trigger concurrency
+### Store content inline or use blobs
| Setting | Default value | Description |
|||-|
-| `Runtime.Trigger.MaximumRunConcurrency` | `100` runs | Sets the maximum number of concurrent runs that a trigger can start. This value appears in the trigger's concurrency definition. |
-| `Runtime.Trigger.MaximumWaitingRuns` | `200` runs | Sets the maximum number of runs that can wait after concurrent runs meet the maximum. This value appears in the trigger's concurrency definition. |
-||||
+| `Runtime.FlowRunEngine.ForeachMaximumItemsForContentInlining` | `20` items | When a `For each` loop is running, each item's value is stored either inline with other metadata in table storage or separately in blob storage. Sets the number of items to store inline with other metadata. |
+| `Runtime.FlowRunRetryableActionJobCallback.MaximumPagesForContentInlining` | `20` pages | Sets the maximum number of pages to store as inline content in table storage before storing in blob storage. |
+| `Runtime.FlowTriggerSplitOnJob.MaximumItemsForContentInlining` | `40` items | When the `SplitOn` setting debatches array items into multiple workflow instances, each item's value is stored either inline with other metadata in table storage or separately in blob storage. Sets the number of items to store inline. |
+| `Runtime.ScaleUnit.MaximumCharactersForContentInlining` | `8192` characters | Sets the maximum number of operation input and output characters to store inline in table storage before storing in blob storage. |
<a name="for-each-loop"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Runtime.Backend.FlowDefaultForeachItemsLimit` | `100000` array items | For a *stateful workflow*, sets the maximum number of array items to process in a `For each` loop. |
-| `Runtime.Backend.Stateless.FlowDefaultForeachItemsLimit` | `100` items | For a *stateless workflow*, sets the maximum number of array items to process in a `For each` loop. |
-| `Runtime.Backend.ForeachDefaultDegreeOfParallelism` | `20` iterations | Sets the default number of concurrent iterations, or degree of parallelism, in a `For each` loop. To run sequentially, set the value to `1`. |
| `Runtime.Backend.FlowDefaultSplitOnItemsLimit` | `100000` array items | Sets the maximum number of array items to debatch or split into multiple workflow instances based on the `SplitOn` setting. |
-||||
+| `Runtime.Backend.ForeachDefaultDegreeOfParallelism` | `20` iterations | Sets the default number of concurrent iterations, or degree of parallelism, in a `For each` loop. To run sequentially, set the value to `1`. |
+| `Runtime.Backend.Stateless.FlowDefaultForeachItemsLimit` | `100` items | For a *stateless workflow*, sets the maximum number of array items to process in a `For each` loop. |
<a name="until-loop"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Runtime.Backend.MaximumUntilLimitCount` | `5000` iterations | For a *stateful workflow*, sets the maximum number possible for the `Count` property in an `Until` action. |
-| `Runtime.Backend.Stateless.MaximumUntilLimitCount` | `100` iterations | For a *stateless workflow*, sets the maximum number possible for the `Count` property in an `Until` action. |
| `Runtime.Backend.Stateless.FlowRunTimeout` | `00:05:00` <br>(5 min) | Sets the maximum wait time for an `Until` loop in a stateless workflow. |
-||||
+| `Runtime.Backend.Stateless.MaximumUntilLimitCount` | `100` iterations | For a *stateless workflow*, sets the maximum number possible for the `Count` property in an `Until` action. |
<a name="variables"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description |
|||-|
| `Runtime.Backend.DefaultAppendArrayItemsLimit` | `100000` array items | Sets the maximum number of items in a variable with the Array type. |
-| `Runtime.Backend.VariableOperation.MaximumVariableSize` | Stateful workflow: `104857600` characters | Sets the maximum size in characters for the content that a variable can store when used in a stateful workflow. |
| `Runtime.Backend.VariableOperation.MaximumStatelessVariableSize` | Stateless workflow: `1024` characters | Sets the maximum size in characters for the content that a variable can store when used in a stateless workflow. |
-||||
-
-<a name="recurrence-triggers"></a>
-
-### Recurrence-based triggers
-
-| Setting | Default value | Description |
-|||-|
-| `Microsoft.Azure.Workflows.ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `1` KB | Sets the trigger state's maximum allowed size for recurrence-based triggers such as the built-in SFTP trigger. The trigger state persists data across multiple service provider recurrence-based triggers. <br><br>**Important**: Based on your storage size, avoid setting this value too high, which can adversely affect storage and performance. |
-||||
+| `Runtime.Backend.VariableOperation.MaximumVariableSize` | Stateful workflow: `104857600` characters | Sets the maximum size in characters for the content that a variable can store when used in a stateful workflow. |
<a name="http-operations"></a>
-### HTTP operations
+### Built-in HTTP operations
| Setting | Default value | Description |
|||-|
-| `Runtime.Backend.HttpOperation.RequestTimeout` | `00:03:45` <br>(3 min and 45 sec) | Sets the request timeout value for HTTP triggers and actions. |
-| `Runtime.Backend.HttpOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP triggers and actions. |
| `Runtime.Backend.HttpOperation.DefaultRetryCount` | `4` retries | Sets the default retry count for HTTP triggers and actions. |
| `Runtime.Backend.HttpOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for HTTP triggers and actions. |
| `Runtime.Backend.HttpOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP triggers and actions. |
| `Runtime.Backend.HttpOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP triggers and actions. |
-||||
+| `Runtime.Backend.HttpOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP triggers and actions. |
+| `Runtime.Backend.HttpOperation.RequestTimeout` | `00:03:45` <br>(3 min and 45 sec) | Sets the request timeout value for HTTP triggers and actions. |
<a name="http-webhook"></a>
-### HTTP Webhook operations
+### Built-in HTTP Webhook operations
| Setting | Default value | Description |
|||-|
-| `Runtime.Backend.HttpWebhookOperation.RequestTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for HTTP webhook triggers and actions. |
-| `Runtime.Backend.HttpWebhookOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP webhook triggers and actions. |
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryCount` | `4` retries | Sets the default retry count for HTTP webhook triggers and actions. |
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for HTTP webhook triggers and actions. |
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP webhook triggers and actions. |
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP webhook triggers and actions. |
| `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake-up interval for HTTP webhook trigger and action jobs. |
-||||
+| `Runtime.Backend.HttpWebhookOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP webhook triggers and actions. |
+| `Runtime.Backend.HttpWebhookOperation.RequestTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for HTTP webhook triggers and actions. |
+
+<a name="built-in-storage"></a>
+
+### Built-in Azure Storage operations
+
+<a name="built-in-blob-storage"></a>
+
+#### Blob storage
+
+| Setting | Default value | Description |
+|||-|
+| `Runtime.ContentStorage.RequestOptionsDeltaBackoff` | `00:00:02` <br>(2 sec) | Sets the backoff interval between retries sent to blob storage. |
+| `Runtime.ContentStorage.RequestOptionsMaximumAttempts` | `4` retries | Sets the maximum number of retries sent to blob storage. |
+| `Runtime.ContentStorage.RequestOptionsMaximumExecutionTime` | `00:02:00` <br>(2 min) | Sets the operation timeout value, including retries, for blob requests from the Azure Logic Apps runtime. |
+| `Runtime.ContentStorage.RequestOptionsServerTimeout` | `00:00:30` <br>(30 sec) | Sets the timeout value for blob requests from the Azure Logic Apps runtime. |
+
+<a name="built-in-table-queue-storage"></a>
+
+#### Table and queue storage
+
+| Setting | Default value | Description |
+|||-|
+| `Runtime.DataStorage.RequestOptionsDeltaBackoff` | `00:00:02` <br>(2 sec) | Sets the backoff interval between retries sent to table and queue storage. |
+| `Runtime.DataStorage.RequestOptionsMaximumAttempts` | `4` retries | Sets the maximum number of retries sent to table and queue storage. |
+| `Runtime.DataStorage.RequestOptionsMaximumExecutionTime` | `00:00:45` <br>(45 sec) | Sets the operation timeout value, including retries, for table and queue storage requests from the Azure Logic Apps runtime. |
+| `Runtime.DataStorage.RequestOptionsServerTimeout` | `00:00:16` <br>(16 sec) | Sets the timeout value for table and queue storage requests from the Azure Logic Apps runtime. |
<a name="built-in-azure-functions"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.FunctionOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for Azure Functions actions. |
| `Runtime.Backend.FunctionOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for Azure Functions actions. |
| `Runtime.Backend.FunctionOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for Azure Functions actions. |
-||||
-<a name="built-in-service-bus"></a>
+<a name="built-in-azure-service-bus"></a>
### Built-in Azure Service Bus operations
These settings affect the throughput and capacity for single-tenant Azure Logic
|||-|
| `ServiceProviders.ServiceBus.MessageSenderOperationTimeout` | `00:01:00` <br>(1 min) | Sets the timeout for sending messages with the built-in Service Bus operation. |
| `Runtime.ServiceProviders.ServiceBus.MessageSenderPoolSizePerProcessorCount` | `64` message senders | Sets the number of Azure Service Bus message senders per processor core to use in the message sender pool. |
-||||
<a name="managed-api-connector"></a>
-### Managed API connector operations
+### Managed connector operations
| Setting | Default value | Description |
|||-|
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.ApiWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for managed API connector webhook triggers and actions. |
| `Runtime.Backend.ApiConnectionOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for managed API connector triggers and actions. |
| `Runtime.Backend.ApiWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake-up interval for managed API connector webhook trigger and action jobs. |
-||||
-
-<a name="blob-storage"></a>
-
-### Blob storage
-
-| Setting | Default value | Description |
-|||-|
-| `Runtime.ContentStorage.RequestOptionsServerTimeout` | `00:00:30` <br>(30 sec) | Sets the timeout value for blob requests from the Azure Logic Apps runtime. |
-| `Runtime.DataStorage.RequestOptionsMaximumExecutionTime` | `00:02:00` <br>(2 min) | Sets the operation timeout value, including retries, for table and queue storage requests from the Azure Logic Apps runtime. |
-| `Runtime.ContentStorage.RequestOptionsDeltaBackoff` | `00:00:02` <br>(2 sec) | Sets the backoff interval between retries sent to blob storage. |
-| `Runtime.ContentStorage.RequestOptionsMaximumAttempts` | `4` retries | Sets the maximum number of retries sent to table and queue storage. |
-||||
-
-<a name="store-inline-or-blob"></a>
-
-### Store content inline or use blobs
-
-| Setting | Default value | Description |
-|||-|
-| `Runtime.FlowRunEngine.ForeachMaximumItemsForContentInlining` | `20` items | When a `For each` loop is running, each item's value is stored either inline with other metadata in table storage or separately in blob storage. Sets the number of items to store inline with other metadata. |
-| `Runtime.FlowRunRetryableActionJobCallback.MaximumPagesForContentInlining` | `20` pages | Sets the maximum number of pages to store as inline content in table storage before storing in blob storage. |
-| `Runtime.FlowTriggerSplitOnJob.MaximumItemsForContentInlining` | `40` items | When the `SplitOn` setting debatches array items into multiple workflow instances, each item's value is stored either inline with other metadata in table storage or separately in blob storage. Sets the number of items to store inline. |
-| `Runtime.ScaleUnit.MaximumCharactersForContentInlining` | `8192` characters | Sets the maximum number of operation input and output characters to store inline in table storage before storing in blob storage. |
-||||
-
-<a name="table-queue-storage"></a>
-
-### Table and queue storage
-
-| Setting | Default value | Description |
-|||-|
-| `Runtime.DataStorage.RequestOptionsServerTimeout` | `00:00:16` <br>(16 sec) | Sets the timeout value for table and queue storage requests from the Azure Logic Apps runtime. |
-| `Runtime.DataStorage.RequestOptionsMaximumExecutionTime` | `00:00:45` <br>(45 sec) | Sets the operation timeout value, including retries, for table and queue storage requests from the Azure Logic Apps runtime. |
-| `Runtime.DataStorage.RequestOptionsDeltaBackoff` | `00:00:02` <br>(2 sec) | Sets the backoff interval between retries sent to table and queue storage. |
-| `Runtime.DataStorage.RequestOptionsMaximumAttempts` | `4` retries | Sets the maximum number of retries sent to table and queue storage. |
-||||
<a name="retry-policy"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.Operation.MaximumRetryCount` | `90` retries | Sets the maximum number of retries in the retry policy definition for a workflow operation. |
| `Runtime.Backend.Operation.MaximumRetryInterval` | `01:00:00:01` <br>(1 day and 1 sec) | Sets the maximum interval in the retry policy definition for a workflow operation. |
| `Runtime.Backend.Operation.MinimumRetryInterval` | `00:00:05` <br>(5 sec) | Sets the minimum interval in the retry policy definition for a workflow operation. |
-||||
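The retry policy bounds above can be read as clamping an exponential backoff between the minimum and maximum intervals. Here's a minimal sketch of that clamping behavior; it's illustrative only, not the Azure Logic Apps runtime's actual implementation, and the function name and parameters are hypothetical:

```python
def retry_delays(count: int, interval: float, minimum: float, maximum: float):
    """Yield exponentially growing retry delays, clamped to [minimum, maximum].

    Mirrors how a retry policy's per-attempt interval is bounded by its
    minimum and maximum interval settings (illustrative sketch only).
    """
    for attempt in range(count):
        delay = interval * (2 ** attempt)  # exponential growth per attempt
        yield max(minimum, min(delay, maximum))  # clamp to the configured bounds

# Example: base interval 7 sec, min 5 sec, max 3600 sec, 5 attempts.
delays = list(retry_delays(count=5, interval=7, minimum=5, maximum=3600))
```

With the defaults shown in the table, the later attempts would all hit the maximum-interval ceiling rather than growing without bound.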
<a name="manage-host-settings"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
You can add, update, or delete host settings, which specify the runtime configuration settings and values that apply to *all the workflows* in that logic app, such as default values for throughput, capacity, data size, and so on, *whether they run locally or in Azure*. For host settings specific to logic apps, review the [reference guide for available runtime and deployment settings - host.json](#reference-host-json).
-### Visual Studio Code - host.json
+### Azure portal - host.json
-To review the host settings for your logic app in Visual Studio Code, follow these steps:
+To review the host settings for your single-tenant based logic app in the Azure portal, follow these steps:
-1. In your logic app project, at the root project level, find and open the **host.json** file.
+1. In the [Azure portal](https://portal.azure.com/) search box, find and open your logic app.
-1. In the `extensions` object, under `workflows` and `settings`, review any host settings that were previously added for your logic app. Otherwise, the `extensions` object won't appear in the file.
+1. In the Azure portal, stop your logic app.
+
+ 1. On your logic app menu, select **Overview**.
+
+ 1. On the **Overview** pane's toolbar, select **Stop**.
+
+1. On your logic app menu, under **Development Tools**, select **Advanced Tools**.
+
+1. On the **Advanced Tools** pane, select **Go**, which opens the Kudu environment for your logic app.
+
+1. On the Kudu toolbar, open the **Debug console** menu, and select **CMD**.
+
+ A console window opens so that you can browse to the **wwwroot** folder using the command prompt. Or, you can browse the directory structure that appears above the console window.
+
+1. Browse along the following path to the **wwwroot** folder: `...\home\site\wwwroot`.
+
+1. Above the console window, in the directory table, next to the **host.json** file, select **Edit**.
+
+1. After the **host.json** file opens, review any host settings that were previously added for your logic app.
For more information about host settings, review the [reference guide for available host settings - host.json](#reference-host-json).
-To add a host setting, follow these steps:
+To add a setting, follow these steps:
-1. In the **host.json** file, under the `extensionBundle` object, add the `extensions` object, which includes the `workflow` and `settings` objects, for example:
+1. Before you add or edit settings, stop your logic app in the Azure portal.
+
+ 1. On your logic app menu, select **Overview**.
+ 1. On the **Overview** pane's toolbar, select **Stop**.
+
+1. Return to the **host.json** file. Under the `extensionBundle` object, add the `extensions` object, which includes the `workflow` and `settings` objects, for example:
```json {
To add a host setting, follow these steps:
} ```
-### Azure portal - host.json
-
-To review the host settings for your single-tenant based logic app in the Azure portal, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com/) search box, find and open your logic app.
-
-1. On your logic app menu, under **Development Tools**, select **Advanced Tools**.
-
-1. On the **Advanced Tools** page, select **Go**, which opens the **Kudu** environment for your logic app.
-
-1. On the Kudu toolbar, from the **Debug console** menu, select **CMD**.
-
-1. In the Azure portal, stop your logic app.
-
- 1. On your logic app menu, select **Overview**.
-
- 1. On the **Overview** pane's toolbar, select **Stop**.
-
-1. On your logic app menu, under **Development Tools**, select **Advanced Tools**.
-
-1. On the **Advanced Tools** pane, select **Go**, which opens the Kudu environment for your logic app.
+1. When you're done, remember to select **Save**.
-1. On the Kudu toolbar, open the **Debug console** menu, and select **CMD**.
+1. Now, restart your logic app. Return to your logic app's **Overview** page, and select **Restart**.
- A console window opens so that you can browse to the **wwwroot** folder using the command prompt. Or, you can browse the directory structure that appears above the console window.
+### Visual Studio Code - host.json
-1. Browse along the following path to the **wwwroot** folder: `...\home\site\wwwroot`.
+To review the host settings for your logic app in Visual Studio Code, follow these steps:
-1. Above the console window, in the directory table, next to the **host.json** file, select **Edit**.
+1. In your logic app project, at the root project level, find and open the **host.json** file.
-1. After the **host.json** file opens, review any host settings that were previously added for your logic app.
+1. In the `extensions` object, under `workflows` and `settings`, review any host settings that were previously added for your logic app. Otherwise, the `extensions` object won't appear in the file.
For more information about host settings, review the [reference guide for available host settings - host.json](#reference-host-json).
-To add a setting, follow these steps:
-
-1. Before you add or edit settings, stop your logic app in the Azure portal.
-
- 1. On your logic app menu, select **Overview**.
- 1. On the **Overview** pane's toolbar, select **Stop**.
+To add a host setting, follow these steps:
-1. Return to the **host.json** file. Under the `extensionBundle` object, add the `extensions` object, which includes the `workflow` and `settings` objects, for example:
+1. In the **host.json** file, under the `extensionBundle` object, add the `extensions` object, which includes the `workflow` and `settings` objects, for example:
```json {
To add a setting, follow these steps:
} ```
-1. When you're done, remember to select **Save**.
-
-1. Now, restart your logic app. Return to your logic app's **Overview** page, and select **Restart**.
- ## Next steps
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
When you provide a data input/output to a Job, you'll need to specify a `path` p
|||
|A path on your local computer | `./home/username/data/my_data` |
|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
-|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/<path>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
|A path on a Datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
|A path to a Data Asset | `azureml:<my_data>:<version>` |
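As a sketch of how these `path` forms appear in a job definition (the input names and the `workspaceblobstore` datastore below are illustrative assumptions), the job YAML's `inputs` section might look like this:

```yaml
# Illustrative input definitions; the names and datastore are assumptions.
inputs:
  local_data:
    type: uri_file
    path: ./home/username/data/my_data
  public_data:
    type: uri_file
    path: https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv
  datastore_data:
    type: uri_folder
    path: azureml://datastores/workspaceblobstore/paths/<path>
```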
network-watcher Network Watcher Nsg Flow Logging Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md
Previously updated : 11/16/2021 Last updated : 10/17/2022 # Customer intent: I need to log the network traffic to and from a VM so I can analyze it for anomalies.
In this tutorial, you learn how to:
> * Download logged data
> * View logged data
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription.
## Create a virtual machine
In this tutorial, you learn how to:
2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-3. In **Virtual machines**, select **+ Create** then **+ Virtual machine**.
+3. In **Virtual machines**, select **+ Create** then **+ Azure virtual machine**.
4. Enter or select the following information in **Create a virtual machine**.
In this tutorial, you learn how to:
| Azure Spot instance | Leave the default. |
| Size | Select a size. |
| **Administrator account** | |
- | Authentication type | Select **SSH public key**. |
| Username | Enter a username. |
| Password | Enter a password. |
| Confirm password | Confirm password. |
NSG flow logging requires the **Microsoft.Insights** provider. To register the p
5. Confirm the status of the provider displayed is **Registered**. If the status is **Unregistered**, select the provider then select **Register**.
+ :::image type="content" source="./media/network-watcher-nsg-flow-logging-portal/microsoft-insights-registered.png" alt-text="Screenshot of registering microsoft insights provider.":::
+
## Enable NSG flow log

NSG flow log data is written to an Azure Storage account. Complete the following steps to create a storage account for the log data.
NSG flow log data is written to an Azure Storage account. Complete the following
| Subscription | Select your subscription. |
| Resource group | Select **myResourceGroup**. |
| **Instance details** | |
- | Storage account name | Enter a name for your storage account. </br> Must be 3-24 characters in length, can only contain lowercase letters and numbers, and must be unique across all Azure Storage. |
- | Region | Select **(US)East US**. |
+ | Storage account name | Enter a name for your storage account. <br> Must be 3-24 characters long, can contain only lowercase letters and numbers, and must be unique across all Azure Storage. |
+ | Region | Select **(US) East US**. |
| Performance | Leave the default of **Standard**. |
| Redundancy | Leave the default of **Geo-redundant storage (GRS)**. |
-4. Select **Review + create**.
+4. Select **Review**.
5. Select **Create**.
The following example JSON displays data that you'll see in the PT1H.json file f
"time": "2018-05-01T15:00:02.1713710Z", "systemId": "<Id>", "category": "NetworkSecurityGroupFlowEvent",
- "resourceId": "/SUBSCRIPTIONS/<Id>/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/MYVM-NSG",
+ "resourceId": "/SUBSCRIPTIONS/<subscriptionId>/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/MYVM-NSG",
"operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1,
The following example JSON displays data that you'll see in the PT1H.json file f
"rule": "UserRule_default-allow-rdp", "flows": [ {
- "mac": "000D3A170C69",
+ "mac": "<macAddress>",
"flowTuples": [ "1525186745,192.168.1.4,10.0.0.4,55960,3389,T,I,A" ]
The following example JSON displays data that you'll see in the PT1H.json file f
```json { "time": "2018-11-13T12:00:35.3899262Z",
- "systemId": "a0fca5ce-022c-47b1-9735-89943b42f2fa",
+ "systemId": "<Id>",
"category": "NetworkSecurityGroupFlowEvent",
- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
+ "resourceId": "/SUBSCRIPTIONS/<subscriptionId>/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/MYVM-NSG",
"operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 2,
The following example JSON displays data that you'll see in the PT1H.json file f
"rule": "DefaultRule_DenyAllInBound", "flows": [ {
- "mac": "000D3AF87856",
+ "mac": "<macAddress>",
"flowTuples": [ "1542110402,94.102.49.190,10.5.16.4,28746,443,U,I,D,B,,,,", "1542110424,176.119.4.10,10.5.16.4,56509,59336,T,I,D,B,,,,",
The following example JSON displays data that you'll see in the PT1H.json file f
"rule": "DefaultRule_AllowInternetOutBound", "flows": [ {
- "mac": "000D3AF87856",
+ "mac": "<macAddress>",
"flowTuples": [ "1542110377,10.5.16.4,13.67.143.118,59831,443,T,O,A,B,,,,", "1542110379,10.5.16.4,13.67.143.117,59932,443,T,O,A,E,1,66,1,66",
openshift Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-networking.md
The following list covers important networking components in an Azure Red Hat Op
* This endpoint balances internal service traffic. For this load balancer, the worker nodes are in the backend pool.
* This load balancer isn't created by default. It's created once you create a service of type LoadBalancer with the correct annotations. For example: `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`.
-* **aro-internal-lb**
+* **aro**
* This endpoint is used for any public traffic. When you create an application and a route, this endpoint is the path for ingress traffic.
* This load balancer also covers egress Internet connectivity from any pod running in the worker nodes through Azure Load Balancer outbound rules.
* Currently outbound rules aren't configurable. They allocate 1,024 TCP ports to each node.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME r
Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications.

> [!IMPORTANT]
-> Private networks already using the private DNS zone for a given type, can only connect to public resources if they don't have any private endpoint connections, otherwise a corresponding DNS configuration is required on the private DNS zone in order to complete the DNS resolution sequence.
+> * Private networks that already use the private DNS zone for a given type can connect to public resources only if they have no private endpoint connections. Otherwise, a corresponding DNS configuration is required on the private DNS zone to complete the DNS resolution sequence.
+> * Private endpoint private DNS zone configurations are generated automatically only if you use the recommended naming scheme in the following table.
For Azure services, use the recommended zone names as described in the following table:
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
When setting up scan, you can further scope the scan after providing the databas
### Known limitations
-* Microsoft Purview doesn't support over 300 columns in the Schema tab and it will show "Additional-Columns-Truncated" if there are more than 300 columns.
+* Microsoft Purview doesn't support more than 800 columns in the Schema tab. If a table has more than 800 columns, the Schema tab shows "Additional-Columns-Truncated".
* Column level lineage is currently not supported in the lineage tab. However, the columnMapping attribute in the properties tab of an Azure SQL Stored Procedure Run captures column lineage in plain text.
* Stored procedures run remotely from data integration tools like Azure Data Factory are currently not supported.
* Data lineage extraction is currently not supported for Functions and Triggers.
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
This article outlines how to register dedicated SQL pools (formerly SQL DW), and
### Known limitations
-* Microsoft Purview doesn't support over 300 columns in the Schema tab and it will show "Additional-Columns-Truncated".
+* Microsoft Purview doesn't support more than 800 columns in the Schema tab; tables with more columns show "Additional-Columns-Truncated".
## Prerequisites
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Previously updated : 10/04/2022 Last updated : 10/14/2022
After you've specified the key from the key vault in the customer's tenant, the
### [PowerShell](#tab/azure-powershell)
-To configure cross-tenant customer-managed keys for a new storage account in PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
+To configure cross-tenant customer-managed keys for a new storage account with PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
Remember to replace the placeholder values in brackets with your own values and
```azurepowershell
$accountName = "<storage-account>"
$kvUri = "<key-vault-uri>"
-$keyName = "<keyName>"
+$keyName = "<key-name>"
$multiTenantAppId = "<multi-tenant-app-id>"

Set-AzStorageAccount -ResourceGroupName $rgName `
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Previously updated : 10/04/2022 Last updated : 10/14/2022
New-AzStorageAccount -ResourceGroupName $rgName `
### [Azure CLI](#tab/azure-cli)
-To configure cross-tenant customer-managed keys for a new storage account in Azure CLI, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+To configure cross-tenant customer-managed keys for a new storage account with Azure CLI, first install the [storage-preview](https://github.com/Azure/azure-cli-extensions/tree/main/src/storage-preview) extension. For more information about installing Azure CLI extensions, see [How to install and manage Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+Next, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurecli
+accountName="<storage-account>"
+kvUri="<key-vault-uri>"
+keyName="<key-name>"
+multiTenantAppId="<multi-tenant-app-id>"
+
+# Get the resource ID for the user-assigned managed identity.
+identityResourceId=$(az identity show --name $managedIdentity \
+ --resource-group $isvRgName \
+ --query id \
+ --output tsv)
+
+az storage account create \
+ --name $accountName \
+ --resource-group $isvRgName \
+ --location $isvLocation \
+ --sku Standard_LRS \
+ --kind StorageV2 \
+ --identity-type SystemAssigned,UserAssigned \
+ --user-identity-id $identityResourceId \
+ --encryption-key-vault $kvUri \
+ --encryption-key-name $keyName \
+ --encryption-key-source Microsoft.Keyvault \
+ --key-vault-user-identity-id $identityResourceId \
+ --key-vault-federated-client-id $multiTenantAppId
+```
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
The following authentication methods are supported for Azure AD server principal
- SQL Server Data Tools for Visual Studio 2015 requires at least the April 2016 version of the Data Tools (version 14.0.60311.1). Currently, Azure AD users aren't shown in SSDT Object Explorer. As a workaround, view the users in [sys.database_principals](/sql/relational-databases/system-catalog-views/sys-database-principals-transact-sql?view=azure-sqldw-latest&preserve-view=true).
- [Microsoft JDBC Driver 6.0 for SQL Server](https://www.microsoft.com/download/details.aspx?id=11774) supports Azure AD authentication. Also, see [Setting the Connection Properties](/sql/connect/jdbc/setting-the-connection-properties?view=azure-sqldw-latest&preserve-view=true).
- The Azure Active Directory admin account controls access to dedicated SQL pools, while Synapse RBAC roles are used to control access to serverless pools, for example, the **Synapse Administrator** role. Configure Synapse RBAC roles via Synapse Studio. For more information, see [How to manage Synapse RBAC role assignments in Synapse Studio](../security/how-to-manage-synapse-rbac-role-assignments.md).
+- If a user is configured as both an Azure Active Directory administrator and a Synapse Administrator and is then removed from the Azure Active Directory administrator role, the user loses access to the dedicated SQL pools in Synapse. To regain access to dedicated SQL pools, they must be removed from and then added back to the Synapse Administrator role.
## Next steps
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
The **SQL admin username** and **SQL Active Directory admin** accounts have the
- Can add and remove members to the `dbmanager` and `loginmanager` roles.
- Can view the `sys.sql_logins` system table.
+> [!NOTE]
+> If a user is configured as both an Active Directory admin and a Synapse Administrator and is then removed from the Active Directory admin role, the user loses access to the dedicated SQL pools in Synapse. To regain access to dedicated SQL pools, they must be removed from and then added back to the Synapse Administrator role.
+
## [Serverless SQL pool](#tab/serverless)

To manage the users having access to serverless SQL pool, you can use the instructions below.