Updates from: 05/12/2023 01:25:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* Install and configure Azure PowerShell.
- * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
+ * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
  * Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
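The prerequisite steps above can be sketched as follows. This is a minimal sketch, assuming the standard `Az` and `AzureAD` modules; both connect cmdlets prompt interactively for credentials:

```azurepowershell
# Install the Azure PowerShell and Azure AD PowerShell modules if they aren't present
Install-Module -Name Az -Scope CurrentUser
Install-Module -Name AzureAD -Scope CurrentUser

# Sign in to your Azure subscription, then to your Azure AD tenant
Connect-AzAccount
Connect-AzureAD
```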
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
This article shows you how to enable Azure AD DS using PowerShell.
To complete this article, you need the following resources:

* Install and configure Azure PowerShell.
- * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
+ * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
  * Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
While disabling NTLM password synchronization will improve security, many applic
## Use PowerShell to harden your domain
-If needed, [install and configure Azure PowerShell](/powershell/azure/install-az-ps). Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
+If needed, [install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
Also if needed, [install and configure Azure AD PowerShell](/powershell/azure/active-directory/install-adv2). Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md
- Title: Enable security audits for Azure AD Domain Services | Microsoft Docs
+ Title: Enable security and DNS audits for Azure AD Domain Services | Microsoft Docs
description: Learn how to enable security audits to centralize the logging of events for analysis and alerts in Azure AD Domain Services
Previously updated : 01/29/2023 Last updated : 04/17/2023
-# Enable security audits for Azure Active Directory Domain Services
+# Enable security and DNS audits for Azure Active Directory Domain Services
-Azure Active Directory Domain Services (Azure AD DS) security audits lets Azure stream security events to targeted resources. These resources include Azure Storage, Azure Log Analytics workspaces, or Azure Event Hub. After you enable security audit events, Azure AD DS sends all the audited events for the selected category to the targeted resource.
+Azure Active Directory Domain Services (Azure AD DS) security and DNS audits let Azure stream events to targeted resources. These resources include Azure Storage, Azure Log Analytics workspaces, or Azure Event Hub. After you enable security audit events, Azure AD DS sends all the audited events for the selected category to the targeted resource.
You can archive events into Azure Storage and stream events into security information and event management (SIEM) software (or equivalent) using Azure Event Hubs, or do your own analysis using Azure Log Analytics workspaces from the Azure portal.
-> [!IMPORTANT]
-> Azure AD DS security audits are only available for Azure Resource Manager-based managed domains. For information on how to migrate, see [Migrate Azure AD DS from the Classic virtual network model to Resource Manager][migrate-azure-adds].
-
## Security audit destinations

You can use Azure Storage, Azure Event Hubs, or Azure Log Analytics workspaces as a target resource for Azure AD DS security audits. These destinations can be combined. For example, you could use Azure Storage for archiving security audit events, but an Azure Log Analytics workspace to analyze and report on the information in the short term.
To enable Azure AD DS security audit events using the Azure portal, complete the
> [!IMPORTANT]
> Azure AD DS security audits aren't retroactive. You can't retrieve or replay events from the past. Azure AD DS can only send events that occur after security audits are enabled.
-1. Sign in to the Azure portal at https://portal.azure.com.
+1. Sign in to the Azure portal.
1. At the top of the Azure portal, search for and select **Azure AD Domain Services**. Choose your managed domain, such as *aaddscontoso.com*.
1. In the Azure AD DS window, select **Diagnostic settings** on the left-hand side.
1. No diagnostics are configured by default. To get started, select **Add diagnostic setting**.
To enable Azure AD DS security audit events using the Azure portal, complete the
1. Enter a name for the diagnostic configuration, such as *aadds-auditing*.
- Check the box for the security audit destination you want. You can choose from an Azure Storage account, an Azure event hub, or a Log Analytics workspace. These destination resources must already exist in your Azure subscription. You can't create the destination resources in this wizard.
-
- ![Enable the required destination and type of audit events to capture](./media/security-audit-events/diagnostic-settings-page.png)
-
+ Check the box for the security or DNS audit destination you want. You can choose from a Log Analytics workspace, an Azure Storage account, an Azure event hub, or a partner solution. These destination resources must already exist in your Azure subscription. You can't create the destination resources in this wizard.
+ * **Azure Log Analytics workspaces**
+ * Select **Send to Log Analytics**, then choose the **Subscription** and **Log Analytics Workspace** you want to use to store audit events.
* **Azure storage**
  * Select **Archive to a storage account**, then choose **Configure**.
- * Select the **Subscription** and the **Storage account** you want to use to archive security audit events.
+ * Select the **Subscription** and the **Storage account** you want to use to archive audit events.
  * When ready, choose **OK**.
* **Azure event hubs**
  * Select **Stream to an event hub**, then choose **Configure**.
  * Select the **Subscription** and the **Event hub namespace**. If needed, also choose an **Event hub name** and then **Event hub policy name**.
  * When ready, choose **OK**.
- * **Azure Log Analytic workspaces**
- * Select **Send to Log Analytics**, then choose the **Subscription** and **Log Analytics Workspace** you want to use to store security audit events.
+ * **Partner solution**
+ * Select **Send to partner solution**, then choose the **Subscription** and **Destination** you want to use to store audit events.
+ 1. Select the log categories you want included for the particular target resource. If you send the audit events to an Azure Storage account, you can also configure a retention policy that defines the number of days to retain data. A default setting of *0* retains all data and doesn't rotate events after a period of time.
- You can select different log categories for each targeted resource within a single configuration. This ability lets you choose which logs categories you want to keep for Log Analytics and which logs categories your want to archive, for example.
+ You can select different log categories for each targeted resource within a single configuration. This ability lets you choose which logs categories you want to keep for Log Analytics and which logs categories you want to archive, for example.
-1. When done, select **Save** to commit your changes. The target resources start to receive Azure AD DS security audit events soon after the configuration is saved.
+1. When done, select **Save** to commit your changes. The target resources start to receive Azure AD DS audit events soon after the configuration is saved.
-## Enable security audit events using Azure PowerShell
+## Enable security and DNS audit events using Azure PowerShell
-To enable Azure AD DS security audit events using Azure PowerShell, complete the following steps. If needed, first [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
+To enable Azure AD DS security and DNS audit events using Azure PowerShell, complete the following steps. If needed, first [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
-> Azure AD DS security audits aren't retroactive. You can't retrieve or replay events from the past. Azure AD DS can only send events that occur after security audits are enabled.
+> Azure AD DS audits aren't retroactive. You can't retrieve or replay events from the past. Azure AD DS can only send events that occur after audits are enabled.
1. Authenticate to your Azure subscription using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet. When prompted, enter your account credentials.
To enable Azure AD DS security audit events using Azure PowerShell, complete the
Connect-AzAccount
```
-1. Create the target resource for the security audit events.
+1. Create the target resource for the audit events.
+ * **Azure Log Analytics workspaces** - [Create a Log Analytics workspace with Azure PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
* **Azure storage** - [Create a storage account using Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell)
* **Azure event hubs** - [Create an event hub using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Azure AD DS permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights.

  > [!IMPORTANT]
  > Ensure you set the authorization rule on the event hub namespace and not the event hub itself.
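The authorization-rule requirement above can be sketched with `New-AzEventHubAuthorizationRule`. The resource group, namespace, and rule names in this sketch are placeholders, not values from the article:

```azurepowershell
# Grant Manage, Listen, and Send rights on the event hub *namespace*,
# not on the individual event hub. All names here are placeholders.
New-AzEventHubAuthorizationRule `
    -ResourceGroupName "myResourceGroup" `
    -NamespaceName "myEventHubNamespace" `
    -Name "aadds-auditing-rule" `
    -Rights @("Manage", "Listen", "Send")
```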
- * **Azure Log Analytic workspaces** - [Create a Log Analytics workspace with Azure PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
-
1. Get the resource ID for your Azure AD DS managed domain using the [Get-AzResource](/powershell/module/Az.Resources/Get-AzResource) cmdlet. Create a variable named *$aadds.ResourceId* to hold the value:

   ```azurepowershell
   $aadds = Get-AzResource -name aaddsDomainName
   ```
-1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/Az.Monitor/Set-AzDiagnosticSetting) cmdlet to use the target resource for Azure AD Domain Services security audit events. In the following examples, the variable *$aadds.ResourceId* is used from the previous step.
+1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/Az.Monitor/Set-AzDiagnosticSetting) cmdlet to use the target resource for Azure AD Domain Services audit events. In the following examples, the variable *$aadds.ResourceId* is used from the previous step.
* **Azure storage** - Replace *storageAccountId* with your storage account name:
To enable Azure AD DS security audit events using Azure PowerShell, complete the
-Enabled $true
```
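For the Log Analytics workspace destination, `Set-AzDiagnosticSetting` accepts a `-WorkspaceId` parameter instead of a storage account ID. A minimal sketch, in which the workspace resource ID is a placeholder:

```azurepowershell
# Send Azure AD DS audit events to a Log Analytics workspace.
# The workspace resource ID below is a placeholder.
$workspaceId = "/subscriptions/<subscription-id>/resourcegroups/myResourceGroup/providers/microsoft.operationalinsights/workspaces/myWorkspace"

Set-AzDiagnosticSetting -ResourceId $aadds.ResourceId `
    -WorkspaceId $workspaceId `
    -Enabled $true
```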
-## Query and view security audit events using Azure Monitor
+## Query and view security and DNS audit events using Azure Monitor
-Log Analytic workspaces let you view and analyze the security audit events using Azure Monitor and the Kusto query language. This query language is designed for read-only use that boasts power analytic capabilities with an easy-to-read syntax. For more information to get started with Kusto query languages, see the following articles:
+Log Analytics workspaces let you view and analyze the security and DNS audit events using Azure Monitor and the Kusto query language. This query language is designed for read-only use and offers powerful analytic capabilities with an easy-to-read syntax. For more information about getting started with Kusto queries, see the following articles:
* [Azure Monitor documentation](../azure-monitor/index.yml)
* [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
* [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
* [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md)
-The following sample queries can be used to start analyzing security audit events from Azure AD DS.
+The following sample queries can be used to start analyzing audit events from Azure AD DS.
### Sample query 1
AADDomainServicesAccountLogon
| summarize count()
```
-## Audit event categories
+## Audit security and DNS event categories
-Azure AD DS security audits align with traditional auditing for traditional AD DS domain controllers. In hybrid environments, you can reuse existing audit patterns so the same logic may be used when analyzing the events. Depending on the scenario you need to troubleshoot or analyze, the different audit event categories need to be targeted.
+Azure AD DS security and DNS audits align with traditional auditing for AD DS domain controllers. In hybrid environments, you can reuse existing audit patterns, so the same logic can be used when analyzing the events. Depending on the scenario you need to troubleshoot or analyze, target the appropriate audit event categories.
The following audit event categories are available:

| Audit Category Name | Description |
|:|:|
-| Account Logon|Audits attempts to authenticate account data on a domain controller or on a local Security Accounts Manager (SAM).</p>Logon and Logoff policy settings and events track attempts to access a particular computer. Settings and events in this category focus on the account database that is used. This category includes the following subcategories:<ul><li>[Audit Credential Validation](/windows/security/threat-protection/auditing/audit-credential-validation)</li><li>[Audit Kerberos Authentication Service](/windows/security/threat-protection/auditing/audit-kerberos-authentication-service)</li><li>[Audit Kerberos Service Ticket Operations](/windows/security/threat-protection/auditing/audit-kerberos-service-ticket-operations)</li><li>[Audit Other Logon/Logoff Events](/windows/security/threat-protection/auditing/audit-other-logonlogoff-events)</li></ul>|
-| Account Management|Audits changes to user and computer accounts and groups. This category includes the following subcategories:<ul><li>[Audit Application Group Management](/windows/security/threat-protection/auditing/audit-application-group-management)</li><li>[Audit Computer Account Management](/windows/security/threat-protection/auditing/audit-computer-account-management)</li><li>[Audit Distribution Group Management](/windows/security/threat-protection/auditing/audit-distribution-group-management)</li><li>[Audit Other Account Management](/windows/security/threat-protection/auditing/audit-other-account-management-events)</li><li>[Audit Security Group Management](/windows/security/threat-protection/auditing/audit-security-group-management)</li><li>[Audit User Account Management](/windows/security/threat-protection/auditing/audit-user-account-management)</li></ul>|
-| Detail Tracking|Audits activities of individual applications and users on that computer, and to understand how a computer is being used. This category includes the following subcategories:<ul><li>[Audit DPAPI Activity](/windows/security/threat-protection/auditing/audit-dpapi-activity)</li><li>[Audit PNP activity](/windows/security/threat-protection/auditing/audit-pnp-activity)</li><li>[Audit Process Creation](/windows/security/threat-protection/auditing/audit-process-creation)</li><li>[Audit Process Termination](/windows/security/threat-protection/auditing/audit-process-termination)</li><li>[Audit RPC Events](/windows/security/threat-protection/auditing/audit-rpc-events)</li></ul>|
-| Directory Services Access|Audits attempts to access and modify objects in Active Directory Domain Services (AD DS). These audit events are logged only on domain controllers. This category includes the following subcategories:<ul><li>[Audit Detailed Directory Service Replication](/windows/security/threat-protection/auditing/audit-detailed-directory-service-replication)</li><li>[Audit Directory Service Access](/windows/security/threat-protection/auditing/audit-directory-service-access)</li><li>[Audit Directory Service Changes](/windows/security/threat-protection/auditing/audit-directory-service-changes)</li><li>[Audit Directory Service Replication](/windows/security/threat-protection/auditing/audit-directory-service-replication)</li></ul>|
-| Logon-Logoff|Audits attempts to log on to a computer interactively or over a network. These events are useful for tracking user activity and identifying potential attacks on network resources. This category includes the following subcategories:<ul><li>[Audit Account Lockout](/windows/security/threat-protection/auditing/audit-account-lockout)</li><li>[Audit User/Device Claims](/windows/security/threat-protection/auditing/audit-user-device-claims)</li><li>[Audit IPsec Extended Mode](/windows/security/threat-protection/auditing/audit-ipsec-extended-mode)</li><li>[Audit Group Membership](/windows/security/threat-protection/auditing/audit-group-membership)</li><li>[Audit IPsec Main Mode](/windows/security/threat-protection/auditing/audit-ipsec-main-mode)</li><li>[Audit IPsec Quick Mode](/windows/security/threat-protection/auditing/audit-ipsec-quick-mode)</li><li>[Audit Logoff](/windows/security/threat-protection/auditing/audit-logoff)</li><li>[Audit Logon](/windows/security/threat-protection/auditing/audit-logon)</li><li>[Audit Network Policy Server](/windows/security/threat-protection/auditing/audit-network-policy-server)</li><li>[Audit Other Logon/Logoff Events](/windows/security/threat-protection/auditing/audit-other-logonlogoff-events)</li><li>[Audit Special Logon](/windows/security/threat-protection/auditing/audit-special-logon)</li></ul>|
-|Object Access| Audits attempts to access specific objects or types of objects on a network or computer. This category includes the following subcategories:<ul><li>[Audit Application Generated](/windows/security/threat-protection/auditing/audit-application-generated)</li><li>[Audit Certification Services](/windows/security/threat-protection/auditing/audit-certification-services)</li><li>[Audit Detailed File Share](/windows/security/threat-protection/auditing/audit-detailed-file-share)</li><li>[Audit File Share](/windows/security/threat-protection/auditing/audit-file-share)</li><li>[Audit File System](/windows/security/threat-protection/auditing/audit-file-system)</li><li>[Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection)</li><li>[Audit Filtering Platform Packet Drop](/windows/security/threat-protection/auditing/audit-filtering-platform-packet-drop)</li><li>[Audit Handle Manipulation](/windows/security/threat-protection/auditing/audit-handle-manipulation)</li><li>[Audit Kernel Object](/windows/security/threat-protection/auditing/audit-kernel-object)</li><li>[Audit Other Object Access Events](/windows/security/threat-protection/auditing/audit-other-object-access-events)</li><li>[Audit Registry](/windows/security/threat-protection/auditing/audit-registry)</li><li>[Audit Removable Storage](/windows/security/threat-protection/auditing/audit-removable-storage)</li><li>[Audit SAM](/windows/security/threat-protection/auditing/audit-sam)</li><li>[Audit Central Access Policy Staging](/windows/security/threat-protection/auditing/audit-central-access-policy-staging)</li></ul>|
-|Policy Change|Audits changes to important security policies on a local system or network. Policies are typically established by administrators to help secure network resources. Monitoring changes or attempts to change these policies can be an important aspect of security management for a network. This category includes the following subcategories:<ul><li>[Audit Audit Policy Change](/windows/security/threat-protection/auditing/audit-audit-policy-change)</li><li>[Audit Authentication Policy Change](/windows/security/threat-protection/auditing/audit-authentication-policy-change)</li><li>[Audit Authorization Policy Change](/windows/security/threat-protection/auditing/audit-authorization-policy-change)</li><li>[Audit Filtering Platform Policy Change](/windows/security/threat-protection/auditing/audit-filtering-platform-policy-change)</li><li>[Audit MPSSVC Rule-Level Policy Change](/windows/security/threat-protection/auditing/audit-mpssvc-rule-level-policy-change)</li><li>[Audit Other Policy Change](/windows/security/threat-protection/auditing/audit-other-policy-change-events)</li></ul>|
-|Privilege Use| Audits the use of certain permissions on one or more systems. This category includes the following subcategories:<ul><li>[Audit Non-Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-non-sensitive-privilege-use)</li><li>[Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)</li><li>[Audit Other Privilege Use Events](/windows/security/threat-protection/auditing/audit-other-privilege-use-events)</li></ul>|
-|System| Audits system-level changes to a computer not included in other categories and that have potential security implications. This category includes the following subcategories:<ul><li>[Audit IPsec Driver](/windows/security/threat-protection/auditing/audit-ipsec-driver)</li><li>[Audit Other System Events](/windows/security/threat-protection/auditing/audit-other-system-events)</li><li>[Audit Security State Change](/windows/security/threat-protection/auditing/audit-security-state-change)</li><li>[Audit Security System Extension](/windows/security/threat-protection/auditing/audit-security-system-extension)</li><li>[Audit System Integrity](/windows/security/threat-protection/auditing/audit-system-integrity)</li></ul>|
+| Account Logon|Audits attempts to authenticate account data on a domain controller or on a local Security Accounts Manager (SAM).<br>-Logon and Logoff policy settings and events track attempts to access a particular computer. Settings and events in this category focus on the account database that is used. This category includes the following subcategories:<br>-[Audit Credential Validation](/windows/security/threat-protection/auditing/audit-credential-validation)<br>-[Audit Kerberos Authentication Service](/windows/security/threat-protection/auditing/audit-kerberos-authentication-service)<br>-[Audit Kerberos Service Ticket Operations](/windows/security/threat-protection/auditing/audit-kerberos-service-ticket-operations)<br>-[Audit Other Logon/Logoff Events](/windows/security/threat-protection/auditing/audit-other-logonlogoff-events)|
+| Account Management|Audits changes to user and computer accounts and groups. This category includes the following subcategories:<br>-[Audit Application Group Management](/windows/security/threat-protection/auditing/audit-application-group-management)<br>-[Audit Computer Account Management](/windows/security/threat-protection/auditing/audit-computer-account-management)<br>-[Audit Distribution Group Management](/windows/security/threat-protection/auditing/audit-distribution-group-management)<br>-[Audit Other Account Management](/windows/security/threat-protection/auditing/audit-other-account-management-events)<br>-[Audit Security Group Management](/windows/security/threat-protection/auditing/audit-security-group-management)<br>-[Audit User Account Management](/windows/security/threat-protection/auditing/audit-user-account-management)|
+| DNS Server|Audits changes to DNS environments. This category includes the following subcategories: <br>- [DNSServerAuditsDynamicUpdates (preview)](https://learn.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn800669(v=ws.11)#audit-and-analytic-event-logging)<br>- [DNSServerAuditsGeneral (preview)](https://learn.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn800669(v=ws.11)#audit-and-analytic-event-logging)|
+| Detail Tracking|Audits the activities of individual applications and users on a computer to understand how it's being used. This category includes the following subcategories:<br>-[Audit DPAPI Activity](/windows/security/threat-protection/auditing/audit-dpapi-activity)<br>-[Audit PNP activity](/windows/security/threat-protection/auditing/audit-pnp-activity)<br>-[Audit Process Creation](/windows/security/threat-protection/auditing/audit-process-creation)<br>-[Audit Process Termination](/windows/security/threat-protection/auditing/audit-process-termination)<br>-[Audit RPC Events](/windows/security/threat-protection/auditing/audit-rpc-events)|
+| Directory Services Access|Audits attempts to access and modify objects in Active Directory Domain Services (AD DS). These audit events are logged only on domain controllers. This category includes the following subcategories:<br>-[Audit Detailed Directory Service Replication](/windows/security/threat-protection/auditing/audit-detailed-directory-service-replication)<br>-[Audit Directory Service Access](/windows/security/threat-protection/auditing/audit-directory-service-access)<br>-[Audit Directory Service Changes](/windows/security/threat-protection/auditing/audit-directory-service-changes)<br>-[Audit Directory Service Replication](/windows/security/threat-protection/auditing/audit-directory-service-replication)|
+| Logon-Logoff|Audits attempts to log on to a computer interactively or over a network. These events are useful for tracking user activity and identifying potential attacks on network resources. This category includes the following subcategories:<br>-[Audit Account Lockout](/windows/security/threat-protection/auditing/audit-account-lockout)<br>-[Audit User/Device Claims](/windows/security/threat-protection/auditing/audit-user-device-claims)<br>-[Audit IPsec Extended Mode](/windows/security/threat-protection/auditing/audit-ipsec-extended-mode)<br>-[Audit Group Membership](/windows/security/threat-protection/auditing/audit-group-membership)<br>-[Audit IPsec Main Mode](/windows/security/threat-protection/auditing/audit-ipsec-main-mode)<br>-[Audit IPsec Quick Mode](/windows/security/threat-protection/auditing/audit-ipsec-quick-mode)<br>-[Audit Logoff](/windows/security/threat-protection/auditing/audit-logoff)<br>-[Audit Logon](/windows/security/threat-protection/auditing/audit-logon)<br>-[Audit Network Policy Server](/windows/security/threat-protection/auditing/audit-network-policy-server)<br>-[Audit Other Logon/Logoff Events](/windows/security/threat-protection/auditing/audit-other-logonlogoff-events)<br>-[Audit Special Logon](/windows/security/threat-protection/auditing/audit-special-logon)|
+|Object Access| Audits attempts to access specific objects or types of objects on a network or computer. This category includes the following subcategories:<br>-[Audit Application Generated](/windows/security/threat-protection/auditing/audit-application-generated)<br>-[Audit Certification Services](/windows/security/threat-protection/auditing/audit-certification-services)<br>-[Audit Detailed File Share](/windows/security/threat-protection/auditing/audit-detailed-file-share)<br>-[Audit File Share](/windows/security/threat-protection/auditing/audit-file-share)<br>-[Audit File System](/windows/security/threat-protection/auditing/audit-file-system)<br>-[Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection)<br>-[Audit Filtering Platform Packet Drop](/windows/security/threat-protection/auditing/audit-filtering-platform-packet-drop)<br>-[Audit Handle Manipulation](/windows/security/threat-protection/auditing/audit-handle-manipulation)<br>-[Audit Kernel Object](/windows/security/threat-protection/auditing/audit-kernel-object)<br>-[Audit Other Object Access Events](/windows/security/threat-protection/auditing/audit-other-object-access-events)<br>-[Audit Registry](/windows/security/threat-protection/auditing/audit-registry)<br>-[Audit Removable Storage](/windows/security/threat-protection/auditing/audit-removable-storage)<br>-[Audit SAM](/windows/security/threat-protection/auditing/audit-sam)<br>-[Audit Central Access Policy Staging](/windows/security/threat-protection/auditing/audit-central-access-policy-staging)|
+|Policy Change|Audits changes to important security policies on a local system or network. Policies are typically established by administrators to help secure network resources. Monitoring changes or attempts to change these policies can be an important aspect of security management for a network. This category includes the following subcategories:<br>-[Audit Audit Policy Change](/windows/security/threat-protection/auditing/audit-audit-policy-change)<br>-[Audit Authentication Policy Change](/windows/security/threat-protection/auditing/audit-authentication-policy-change)<br>-[Audit Authorization Policy Change](/windows/security/threat-protection/auditing/audit-authorization-policy-change)<br>-[Audit Filtering Platform Policy Change](/windows/security/threat-protection/auditing/audit-filtering-platform-policy-change)<br>-[Audit MPSSVC Rule-Level Policy Change](/windows/security/threat-protection/auditing/audit-mpssvc-rule-level-policy-change)<br>-[Audit Other Policy Change](/windows/security/threat-protection/auditing/audit-other-policy-change-events)|
+|Privilege Use| Audits the use of certain permissions on one or more systems. This category includes the following subcategories:<br>-[Audit Non-Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-non-sensitive-privilege-use)<br>-[Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)<br>-[Audit Other Privilege Use Events](/windows/security/threat-protection/auditing/audit-other-privilege-use-events)|
+|System| Audits system-level changes to a computer not included in other categories and that have potential security implications. This category includes the following subcategories:<br>-[Audit IPsec Driver](/windows/security/threat-protection/auditing/audit-ipsec-driver)<br>-[Audit Other System Events](/windows/security/threat-protection/auditing/audit-other-system-events)<br>-[Audit Security State Change](/windows/security/threat-protection/auditing/audit-security-state-change)<br>-[Audit Security System Extension](/windows/security/threat-protection/auditing/audit-security-system-extension)<br>-[Audit System Integrity](/windows/security/threat-protection/auditing/audit-system-integrity)|
+ ## Event IDs per category
- Azure AD DS security audits record the following event IDs when the specific action triggers an auditable event:
+ Azure AD DS security and DNS audits record the following event IDs when the specific action triggers an auditable event:
| Event Category Name | Event IDs |
|:|:|
|Account Logon security|4767, 4774, 4775, 4776, 4777|
|Account Management security|4720, 4722, 4723, 4724, 4725, 4726, 4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4737, 4738, 4740, 4741, 4742, 4743, 4754, 4755, 4756, 4757, 4758, 4764, 4765, 4766, 4780, 4781, 4782, 4793, 4798, 4799, 5376, 5377|
|Detail Tracking security|None|
+|DNS Server |513-523, 525-531, 533-537, 540-582|
|DS Access security|5136, 5137, 5138, 5139, 5141|
|Logon-Logoff security|4624, 4625, 4634, 4647, 4648, 4672, 4675, 4964|
|Object Access security|None|
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
This article shows you how to create a managed domain using an Azure Resource Ma
To complete this article, you need the following resources:

* Install and configure Azure PowerShell.
- * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
+ * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-azure-powershell).
  * Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
* Install and configure Azure AD PowerShell.
  * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Previously updated : 03/22/2023 Last updated : 04/10/2023
Most methods also have configuration parameters to more precisely control how th
Or let's say you want to enable passwordless authentication with Microsoft Authenticator. You can set extra parameters like showing the user sign-in location or the name of the app being signed into. These options provide more context for users when they sign in and help prevent accidental MFA approvals.
-To manage the Authentication methods policy, click **Security** > **Authentication methods** > **Policies**.
+To manage the Authentication methods policy in the Azure AD portal, click **Security** > **Authentication methods** > **Policies**.
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/authentication-methods-policy.png" alt-text="Screenshot of Authentication methods policy.":::
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-data-residency.md
For Microsoft Azure Government, Microsoft Azure operated by 21Vianet, Azure AD B
If you use MFA Server, the following personal data is stored. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
| Event type | Data store type | |--|--|
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 05/08/2023 Last updated : 05/10/2023
No, users can't opt out of number matching in Authenticator push notifications.
Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Authenticator push notifications in advance.
-### Does number matching only apply if Authenticator is set as the default authentication method?
+### Does number matching only apply if Authenticator push notifications are set as the default authentication method?
-If the user has a different default authentication method, there's no change to their default sign-in. If the default method is Authenticator, they get number matching.
+Yes. If the user has a different default authentication method, there's no change to their default sign-in. If the default method is Authenticator push notifications, they get number matching. If the default method is anything else, such as TOTP in Authenticator or another provider, there's no change.
Regardless of their default method, any user who is prompted to sign in with Authenticator push notifications sees number matching. If prompted for another method, they won't see any change.
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
To find the group SID, use the following command, with your group name
`Get-ADGroup "GroupName"`
-![Image of screen shot showing the results of the Get-ADGroup script.](./media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/find-the-sid.png)
+![Image of screen shot showing the results of the Get-ADGroup script.](./media/how-to-migrate-mfa-server-to-mfa-user-authentication/find-the-sid.png)
#### Setting the claims rules to call Azure AD MFA
For step-by-step directions on this process, see [Configure the AD FS servers](/
Once you've configured the servers, you can add Azure AD MFA as an additional authentication method.
-![Screen shot showing the Edit authentication methods screen with Azure AD MFA and Azure Mutli-factor authentication Server selected](./media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/edit-authentication-methods.png)
+![Screenshot showing the Edit authentication methods screen with Azure AD MFA and Azure Multi-Factor Authentication Server selected](./media/how-to-migrate-mfa-server-to-mfa-user-authentication/edit-authentication-methods.png)
## Prepare Azure AD and implement migration
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
There are multiple possible end states to your migration, depending on your goal
|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. | |Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. |
-If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md).
+If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-mfa-user-authentication.md).
If you can't move your user authentication, see the step-by-step guidance for [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md).
MIM can't be configured to use Azure AD Multi-Factor Authentication.
We recommend you evaluate moving your SSPR service to Azure AD SSPR. You can use the opportunity of users registering for Azure AD Multi-Factor Authentication to use the combined registration experience to register for Azure AD SSPR.
-If you can't move your SSPR service, or you leverage MFA Server to invoke MFA requests for Privileged Access Management (PAM) scenarios, we recommend you update to an [alternate 3rd party MFA option](https://learn.microsoft.com/microsoft-identity-manager/working-with-custommfaserver-for-mim).
+If you can't move your SSPR service, or you leverage MFA Server to invoke MFA requests for Privileged Access Management (PAM) scenarios, we recommend you update to an [alternate 3rd party MFA option](/microsoft-identity-manager/working-with-custommfaserver-for-mim).
### RADIUS clients and Azure AD Multi-Factor Authentication
Others might include:
## Next steps - [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md)-- [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md)
+- [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-mfa-user-authentication.md)
- [How to use the MFA Server Migration Utility](how-to-mfa-server-migration-utility.md)
active-directory How To Migrate Mfa Server To Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-user-authentication.md
+
+ Title: Migrate to Azure AD MFA and Azure AD user authentication
+description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA and Azure AD user authentication
+Last updated: 01/29/2023
+# Migrate to Azure AD MFA and Azure AD user authentication
+
+Multi-factor authentication (MFA) helps secure your infrastructure and assets from bad actors.
+Microsoft's Multi-Factor Authentication Server (MFA Server) is no longer offered for new deployments.
+Customers who are using MFA Server should move to Azure AD Multi-Factor Authentication (Azure AD MFA).
+
+There are several options for migrating from MFA Server to Azure Active Directory (Azure AD):
+
+* Good: Moving only your [MFA service to Azure AD](how-to-migrate-mfa-server-to-azure-mfa.md).
+* Better: Moving your MFA service and user authentication to Azure AD, covered in this article.
+* Best: Moving all of your applications, your MFA service, and user authentication to Azure AD. If you plan to move applications, see the section on moving application authentication at the end of this article.
+
+To select the appropriate MFA migration option for your organization, see the considerations in [Migrate from MFA Server to Azure Active Directory MFA](how-to-migrate-mfa-server-to-azure-mfa.md).
+
+The following diagram shows the process for migrating to Azure AD MFA and cloud authentication while keeping some of your applications on AD FS.
+This process enables the iterative migration of users from MFA Server to Azure AD MFA based on group membership.
+
+Each step is explained in the subsequent sections of this article.
+
+>[!NOTE]
+>If you're planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article.
+
+## Process to migrate to Azure AD and user authentication
+
+![Process to migrate to Azure AD and user authentication.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/mfa-cloud-authentication-flow.png)
+
+## Prepare groups and Conditional Access
+
+Groups are used in three capacities for MFA migration.
+
+* **To iteratively move users to Azure AD MFA with Staged Rollout.**
+
+ Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for Conditional Access policies.
+
+ >[!IMPORTANT]
+ >Nested and dynamic groups aren't supported for Staged Rollout. Don't use these types of groups.
+
+* **Conditional Access policies**.
+ You can use either Azure AD or on-premises groups for Conditional Access.
+
+* **To invoke Azure AD MFA for AD FS applications with claims rules.**
+ This step applies only if you use applications with AD FS.
+
+ You must use an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. For example, you can call Azure AD MFA for users you already migrated, and MFA Server for users who aren't migrated yet. This strategy is helpful both in testing and during migration.
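  As a sketch (the group name and OU path are placeholders, not from this article), such an on-premises security group could be created with the ActiveDirectory PowerShell module:

  ```powershell
  # Sketch: create an on-premises security group for users who should be
  # routed to Azure AD MFA by the claims rules. Name and path are placeholders.
  New-ADGroup -Name "Azure-MFA-Migrated-Users" `
      -GroupScope Global `
      -GroupCategory Security `
      -Path "OU=Groups,DC=contoso,DC=com"
  ```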
+
+>[!NOTE]
+>We don't recommend that you reuse groups that are used for security. Only use the security group to secure a group of high-value apps with a Conditional Access policy.
+
+### Configure Conditional Access policies
+
+If you're already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies.
+As users are migrated to cloud authentication, they'll start using Azure AD MFA as defined by your existing Conditional Access policies.
+They won't be redirected to AD FS and MFA Server anymore.
+
+If your federated domains have the **federatedIdpMfaBehavior** set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you're likely enforcing MFA on AD FS by using claims rules.
+In this case, you'll need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
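+One way to check these settings (a sketch; `contoso.com` is a placeholder domain, and the cmdlets come from the MSOnline and Microsoft Graph PowerShell modules):
+
+```powershell
+# Sketch: inspect the current federation MFA settings for a domain.
+# Replace contoso.com with your federated domain name.
+Get-MsolDomainFederationSettings -DomainName contoso.com |
+    Select-Object SupportsMfa
+
+# federatedIdpMfaBehavior is exposed through Microsoft Graph PowerShell:
+Get-MgDomainFederationConfiguration -DomainId contoso.com |
+    Select-Object FederatedIdpMfaBehavior
+```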
+
+If necessary, configure Conditional Access policies before you enable Staged Rollout.
+For more information, see the following resources:
+
+* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+* [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+
+## Prepare AD FS
+
+If you don't have any applications in AD FS that require MFA, you can skip this section and go to the section [Prepare Staged Rollout](#prepare-staged-rollout).
+
+### Upgrade AD FS server farm to 2019, FBL 4
+
+In AD FS 2019, Microsoft released new functionality to help specify additional authentication methods for a relying party, such as an application.
+You can specify an additional authentication method by using group membership to determine the authentication provider.
+By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition.
+
+For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server).
+The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL to 4.
+
+### Configure claims rules to invoke Azure AD MFA
+
+Now that Azure AD MFA is an additional authentication method, you can assign groups of users to use Azure AD MFA by configuring claims rules, also known as *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who registered for combined security information or had their phone numbers migrated, while calling MFA Server for users whose phone numbers haven't migrated.
+
+>[!NOTE]
+>Claims rules require an on-premises security group.
+
+#### Back up existing rules
+
+Before configuring new claims rules, back up your existing rules.
+You'll need to restore claims rules as a part of your cleanup steps.
+
+Depending on your configuration, you may also need to copy the existing rule and append the new rules being created for the migration.
+
+To view existing global rules, run:
+
+```powershell
+Get-AdfsAdditionalAuthenticationRule
+```
+
+To view existing relying party trusts, run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+#### Access control policies
+
+>[!NOTE]
+>Access control policies can't be configured so that a specific authentication provider is invoked based on group membership.
+
+To transition from your access control policies to additional authentication rules, run this command for each of your Relying Party Trusts using the MFA Server authentication provider:
+
+```powershell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AccessControlPolicyName $Null
+```
+
+This command will move the logic from your current Access Control Policy into Additional Authentication Rules.
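+One way to verify the change, using the `AppA` relying party trust from the example above (a sketch):
+
+```powershell
+# Sketch: confirm the access control policy was cleared and the logic
+# now appears as additional authentication rules.
+(Get-AdfsRelyingPartyTrust -Name "AppA").AccessControlPolicyName        # should now be empty
+(Get-AdfsRelyingPartyTrust -Name "AppA").AdditionalAuthenticationRules
+```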
+
+#### Set up the group, and find the SID
+
+You'll need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. You'll need to find the security identifier (SID) for that group.
+To find the group SID, run the following command and replace `GroupName` with your group name:
+
+```powershell
+Get-ADGroup GroupName
+```
+
+![PowerShell command to get the group SID.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/find-the-sid.png)
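+To capture just the SID value for use in the claims rules that follow, you can store it in a variable (a sketch; replace `GroupName` with your group name):
+
+```powershell
+# Sketch: store the group SID in a variable for use in the claims rules.
+$groupSid = (Get-ADGroup -Identity "GroupName").SID.Value
+$groupSid
+```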
+
+#### Setting the claims rules to call Azure AD MFA
+
+The following PowerShell cmdlets invoke Azure AD MFA for users in the group when they aren't on the corporate network.
+You must replace `"YourGroupSID"` with the SID found by running the preceding cmdlet.
+
+Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server#how-to-choose-additional-auth-providers-in-2019).
+
+>[!IMPORTANT]
+>Back up your existing claims rules before proceeding.
+
+##### Set global claims rule
+
+Run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+The command returns your current additional authentication rules for your relying party trust.
+You need to append the following rules to your current claim rules:
+
+```console
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSID"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");
+```
+
+The following example assumes your current claim rules are configured to prompt for MFA when users connect from outside your network.
+This example includes the additional rules that you need to append.
+
+```PowerShell
+Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSID"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+```
+
+##### Set per-application claims rule
+
+This example modifies claim rules on a specific relying party trust (application). It includes the additional rules you need to append.
+
+```PowerShell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSID"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+```
+
+### Configure Azure AD MFA as an authentication provider in AD FS
+
+In order to configure Azure AD MFA for AD FS, you must configure each AD FS server.
+If multiple AD FS servers are in your farm, you can configure them remotely using Azure AD PowerShell.
+
+For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa#configure-the-ad-fs-servers).
+
+After you configure the servers, you can add Azure AD MFA as an additional authentication method.
+
+![Screenshot of how to add Azure AD MFA as an additional authentication method.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/edit-authentication-methods.png)
+
+## Prepare Staged Rollout
+
+Now you're ready to enable [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged Rollout helps you to iteratively move your users to either PHS or PTA while also migrating their on-premises MFA settings.
+
+* Be sure to review the [supported scenarios](../hybrid/how-to-connect-staged-rollout.md#supported-scenarios).
+* First, you'll need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS.
+* Next, you'll do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso).
+* [Enable the Staged Rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method.
+* Add the group(s) you created for Staged Rollout. Remember that you'll add users to groups iteratively, and that they can't be dynamic groups or nested groups.
+
+## Register users for Azure AD MFA
+
+This section covers how users can register for combined security (MFA and self-service password reset) and how to migrate their MFA settings. Microsoft Authenticator can be used in passwordless mode. It can also be used as a second factor for MFA with either registration method.
+
+### Register for combined security registration (recommended)
+
+We recommend having your users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR.
+
+Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process.
+These include templates for email, posters, table tents, and various other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen.
+
+We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location. For information on tracking registration statuses, see [Authentication method activity for Azure Active Directory](howto-authentication-methods-activity.md).
+> [!NOTE]
> Users who must register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass, or can be temporarily excluded from the policy.
+
+### Migrate MFA settings from MFA Server
+
+You can use the [MFA Server Migration utility](how-to-mfa-server-migration-utility.md) to synchronize registered MFA settings for users from MFA Server to Azure AD.
+You can synchronize phone numbers, hardware tokens, and device registrations such as Microsoft Authenticator app settings.
+
+### Add users to the appropriate groups
+
+* If you created new conditional access policies, add the appropriate users to those groups.
+* If you created on-premises security groups for claims rules, add the appropriate users to those groups.
+* Only after you add users to the appropriate conditional access rules, add users to the group that you created for Staged Rollout. Once done, they'll begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they're required to perform MFA.
+
+> [!IMPORTANT]
+> Nested and dynamic groups aren't supported for Staged Rollout. Do not use these types of groups.
+
+We don't recommend that you reuse groups that are used for security. If you're using a security group to secure a group of high-value apps with a Conditional Access policy, only use the group for that purpose.
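+As a sketch (all names and IDs are placeholders, not from this article), adding a pilot user to both kinds of group could look like this, using the ActiveDirectory and AzureAD PowerShell modules:
+
+```powershell
+# Sketch: add a pilot user to the on-premises group used by the claims
+# rules, and to the cloud-only group used for Staged Rollout.
+Add-ADGroupMember -Identity "Azure-MFA-Migrated-Users" -Members "pilotuser1"
+Add-AzureADGroupMember -ObjectId $stagedRolloutGroupObjectId -RefObjectId $pilotUserObjectId
+```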
+
+## Monitoring
+
+Many [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and **Usage & Insights** reports are available to monitor your deployment.
+These reports can be found in Azure AD in the navigation pane under **Monitoring**.
+
+### Monitoring Staged Rollout
+
+In the [workbooks](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) section, select **Public Templates**. Under the **Hybrid Auth** section, select the **Groups, Users and Sign-ins in Staged Rollout** workbook.
+
+This workbook can be used to monitor the following activities:
+* Users and groups added to Staged Rollout.
+* Users and groups removed from Staged Rollout.
+* Sign-in failures for users in Staged Rollout, and the reasons for failures.
+
+### Monitoring Azure AD MFA registration
+Azure AD MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AuthMethodsActivity/menuId/AuthMethodsActivity). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**.
+
+![Screenshot of how to find the Usage and Insights report.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/usage-report.png)
+
+In Usage & insights, select **Authentication methods**.
+
+Detailed Azure AD MFA registration information can be found on the Registration tab. You can drill down to view a list of registered users by selecting the **Users registered for Azure multi-factor authentication** hyperlink.
+
+![Screenshot of the Registration tab.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/registration-tab.png)
+
+### Monitoring app sign-in health
+
+Monitor applications you moved to Azure AD with the App sign-in health workbook or the application activity usage report.
+
+* **App sign-in health workbook**. See [Monitoring application sign-in health for resilience](../fundamentals/monitor-sign-in-health-for-resilience.md) for detailed guidance on using this workbook.
+* **Azure AD application activity usage report**. This [report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsageAndInsightsMenuBlade/Azure%20AD%20application%20activity) can be used to view the successful and failed sign-ins for individual applications, and to drill down and view sign-in activity for a specific application.
+
+## Clean up tasks
+
+After you move all users to Azure AD cloud authentication and Azure AD MFA, you're ready to decommission your MFA Server.
+We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server.
+
+### Convert your domains to managed authentication
+
+You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the Staged Rollout configuration.
+This conversion ensures new users use cloud authentication without being added to the migration groups.
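+For example, with the MSOnline PowerShell module (a sketch; `contoso.com` is a placeholder domain name):
+
+```powershell
+# Sketch: convert a federated domain to managed (cloud) authentication.
+Set-MsolDomainAuthentication -DomainName contoso.com -Authentication Managed
+```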
+
+### Revert claims rules on AD FS and remove MFA Server authentication provider
+
+Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert the claims rules and remove any AzureMfaServerAuthentication claims rules.
+
+For example, remove the following section from the rule(s):
+
+```console
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSID"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");
+```
+
+### Disable MFA Server as an authentication provider in AD FS
+
+This change ensures only Azure AD MFA is used as an authentication provider.
+
+1. Open the **AD FS management console**.
+1. Under **Services**, right-click on **Authentication Methods**, and select **Edit Multi-factor Authentication Methods**.
+1. Clear the **Azure Multi-Factor Authentication Server** checkbox.
+
+### Decommission the MFA Server
+
+Follow your enterprise server decommissioning process to remove the MFA Servers in your environment.
+
+Possible considerations when decommissioning the MFA Server include:
+
+* We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server.
+* Uninstall Multi-Factor Authentication Server from the Control Panel on the server.
+* Optionally clean up logs and data directories that are left behind after backing them up first.
+* Uninstall the Multi-Factor Authentication Web Service SDK, if applicable, including any files left in the inetpub\wwwroot\MultiFactorAuthWebServiceSdk and MultiFactorAuth directories.
+* For pre-8.0.x versions of MFA Server, it may also be necessary to remove the Multi-Factor Auth Phone App Web Service.
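+
+The optional backup of leftover logs and data directories can be sketched in PowerShell. The directory paths shown are assumptions for a default installation; confirm the actual locations on your server before relying on this.
+
+```powershell
+# Assumed default MFA Server directories; verify these paths on your installation.
+$paths = @(
+    "C:\Program Files\Multi-Factor Authentication Server\Data",
+    "C:\Program Files\Multi-Factor Authentication Server\Logs"
+)
+
+New-Item -ItemType Directory -Force -Path "C:\Backup" | Out-Null
+
+# Archive each directory that exists before cleaning it up.
+foreach ($path in $paths) {
+    if (Test-Path $path) {
+        $name = Split-Path $path -Leaf
+        Compress-Archive -Path $path -DestinationPath "C:\Backup\MfaServer-$name.zip" -Force
+    }
+}
+```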
+
+## Move application authentication to Azure Active Directory
+
+If you migrate all your application authentication along with your MFA and user authentication, you'll be able to remove significant portions of your on-premises infrastructure, reducing costs and risks.
+If you move all application authentication, you can skip the [Prepare AD FS](#prepare-ad-fs) stage and simplify your MFA migration.
+
+The process for moving all application authentication is shown in the following diagram.
+
+![Process to migrate applications to Azure AD MFA.](media/how-to-migrate-mfa-server-to-mfa-user-authentication/mfa-app-migration-flow.png)
+
+If you can't move all your applications before the migration, move as many as possible before you start.
+For more information about migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md).
+
+## Next steps
+
+- [Migrate from Microsoft MFA Server to Azure AD MFA (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)
+- [Migrate applications from Windows Active Directory to Azure Active Directory](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
+- [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md)
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
This method can also be used for easy recovery when the user has lost or forgott
**MFA server** - End users enabled for multi-factor authentication through an organization's on-premises MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their usersΓÇÖ authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
**Device registration** - To use the Authenticator app for passwordless authentication, the device must be registered in the Azure AD tenant and can't be a shared device. A device can only be registered in a single tenant. This limit means that only one work or school account is supported for phone sign-in using the Authenticator app.
The [Registration tab](https://portal.azure.com/) shows the number of users capa
![Registration tab to view auth methods](media/howto-authentication-passwordless-deployment/monitoring-registration-tab.png)
-The [Usage tab ](https://portal.azure.com/)shows the sign-ins by authentication method.
+The [Usage tab](https://portal.azure.com/) shows the sign-ins by authentication method.
![Usage tab to view auth methods](media/howto-authentication-passwordless-deployment/monitoring-usage-tab.png)
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
Previously updated : 03/28/2023 Last updated : 04/10/2023
The NPS extension acts as an adapter between RADIUS and cloud-based Azure AD Mul
When you use the NPS extension for Azure AD Multi-Factor Authentication, the authentication flow includes the following components: 1. **NAS/VPN Server** receives requests from VPN clients and converts them into RADIUS requests to NPS servers.
-2. **NPS Server** connects to Active Directory Domain Services (AD DS) to perform the primary authentication for the RADIUS requests and, upon success, passes the request to any installed extensions.
-3. **NPS Extension** triggers a request to Azure AD Multi-Factor Authentication for the secondary authentication. Once the extension receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim, issued by Azure STS.
+1. **NPS Server** connects to Active Directory Domain Services (AD DS) to perform the primary authentication for the RADIUS requests and, upon success, passes the request to any installed extensions.
+1. **NPS Extension** triggers a request to Azure AD Multi-Factor Authentication for the secondary authentication. Once the extension receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim, issued by Azure STS.
>[!NOTE]
- >Users must have access to their default authentication method to complete the MFA requirement. They cannot choose an alternative method. Their default authentication method will be used even if it's been disabled in the tenant authentication methods and MFA policies.
+ >Although NPS doesn't support [number matching](how-to-mfa-number-match.md), the latest NPS extension does support time-based one-time password (TOTP) methods, such as the TOTP available in Microsoft Authenticator. TOTP sign-in provides better security than the alternative **Approve**/**Deny** experience.
+ >
+ >After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with a TOTP method instead. Users must have a TOTP authentication method registered to see this behavior. Without a TOTP method registered, users continue to see **Approve**/**Deny**.
+ 1. **Azure AD MFA** communicates with Azure Active Directory (Azure AD) to retrieve the user's details and performs the secondary authentication using a verification method configured for the user. The following diagram illustrates this high-level authentication request flow:
active-directory Howto Mfa Server Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-server-settings.md
This article helps you to manage Azure MFA Server settings in the Azure portal. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
The following MFA Server settings are available:
active-directory Howto Mfaserver Adfs 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-2.md
This article is for organizations that are federated with Azure Active Directory
This documentation covers using the Azure Multi-Factor Authentication Server with AD FS 2.0. For information about AD FS, see [Securing cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server](howto-mfaserver-adfs-windows-server.md). > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Adfs Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-windows-server.md
If you use Active Directory Federation Services (AD FS) and want to secure cloud
In this article, we discuss using Azure Multi-Factor Authentication Server with AD FS beginning with Windows Server 2016. For more information, read about how to [secure cloud and on-premises resources by using Azure Multi-Factor Authentication Server with AD FS 2.0](howto-mfaserver-adfs-2.md). > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Deploy Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-ha.md
To achieve high-availability with your Azure Server MFA deployment, you need to deploy multiple MFA servers. This section provides information on a load-balanced design to achieve your high availability targets in your Azure MFS Server deployment. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Deploy Mobileapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-mobileapp.md
The Microsoft Authenticator app offers an extra out-of-band verification option.
Using a mobile app for two-step verification is preferred when phone reception is unreliable. If you use the app as an OATH token generator, it doesn't require any network or internet connection. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Howto Mfaserver Deploy Upgrade Pf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-upgrade-pf.md
To upgrade the PhoneFactor Agent v5.x or older to Azure AD Multi-Factor Authentication Server, uninstall the PhoneFactor Agent and affiliated components first. Then the Multi-Factor Authentication Server and its affiliated components can be installed. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Howto Mfaserver Deploy Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-upgrade.md
This article walks you through the process of upgrading Azure AD Multi-Factor Au
If you're upgrading from v6.x or older to v7.x or newer, all components change from .NET 2.0 to .NET 4.5. All components also require Microsoft Visual C++ 2015 Redistributable Update 1 or higher. The MFA Server installer installs both the x86 and x64 versions of these components if they aren't already installed. If the User Portal and Mobile App Web Service run on separate servers, you need to install those packages before upgrading those components. You can search for the latest Microsoft Visual C++ 2015 Redistributable update on the [Microsoft Download Center](https://www.microsoft.com/download/). > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Howto Mfaserver Deploy Userportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-userportal.md
User portal Administrators may be set up and granted permission to add new users
Depending on your environment, you may want to deploy the user portal on the same server as Azure AD Multi-Factor Authentication Server or on another internet-facing server. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
This page covers a new installation of the server and setting it up with on-premises Active Directory. If you already have the MFA server installed and are looking to upgrade, see [Upgrade to the latest Azure Multi-Factor Authentication Server](howto-mfaserver-deploy-upgrade.md). If you're looking for information on installing just the web service, see [Deploying the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md). > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
If you aren't using the Event Confirmation feature, and your users aren't using
Follow these steps to download the Azure AD Multi-Factor Authentication Server from the Azure portal: > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Dir Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ad.md
Use the Directory Integration section of the Azure MFA Server to integrate with Active Directory or another LDAP directory. You can configure attributes to match the directory schema and set up automatic user synchronization. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Dir Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ldap.md
By default, the Azure Multi-Factor Authentication Server is configured to import
To use Azure Multi-Factor Authentication as an LDAP proxy, insert the Azure Multi-Factor Authentication Server between the LDAP client (for example, VPN appliance, application) and the LDAP directory server. The Azure Multi-Factor Authentication Server must be configured to communicate with both the client servers and the LDAP directory. In this configuration, the Azure Multi-Factor Authentication Server accepts LDAP requests from client servers and applications and forwards them to the target LDAP directory server to validate the primary credentials. If the LDAP directory validates the primary credentials, Azure Multi-Factor Authentication performs a second identity verification and sends a response back to the LDAP client. The entire authentication succeeds only if both the LDAP server authentication and the second-step verification succeed. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Howto Mfaserver Dir Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-radius.md
RADIUS is a standard protocol to accept authentication requests and to process those requests. The Azure Multi-Factor Authentication Server can act as a RADIUS server. Insert it between your RADIUS client (VPN appliance) and your authentication target to add two-step verification. Your authentication target could be Active Directory, an LDAP directory, or another RADIUS server. For Azure Multi-Factor Authentication (MFA) to function, you must configure the Azure MFA Server so that it can communicate with both the client servers and the authentication target. The Azure MFA Server accepts requests from a RADIUS client, validates credentials against the authentication target, adds Azure Multi-Factor Authentication, and sends a response back to the RADIUS client. The authentication request only succeeds if both the primary authentication and the Azure Multi-Factor Authentication succeed. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-iis.md
Use the IIS Authentication section of the Azure Multi-Factor Authentication (MFA) Server to enable and configure IIS authentication for integration with Microsoft IIS web applications. The Azure Multi-Factor Authentication Server installs a plug-in that can filter requests being made to the IIS web server to add Azure Multi-Factor Authentication. The IIS plug-in provides support for Form-Based Authentication and Integrated Windows HTTP Authentication. Trusted IPs can also be configured to exempt internal IP addresses from two-factor authentication. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Nps Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-nps-rdg.md
Since Windows Authentication for terminal services is not supported for Server 2
Install the Azure Multi-Factor Authentication Server on a separate server, which proxies the RADIUS request back to the NPS on the Remote Desktop Gateway Server. After NPS validates the username and password, it returns a response to the Multi-Factor Authentication Server. Then, the MFA Server performs the second factor of authentication and returns a result to the gateway. > [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Multi Factor Authentication Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/multi-factor-authentication-wizard.md
+
+ Title: Use the multi-factor authentication in portal guide to configure MFA
+description: Learn how to use the multi-factor authentication (MFA) wizard to deploy MFA for your organization
+ Last updated : 05/09/2023
+# Configure multi-factor authentication using the portal guide
+
+Azure Active Directory (Azure AD) features help you manage and secure your organization. This setup guide helps you get started with Azure's multifactor authentication capabilities. In the following section, we'll briefly describe the setup guide.
+
+## Who is this setup guide for?
+
+This guide provides step-by-step instructions for IT administrators to implement Multi-Factor Authentication (MFA) in their organization. It's designed for administrators who are new to MFA and need guidance on where to begin.
+
+## What to expect and what you need
+
+The setup guides help you configure the core functionality of Azure AD. If you need to set up a more advanced configuration, the setup guide points you to the appropriate location in the Azure AD portal.
+
+### Required permissions
+
+You must be a member of one of the following administrative roles:
+
+- **Global administrator**: allows you to use integrated tools in the setup guides to make changes in your Microsoft 365 organization.
+
+- **Global reader**: allows you to view the setup guides but not make changes in your tenant.
+
+## Configure multi-factor authentication (MFA)
+
+If you're using Azure Active Directory Premium P1 or P2, we guide you through a setup process that's tailored to your needs. Our customized conditional access policies include the most common and least intrusive security standards we recommend. If you're not subscribed to a premium license, we help you keep your accounts secure by enabling security defaults with one click, which gives you a baseline protection policy.
+
+## Next steps
+
+- [Troubleshoot Azure AD Multi-Factor Authentication issues](/troubleshoot/azure/active-directory/troubleshoot-azure-mfa-issue)
+- [Use the sign-ins report to review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md)
active-directory Config Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/config-authority.md
# Configure MSAL for iOS and macOS to use different identity providers
-This article will show you how to configure your Microsoft authentication library app for iOS and macOS (MSAL) for different authorities such as Azure Active Directory (Azure AD), Business-to-Consumer (B2C), sovereign clouds, and guest users. Throughout this article, you can generally think of an authority as an identity provider.
+This article will show you how to configure your Microsoft Authentication Library app for iOS and macOS (MSAL) for different authorities such as Azure Active Directory (Azure AD), Business-to-Consumer (B2C), sovereign clouds, and guest users. Throughout this article, you can generally think of an authority as an identity provider.
## Default authority configuration
-`MSALPublicClientApplication` is configured with a default authority URL of `https://login.microsoftonline.com/common`, which is suitable for most Azure Active Directory (AAD) scenarios. Unless you're implementing advanced scenarios like national clouds, or working with B2C, you won't need to change it.
+`MSALPublicClientApplication` is configured with a default authority URL of `https://login.microsoftonline.com/common`, which is suitable for most Azure AD scenarios. Unless you're implementing advanced scenarios like national clouds, or working with B2C, you won't need to change it.
> [!NOTE] > Modern authentication with Active Directory Federation Services as identity provider (ADFS) is not supported (see [ADFS for Developers](/windows-server/identity/ad-fs/overview/ad-fs-openid-connect-oauth-flows-scenarios) for details). ADFS is supported through federation.
do{
### Sovereign clouds
-If your app runs in a sovereign cloud, you may need to change the authority URL in the `MSALPublicClientApplication`. The following example sets the authority URL to work with the German AAD cloud:
+If your app runs in a sovereign cloud, you may need to change the authority URL in the `MSALPublicClientApplication`. The following example sets the authority URL to work with the German Azure AD cloud:
Objective-C ```objc
The following are subclasses of `MSALAuthority` that you can instantiate dependi
### MSALAADAuthority
-`MSALAADAuthority` represents an AAD authority. The authority URL should be in the following format, where `<port>` is optional: `https://<host>:<port>/<tenant>`
+`MSALAADAuthority` represents an Azure AD authority. The authority URL should be in the following format, where `<port>` is optional: `https://<host>:<port>/<tenant>`
### MSALB2CAuthority
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
When you have an app or script that needs to access resources, you can set up an
This article shows you how to create a service principal that authenticates with a certificate. To set up a service principal with password, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps).
-You must have the [latest version](/powershell/azure/install-az-ps) of PowerShell for this article.
+You must have the [latest version](/powershell/azure/install-azure-powershell) of PowerShell for this article.
[!INCLUDE [az-powershell-update](../../../includes/updated-for-az.md)]
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the
- If you're using Microsoft Intune to manage client side policies, see [Manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy) - If you're using Group Policy Objects (GPO) to manage client side policies, see [Windows LAPS Group Policy](/windows-server/identity/laps/laps-management-policy-settings#windows-laps-group-policy)
-## Recovering local administrator password
+## Recovering local administrator password and password metadata
-To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read.All* permission, and you must be assigned one of the following roles:
+To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read.All* permission.
-- [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator)-- [Intune Service Administrator](../roles/permissions-reference.md#intune-administrator)-- [Global Administrator](../roles/permissions-reference.md#global-administrator)
+To view the local administrator password metadata for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read* permission.
+
+The following built-in roles are granted these permissions by default:
+
+|Built-in role|DeviceLocalCredential.Read.All|DeviceLocalCredential.Read|
+||||
+|[Global Administrator](../roles/permissions-reference.md#global-administrator)|Yes|Yes|
+|[Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator)|Yes|Yes|
+|[Intune Service Administrator](../roles/permissions-reference.md#intune-administrator)|Yes|Yes|
+|[Global Reader](../roles/permissions-reference.md#global-reader)|No|Yes|
+|[Helpdesk Administrator](../roles/permissions-reference.md#helpdesk-administrator)|No|Yes|
+|[Security Administrator](../roles/permissions-reference.md#security-administrator)|No|Yes|
+|[Security Reader](../roles/permissions-reference.md#security-reader)|No|Yes|
+
+Roles that aren't listed in this table are granted neither permission by default.
You can also use the Microsoft Graph API [Get deviceLocalCredentialInfo](/graph/api/devicelocalcredentialinfo-get?view=graph-rest-beta&preserve-view=true) to recover the local administrator password. If you use the Microsoft Graph API, the password is returned as a Base64-encoded value that you need to decode before using it.
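The decoding step above is plain Base64. As a minimal sketch (in Python here for illustration; the `encoded` sample value is hypothetical, not an actual Graph API response):

```python
import base64

def decode_laps_password(encoded: str) -> str:
    """Decode a Base64-encoded local administrator password string."""
    return base64.b64decode(encoded).decode("utf-8")

# Hypothetical Base64 value standing in for the password field of the
# Graph API response; a real deployment would read it from the API result.
encoded = "UEBzc3cwcmQh"
print(decode_laps_password(encoded))  # -> P@ssw0rd!
```

The same decoding can be done in PowerShell with `[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($encoded))`.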
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 05/04/2023 Last updated : 05/11/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on May 4th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on May 11th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) | | Microsoft Viva Suite | VIVA | 61902246-d7cb-453e-85cd-53ee28eec138 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>VIVAENGAGE_COMMUNITIES_AND_COMMUNICATIONS (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>VIVAENGAGE_KNOWLEDGE (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva_Goals_Premium (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>VIVA_LEARNING_PREMIUM (7162bd38-edae-4022-83a7-c5837f951759) | Graph Connectors Search with Index (Microsoft Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Microsoft Viva Insights (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>Microsoft Viva Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>Viva Engage Communities and Communications (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>Viva Engage Knowledge (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva Goals (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>Viva Learning 
(7162bd38-edae-4022-83a7-c5837f951759) |
+| Minecraft Education Faculty | MEE_FACULTY | 984df360-9a74-4647-8cf8-696749f6247a | MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Minecraft Education (4c246bbc-f513-4311-beff-eba54c353256)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Minecraft Education Student | MEE_STUDENT | 533b8f26-f74b-4e9c-9c59-50fc4b393b63 | MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Minecraft Education (4c246bbc-f513-4311-beff-eba54c353256)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) | | Nonprofit Portal | NONPROFIT_PORTAL | aa2695c9-8d59-4800-9dc8-12e01f1735af | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>NONPROFIT_PORTAL (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Nonprofit Portal (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2)| | Office 365 A1 for Faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 
(76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) 
(c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Project Plan 3 for Faculty | PROJECTPROFESSIONAL_FACULTY | 46974aed-363e-423c-9e6a-951037cec495 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT_EDU (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>PROJECT_PROFESSIONAL_FACULTY (22572403-045f-432b-a660-af949c0a77b5)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service for Education (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>Project P3 for Faculty (22572403-045f-432b-a660-af949c0a77b5)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Power Automate for Project (fa200448-008c-4acb-abd4-ea106ed2199d) | | Project Plan 3 for GCC | PROJECTPROFESSIONAL_GOV | 074c6829-b3a0-430a-ba3d-aca365e57065 | SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government (45c6831b- ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Project Plan 5 for GCC | PROJECTPREMIUM_GOV | f2230877-72be-4fec-b1ba-7156d6f75bd6 | EXCHANGE_S_FOUNDATION_GOV 
(922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) |
+| Project Plan 5 without Project Client for Faculty | PROJECTONLINE_PLAN_1_FACULTY | b732e2a7-5694-4dff-a0f2-9d9204c794ac | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>SHAREPOINT_PROJECT_EDU (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Service for Education (664a2fed-6c7a-468e-af35-d61740f0ec90)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| Rights Management Adhoc | RIGHTSMANAGEMENT_ADHOC | 8c4ce438-32a7-4ac5-91a6-e22ae08d9c8b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ADHOC (7a39d7dd-e456-4e09-842a-0204ee08187b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Rights Management Adhoc (7a39d7dd-e456-4e09-842a-0204ee08187b) | | Rights Management Service Basic Content Protection | RMSBASIC | 093e8d14-a334-43d9-93e3-30589a8b47d0 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | | Sensor Data Intelligence Additional Machines Add-in for Dynamics 365 Supply Chain Management | DYN365_IOT_INTELLIGENCE_ADDL_MACHINES | 08e18479-4483-4f70-8f17-6f92156d8ea9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>D365_IOTFORSCM_ADDITIONAL (a5f38206-2f48-4d83-9957-525f4e75e9c0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>IoT Intelligence Add-in Additional Machines (a5f38206-2f48-4d83-9957-525f4e75e9c0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Visio Plan 1 | VISIO_PLAN1_DEPT | ca7f3140-d88c-455b-9a1c-7f0679e31a76 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for business Basic (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio web app (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | Visio Plan 2 | VISIO_PLAN2_DEPT | 38b434d2-a15e-4cde-9a98-e737c75623e1 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for Business (Basic) (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio Desktop App (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>Visio Web App (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | Visio Online Plan 1 | VISIOONLINE_PLAN1 | 4b244418-9658-4451-a2b8-b5e2b364e9bd | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
+| Visio Plan 2 for Faculty | VISIOCLIENT_FACULTY | bf95fd32-576a-4742-8d7a-6dc4940b9532 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for Business (Basic) (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio Desktop App (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>Visio Web App (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
| Visio Online Plan 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | Visio Plan 2 for GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | | Viva Topics | TOPIC_EXPERIENCES | 4016f256-b063-4864-816e-d818aad600c9 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83) | Graph Connectors Search with Index (Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83) |
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
To enable security defaults in your directory:
1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator. 1. Browse to **Azure Active Directory** > **Properties**. 1. Select **Manage security defaults**.
-1. Set **Security defaults** to **Enabled **.
+1. Set **Security defaults** to **Enabled**.
1. Select **Save**. ![Screenshot of the Azure portal with the toggle to enable security defaults](./media/concept-fundamentals-security-defaults/security-defaults-azure-ad-portal.png)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
With this new experience, PIM now automatically manages any type of resource in
**Service category:** Self Service Password Reset **Product capability:** Identity Security & Protection
-Self Service Password Reset (SSPR) can now PIM eligible users, and evaluate group-based memberships, along with direct memberships when checking if a user is in a particular administrator role. This capability provides more accurate SSPR policy enforcement by validating if users are in scope for the default SSPR admin policy or your organizations SSPR user policy.
+Self Service Password Reset (SSPR) can now check for PIM-eligible users, and evaluate group-based memberships along with direct memberships, when checking if a user is in a particular administrator role. This capability provides more accurate SSPR policy enforcement by validating whether users are in scope for the default SSPR admin policy or your organization's SSPR user policy.
For more information, see:
active-directory Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/apps.md
| : | : | | HR | [SuccessFactors - User Provisioning](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) | | HR | [Workday - User Provisioning](../../active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md)|
-|[LDAP](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md)| OpenLDAP<br>Microsoft Active Directory Lightweight Directory Services<br>389 Directory Server<br>Apache Directory Server<br>IBM Tivoli DS<br>Isode Directory<br>NetIQ eDirectory<br>Novell eDirectory<br>Open DJ<br>Open DS<br>Oracle (previously Sun ONE) Directory Server Enterprise Edition<br>RadiantOne Virtual Directory Server (VDS) |
-| [SQL](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md)| Microsoft SQL Server and Azure SQL<br>IBM DB2 10.x<br>IBM DB2 9.x<br>Oracle 10g and 11g<br>Oracle 12c and 18c<br>MySQL 5.x|
+|[LDAP directory](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md)| OpenLDAP<br>Microsoft Active Directory Lightweight Directory Services<br>389 Directory Server<br>Apache Directory Server<br>IBM Tivoli DS<br>Isode Directory<br>NetIQ eDirectory<br>Novell eDirectory<br>Open DJ<br>Open DS<br>Oracle (previously Sun ONE) Directory Server Enterprise Edition<br>RadiantOne Virtual Directory Server (VDS) |
+| [SQL database](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md)| Microsoft SQL Server and Azure SQL<br>IBM DB2 10.x<br>IBM DB2 9.x<br>Oracle 10g and 11g<br>Oracle 12c and 18c<br>MySQL 5.x|
| Cloud platform| [AWS IAM Identity Center](../../active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md) | | Cloud platform| [Google Cloud Platform - User Provisioning](../../active-directory/saas-apps/g-suite-provisioning-tutorial.md) |
-| Cloud platform|[SAP Cloud Identity Platform - Provisioning](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) |
+| Business applications|[SAP Cloud Identity Platform - Provisioning](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) |
| CRM| [Salesforce - User Provisioning](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md) | | ITSM| [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md)|
-## Entra Identity Governance Integrations
+## Entra Identity Governance integrations
The list below provides key integrations between Entra Identity Governance and various applications, including both provisioning and SSO integrations. For a full list of applications that Microsoft Entra integrates with specifically for SSO, see the [application tutorial list](../../active-directory/saas-apps/tutorial-list.md).
+Microsoft Entra identity governance can be integrated with many other applications, using standards such as OpenID Connect, SAML, SCIM, SQL, and LDAP. If you're using a SaaS application that isn't listed, then [ask the SaaS vendor to onboard](../manage-apps/v2-howto-app-gallery-listing.md). For integration with other applications, see [integrating applications with Azure AD](identity-governance-applications-integrate.md).
+ | Application | Automated provisioning | Single Sign On (SSO)| | : | :-: | :-: |
+| 389 directory server ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [4me](../../active-directory/saas-apps/4me-provisioning-tutorial.md) | ● | ●| | [8x8](../../active-directory/saas-apps/8x8-provisioning-tutorial.md) | ● | ● | | [15five](../../active-directory/saas-apps/15five-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| [Alinto Protect (renamed Cleanmail)](../../active-directory/saas-apps/alinto-protect-provisioning-tutorial.md) | ● | | | [Alvao](../../active-directory/saas-apps/alvao-provisioning-tutorial.md) | ● | | | [Amazon Web Services (AWS) - Role Provisioning](../../active-directory/saas-apps/amazon-web-service-tutorial.md) | ● | ● |
+| Apache Directory Server ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [Appaegis Isolation Access Cloud](../../active-directory/saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md) | ● | ● | | [Apple School Manager](../../active-directory/saas-apps/apple-school-manager-provision-tutorial.md) | ● | | | [Apple Business Manager](../../active-directory/saas-apps/apple-business-manager-provision-tutorial.md) | ● | |
The list below provides key integrations between Entra Identity Governance and v
| [Howspace](../../active-directory/saas-apps/howspace-provisioning-tutorial.md) | ● | | | [H5mag](../../active-directory/saas-apps/h5mag-provisioning-tutorial.md) | ● | | | IBM DB2 ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
-| IBM Tivoli Directory Server ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
+| IBM Tivoli Directory Server ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [Ideo](../../active-directory/saas-apps/ideo-provisioning-tutorial.md) | ● | ● | | [Ideagen Cloud](../../active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md) | ● | | | [Infor CloudSuite](../../active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| [introDus Pre and Onboarding Platform](../../active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md) | ● | | | [Invision](../../active-directory/saas-apps/invision-provisioning-tutorial.md) | ● | ● | | [InviteDesk](../../active-directory/saas-apps/invitedesk-provisioning-tutorial.md) | ● | |
+| Isode directory server ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [Jive](../../active-directory/saas-apps/jive-provisioning-tutorial.md) | ● | ● | | [Jostle](../../active-directory/saas-apps/jostle-provisioning-tutorial.md) | ● | ● | | [Joyn FSM](../../active-directory/saas-apps/joyn-fsm-provisioning-tutorial.md) | ● | |
The list below provides key integrations between Entra Identity Governance and v
| [MerchLogix](../../active-directory/saas-apps/merchlogix-provisioning-tutorial.md) | ● | ● | | [Meta Networks Connector](../../active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md) | ● | ● | | MicroFocus Novell eDirectory ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
+| Microsoft 365 | ● | ● |
+| Microsoft Active Directory Domain Services | | ● |
+| Microsoft Azure | ● | ● |
+| [Microsoft Azure Active Directory Domain Services](../../active-directory-domain-services/synchronization.md) | ● | ● |
+| Microsoft Azure SQL ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
| Microsoft Lightweight Directory Server (ADAM) ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | | | Microsoft SharePoint Server (SharePoint) | ● | | | Microsoft SQL Server ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
The list below provides key integrations between Entra Identity Governance and v
| [Mural Identity](../../active-directory/saas-apps/mural-identity-provisioning-tutorial.md) | ● | ● | | [MX3 Diagnostics](../../active-directory/saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md) | ● | | | [myPolicies](../../active-directory/saas-apps/mypolicies-provisioning-tutorial.md) | ● | ● |
-| Net IQ eDirectory ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
+| MySQL ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
+| NetIQ eDirectory ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [Netpresenter Next](../../active-directory/saas-apps/netpresenter-provisioning-tutorial.md) | ● | | | [Netskope User Authentication](../../active-directory/saas-apps/netskope-administrator-console-provisioning-tutorial.md) | ● | ● | | [Netsparker Enterprise](../../active-directory/saas-apps/netsparker-enterprise-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| Novell eDirectory ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | | | [Office Space Software](../../active-directory/saas-apps/officespace-software-provisioning-tutorial.md) | ● | ● | | [Olfeo SAAS](../../active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md) | ● | ● |
-| [Open LDAP](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) | ● | |
+| Open DJ ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
+| Open DS ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
+| [OpenLDAP](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) | ● | |
| [OpenText Directory Services](../../active-directory/saas-apps/open-text-directory-services-provisioning-tutorial.md) | ● | ● | | [Oracle Cloud Infrastructure Console](../../active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md) | ● | ● |
-| Oracle Database | ● | |
+| Oracle Database ([SQL connector](../../active-directory/app-provisioning/tutorial-ecma-sql-connector.md) ) | ● | |
| Oracle E-Business Suite | ● | ● | | [Oracle Fusion ERP](../../active-directory/saas-apps/oracle-fusion-erp-provisioning-tutorial.md) | ● | ● | | Oracle Internet Directory | ● | | | Oracle PeopleSoft ERP | ● | ● |
-| Oracle SunOne | ● | |
+| Oracle SunONE Directory Server ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [PagerDuty](../../active-directory/saas-apps/pagerduty-tutorial.md) | | ● | | [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../../active-directory/saas-apps/palo-alto-networks-cloud-identity-engine-provisioning-tutorial.md) | ● | ● | | [Palo Alto Networks SCIM Connector](../../active-directory/saas-apps/palo-alto-networks-scim-connector-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| [Proxyclick](../../active-directory/saas-apps/proxyclick-provisioning-tutorial.md) | ● | ● | | [Peakon](../../active-directory/saas-apps/peakon-provisioning-tutorial.md) | ● | ● | | [Proware](../../active-directory/saas-apps/proware-provisioning-tutorial.md) | ● | ● |
+| RadiantOne Virtual Directory Server (VDS) ([LDAP connector](../../active-directory/app-provisioning/on-premises-ldap-connector-configure.md) ) | ● | |
| [Real Links](../../active-directory/saas-apps/real-links-provisioning-tutorial.md) | ● | ● | | [Reward Gateway](../../active-directory/saas-apps/reward-gateway-provisioning-tutorial.md) | ● | ● | | [RFPIO](../../active-directory/saas-apps/rfpio-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md) | ● | ● | | [Salesforce Sandbox](../../active-directory/saas-apps/salesforce-sandbox-provisioning-tutorial.md) | ● | ● | | [Samanage](../../active-directory/saas-apps/samanage-provisioning-tutorial.md) | ● | ● |
+| SAML-based apps | | ● |
| [SAP Analytics Cloud](../../active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md) | ● | ● |
-| [SAP Cloud Platform Identity Authentication](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) | ● | ● |
+| [SAP Cloud Platform](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) | ● | ● |
| SAP R/3 | ● | | | [SAP HANA](../../active-directory/saas-apps/saphana-tutorial.md) | ● | ● | | [SAP SuccessFactors to Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) | ● | ● | | [SAP SuccessFactors to Azure Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) | ● | ● | | [SAP SuccessFactors Writeback ](../../active-directory/saas-apps/sap-successfactors-writeback-tutorial.md) | ● | ● | | [SchoolStream ASA](../../active-directory/saas-apps/schoolstream-asa-provisioning-tutorial.md) | ● | ● |
+| [SCIM-based apps in the cloud](../app-provisioning/use-scim-to-provision-users-and-groups.md) | ● | |
+| [SCIM-based apps on-premises](../app-provisioning/on-premises-scim-provisioning.md) | ● | |
| [Secure Deliver](../../active-directory/saas-apps/secure-deliver-provisioning-tutorial.md) | ● | ● | | [SecureLogin](../../active-directory/saas-apps/secure-login-provisioning-tutorial.md) | ● | | | [Sentry](../../active-directory/saas-apps/sentry-provisioning-tutorial.md) | ● | ● |
The list below provides key integrations between Entra Identity Governance and v
| [Zscaler ZSCloud](../../active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md) | ● | ● |
## Partner driven integrations
-There is also a healthy partner ecosystem, further expanding the breadth and depth of integrations available with Entra Identity Governance. Explore the [partner integrations](../../active-directory/app-provisioning/partner-driven-integrations.md) available, including:
+There is also a healthy partner ecosystem, further expanding the breadth and depth of integrations available with Microsoft Entra Identity Governance. Explore the [partner integrations](../../active-directory/app-provisioning/partner-driven-integrations.md) available, including connectors for:
* Epic * Cerner * IBM RACF
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
To set the role assignment and create a query, do the following steps:
### Install Azure PowerShell module
-Once you have the appropriate role assignment, launch PowerShell, and [install the Azure PowerShell module](/powershell/azure/install-az-ps) (if you haven't already), by typing:
+Once you have the appropriate role assignment, launch PowerShell, and [install the Azure PowerShell module](/powershell/azure/install-azure-powershell) (if you haven't already), by typing:
```azurepowershell
Install-Module -Name Az -AllowClobber -Scope CurrentUser
```
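After installing the module and signing in with `Connect-AzAccount`, a typical next step is to query the Log Analytics workspace that receives the audit logs. The sketch below is illustrative only: the resource group and workspace names are hypothetical placeholders, and it assumes the `Az.OperationalInsights` module is available.

```azurepowershell
# Sign in and locate the Log Analytics workspace (names are hypothetical placeholders).
Connect-AzAccount
$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'myResourceGroup' -Name 'myWorkspace'

# Query entitlement management audit events from the last seven days.
$query = 'AuditLogs | where LoggedByService == "Entitlement Management" | order by TimeGenerated desc'
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $ws.CustomerId -Query $query -Timespan (New-TimeSpan -Days 7)
$results.Results | Format-Table TimeGenerated, OperationName
```

The workspace's `CustomerId` (the workspace GUID) is what the query cmdlet expects, not the workspace name.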
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Alternatively, in BIG-IP you can disable Guided Configuration strict management
![Screenshot of the padlock icon.](./media/f5-big-ip-oracle/strict-mode-padlock.png) >[!NOTE]
- >To re-enable strict management mode and deploye a configuration overwrites settings outside the Guided Configuration UI. Therefore, we recommend the advanced configuration method for production services.
+ >Re-enabling strict management mode and deploying a configuration overwrites settings outside the Guided Configuration UI. Therefore, we recommend the advanced configuration method for production services.
## Troubleshooting
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
In this article, you learn how to create, list, and delete a user-assigned manag
To use Azure PowerShell locally for this article instead of using Cloud Shell:
-1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) if you haven't already.
1. Sign in to Azure.
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
In this article, you learn how to assign a managed identity to an application ro
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing. - To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [the Az PowerShell module](/powershell/azure/install-az-ps). You can also use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started).
+ - Run scripts locally by installing the latest version of [the Az PowerShell module](/powershell/azure/install-azure-powershell). You can also use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started).
## Assign a managed identity access to another application's app role
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
This article provides PowerShell and CLI script examples for sign-in using manag
[!INCLUDE [msi-qs-configure-prereqs](../../../includes/active-directory-msi-qs-configure-prereqs.md)]
-If you plan to use the Azure PowerShell or Azure CLI examples in this article, be sure to install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+If you plan to use the Azure PowerShell or Azure CLI examples in this article, be sure to install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
> [!IMPORTANT] > - All sample scripts in this article assume the command-line client is running on a VM with managed identities for Azure resources enabled. Use the VM "Connect" feature in the Azure portal to remotely connect to your VM. For details on enabling managed identities for Azure resources on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md), or one of the variant articles (using PowerShell, CLI, a template, or an Azure SDK).
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
This article provides various code and script examples for token acquisition. It
[!INCLUDE [msi-qs-configure-prereqs](../../../includes/active-directory-msi-qs-configure-prereqs.md)]
-If you plan to use the Azure PowerShell examples in this article, be sure to install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps).
+If you plan to use the Azure PowerShell examples in this article, be sure to install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
active-directory How To View Managed Identity Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-powershell.md
In this article, you learn how to view the service principal of a managed identi
- Enable [system assigned identity on a virtual machine](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity) or [application](../../app-service/overview-managed-identity.md#add-a-system-assigned-identity). - To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top right corner of code blocks.
- - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-az-ps), then sign in to Azure using `Connect-AzAccount`.
+ - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell), then sign in to Azure using `Connect-AzAccount`.
## View the service principal
active-directory Howto Assign Access Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md
Once you've configured an Azure resource with a managed identity, you can give t
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing. - To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-az-ps), then sign in to Azure using `Connect-AzAccount`.
+ - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell), then sign in to Azure using `Connect-AzAccount`.
## Use Azure RBAC to assign a managed identity access to another resource
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
description: List of services supporting managed identities
Previously updated : 01/10/2022 Last updated : 05/10/2023
Managed identities for Azure resources provide Azure services with an automatica
>[!IMPORTANT] > New technical content is added daily. This list does not include every article that talks about managed identities. Please refer to each service's content set for details on their managed identities support. Resource provider namespace information is available in the article titled [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md).
+## Services supporting managed identities
+ The following Azure services support managed identities for Azure resources:
| Azure Batch | [Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity](../../batch/batch-customer-managed-key.md) </BR> [Configure managed identities in Batch pools](../../batch/managed-identity-pools.md) | | Azure Blueprints | [Stages of a blueprint deployment](../../governance/blueprints/concepts/deployment-stages.md) | | Azure Cache for Redis | [Managed identity for storage accounts with Azure Cache for Redis](../../azure-cache-for-redis/cache-managed-identity.md) |
+| Azure Communications Gateway | [Deploy Azure Communications Gateway](../../communications-gateway/deploy.md) |
| Azure Container Apps | [Managed identities in Azure Container Apps](../../container-apps/managed-identity.md) | | Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) |
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
In this article, using PowerShell, you learn how to perform the following manage
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing. - To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-az-ps), then sign in to Azure using `Connect-AzAccount`.
+ - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell), then sign in to Azure using `Connect-AzAccount`.
## System-assigned managed identity
active-directory Qs Configure Powershell Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vmss.md
In this article, using PowerShell, you learn how to perform the managed identiti
- To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-az-ps), then sign in to Azure using `Connect-AzAccount`.
+ - Run scripts locally by installing the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell), then sign in to Azure using `Connect-AzAccount`.
## System-assigned managed identity
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
This tutorial shows you how to use a system-assigned managed identity for a Wind
- If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - To perform the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps)
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell)
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
conn.Open();
Alternatively, a quick way to test the end-to-end setup without having to write and deploy an app on the VM is using PowerShell. 1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, click **Connect**.
-2. Enter in your **Username** and **Password** for which you added when you created the Windows VM.
+2. Enter your **VM admin credentials**, which you set when you created the Windows VM.
3. Now that you have created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session. 4. Using PowerShell's `Invoke-WebRequest`, make a request to the local managed identity's endpoint to get an access token for Azure SQL.
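The shape of that token request can be sketched as follows. This is an illustrative sketch rather than the article's exact PowerShell snippet: the endpoint and `api-version` are the standard Azure Instance Metadata Service (IMDS) values, and the final `curl` call only works from inside the VM.

```shell
# IMDS token endpoint (fixed, link-local address; reachable only from inside the VM)
IMDS="http://169.254.169.254/metadata/identity/oauth2/token"
RESOURCE="https%3A%2F%2Fdatabase.windows.net%2F"   # Azure SQL resource URI, URL-encoded
URL="${IMDS}?api-version=2018-02-01&resource=${RESOURCE}"
echo "$URL"
# From the VM, the request must carry the 'Metadata: true' header:
#   curl -s -H "Metadata: true" "$URL"
```

The response body is JSON containing an `access_token` field that you pass as the bearer token when connecting to Azure SQL.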
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
You learn how to:
To use Azure PowerShell locally for this article (rather than using Cloud Shell), complete the following steps:
-1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) if you haven't already.
1. Sign in to Azure:
active-directory Linkedin Employment Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md
If your organization wants its employees to get their place of work verified on
1. Setup your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md). 1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential. 1. Deploy the custom webapp from [GitHub](https://github.com/Azure-Samples/VerifiedEmployeeIssuance).
-1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. You cannot self-service the LinkedIn company page. Today, you need to fill in [this form](https://www.linkedin.com/help/linkedin/answer/a1359065) and we can enable your organization.
+1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. You cannot self-service the LinkedIn company page. Today, you need to fill in [this form](https://aka.ms/enablelinkedin) and we can enable your organization.
1. Once you deploy the updated LinkedIn mobile app your employees can get verified. >[!IMPORTANT]
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
az identity federated-credential delete --name $ficId --identity-name $uaId --re
To use Azure PowerShell locally for this article instead of using Cloud Shell:
-1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) if you haven't already.
1. Sign in to Azure.
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md
az ad app federated-credential delete --id f6475511-fd81-4965-a00e-41e7792b7b9c
To use Azure PowerShell locally for this article instead of using Cloud Shell:
-1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) if you haven't already.
1. Sign in to Azure.
advisor Advisor Alerts Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-arm.md
To learn more about action groups, see [Create and manage action groups](../azur
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the template
advisor Advisor Alerts Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md
To learn more about action groups, see [Create and manage action groups](../azur
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the Bicep file
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 04/21/2023 Last updated : 05/10/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
Azure CNI Overlay has the following limitations:
- Windows support is still in Preview - Windows Server 2019 node pools are **not** supported for Overlay - Traffic from host network pods is not able to reach Windows Overlay pods.-- Sovereign Clouds are not supported - Virtual Machine Availability Sets (VMAS) are not supported for Overlay - Dualstack networking is not supported in Overlay - You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
When the status reflects *Registered*, refresh the registration of the *Microsof
az provider register --namespace Microsoft.ContainerService ```
+## Upgrade an existing cluster to CNI Overlay (Preview)
+
+> [!NOTE]
+> The upgrade capability is still in preview and requires the preview AKS Azure CLI extension.
+You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:
+
+- be on Kubernetes version 1.22+
+- **not** be using the dynamic pod IP allocation feature
+- **not** have network policies enabled
+- **not** be using any Windows node pools with docker as the container runtime
+
+The upgrade process triggers each node pool to be re-imaged simultaneously; upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
+
+> [!WARNING]
+> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build 20348.1668 or later**.
+
+This network disruption will only occur during the upgrade. Once the migration to Overlay has completed for all node pools, all Overlay pods will be able to communicate successfully with the Windows pods.
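Before migrating, you can check a Windows node's OS build against the 20348.1668 threshold called out in the warning above. This is an illustrative helper, not part of AKS; the sample build value is assumed, and on a real node you would read the build from `cmd /c ver`.

```shell
# Split a Windows build string like "20348.1700" into major build and revision,
# then compare against the 20348.1668 threshold from the warning above.
build="20348.1700"                  # assumed sample value
major="${build%%.*}"
rev="${build##*.}"
if [ "$major" -gt 20348 ] || { [ "$major" -eq 20348 ] && [ "$rev" -ge 1668 ]; }; then
  echo "build OK for Overlay"
else
  echo "upgrade the Windows OS build before migrating"
fi
```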
+ ## Next steps To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Ser
description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. Previously updated : 07/18/2022 Last updated : 05/10/2023 # Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS)
az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset
When new node pools are added to the cluster created above, the customer-managed key provided during the create process is used to encrypt the OS disk.
-## Encrypt your AKS cluster data disk(optional)
+## Encrypt your AKS cluster data disk
-OS disk encryption key is used to encrypt the data disk if the key isn't provided for data disk from AKS version 1.17.2. You can also encrypt AKS data disks with your other keys.
+If you have already provided a disk encryption set during cluster creation, encrypting data disks with the same disk encryption set is the default option. Therefore, this step is optional. However, if you want to encrypt data disks with a different disk encryption set, you can follow these steps.
> [!IMPORTANT] > Ensure you have the proper AKS credentials. The managed identity needs to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you'll get an error suggesting that the managed identity does not have permissions.
-```azurecli-interactive
-# Retrieve your Azure Subscription Id from id property as shown below
-az account list
-```
-
-The following example resembles output from the command:
-
-```output
-someuser@Azure:~$ az account list
-[
- {
- "cloudName": "AzureCloud",
- "id": "666e66d8-1e43-4136-be25-f25bb5de5893",
- "isDefault": true,
- "name": "MyAzureSubscription",
- "state": "Enabled",
- "tenantId": "3ebbdf90-2069-4529-a1ab-7bdcb24df7cd",
- "user": {
- "cloudShellID": true,
- "name": "someuser@azure.com",
- "type": "user"
- }
- }
-]
-```
- Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed. ```yaml
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
Title: Customize CoreDNS for Azure Kubernetes Service (AKS)
-description: Learn how to customize CoreDNS to add subdomains or extend custom DNS endpoints using Azure Kubernetes Service (AKS)
+description: Learn how to customize CoreDNS to add subdomains, extend custom DNS endpoints, and change scaling logic using Azure Kubernetes Service (AKS)
Last updated 03/03/2023
-#Customer intent: As a cluster operator or developer, I want to learn how to customize the CoreDNS configuration to add sub domains or extend to custom DNS endpoints within my network
+#Customer intent: As a cluster operator or developer, I want to learn how to customize the CoreDNS configuration to add sub domains or extend to custom DNS endpoints within my network. I also want to learn how to customize the logic for CoreDNS pod scaling.
# Customize CoreDNS with Azure Kubernetes Service
data:
For general CoreDNS troubleshooting steps, such as checking the endpoints or resolution, see [Debugging DNS resolution][coredns-troubleshooting].
+## Configure CoreDNS pod scaling
+
+Sudden spikes in DNS traffic within AKS clusters are a common occurrence due to the elasticity that AKS provides for workloads. These spikes can lead to an increase in memory consumption by CoreDNS pods, and in some cases to `Out of memory` issues. To preempt this issue, AKS clusters auto scale CoreDNS pods to reduce memory usage per pod. The default settings for this auto scaling logic are stored in the `coredns-autoscaler` ConfigMap. However, you may observe that the default auto scaling of CoreDNS pods isn't always aggressive enough to prevent `Out of memory` issues for your CoreDNS pods. In this case, you can directly modify the `coredns-autoscaler` ConfigMap. Keep in mind that simply increasing the number of CoreDNS pods without addressing the root cause of the `Out of memory` issue may only provide a temporary fix. If there isn't enough memory available across the nodes where the CoreDNS pods are running, increasing the number of CoreDNS pods won't help. You may need to investigate further and implement appropriate solutions, such as optimizing resource usage, adjusting resource requests and limits, or adding more memory to the nodes.
+
+CoreDNS uses the [horizontal cluster proportional autoscaler][cluster-proportional-autoscaler] for pod auto scaling. You can edit the `coredns-autoscaler` ConfigMap to configure the scaling logic for the number of CoreDNS pods. The ConfigMap currently supports two key values, `linear` and `ladder`, which correspond to two supported control modes. The `linear` controller yields a number of replicas in the [min,max] range equivalent to `max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )`. The `ladder` controller calculates the number of replicas by consulting two different step functions, one for core scaling and another for node scaling, yielding the max of the two replica values. For more information on the control modes and ConfigMap format, see the [upstream documentation][cluster-proportional-autoscaler-control-patterns].
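The `linear` formula above can be checked with a quick sketch. The parameter names follow the upstream autoscaler documentation; the sample cluster sizes are arbitrary.

```shell
# linear mode: replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica))
cores=600 nodes=10 coresPerReplica=256 nodesPerReplica=16
byCores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))   # ceil via integer math
byNodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))
if [ "$byCores" -gt "$byNodes" ]; then replicas=$byCores; else replicas=$byNodes; fi
echo "$replicas"   # 600 cores dominates: ceil(600/256) = 3 replicas
```

The `min` and `max` bounds from the ConfigMap, if set, would then clamp this result.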
+
+To retrieve the `coredns-autoscaler` ConfigMap, run the `kubectl get configmap coredns-autoscaler -n kube-system -o yaml` command, which returns output similar to the following:
+
+```yaml
+apiVersion: v1
+data:
+ ladder: '{"coresToReplicas":[[1,2],[512,3],[1024,4],[2048,5]],"nodesToReplicas":[[1,2],[8,3],[16,4],[32,5]]}'
+kind: ConfigMap
+metadata:
+ name: coredns-autoscaler
+ namespace: kube-system
+ resourceVersion: "..."
+ creationTimestamp: "..."
+```
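If you instead want the `linear` control mode, you could replace the `ladder` key with a `linear` key. The JSON values below are illustrative only, not recommended settings; see the upstream documentation for tuning guidance.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-autoscaler
  namespace: kube-system
data:
  # illustrative values only; tune coresPerReplica/nodesPerReplica for your cluster
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":2,"max":10,"preventSinglePointFailure":true}'
```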
+ ### Enable DNS query logging 1. Add the following configuration to your coredns-custom ConfigMap:
To learn more about core network concepts, see [Network concepts for application
[kubectl delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [coredns hosts]: https://coredns.io/plugins/hosts/ [coredns-troubleshooting]: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
+[cluster-proportional-autoscaler]: https://github.com/kubernetes-sigs/cluster-proportional-autoscaler
+[cluster-proportional-autoscaler-control-patterns]: https://github.com/kubernetes-sigs/cluster-proportional-autoscaler#control-patterns-and-configmap-formats
<!-- LINKS - internal --> [concepts-network]: concepts-network.md
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Last updated 05/01/2023
-# Deploy a Kubernetes application from Azure Marketplace (preview)
+# Deploy a Kubernetes application from Azure Marketplace
[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners.
Included among these solutions are Kubernetes application-based container offers
This feature is currently supported only in the following regions: -- East US-- West US-- Central US-- West Central US-- South Central US-- East US 2-- West US 2-- West Europe-- North Europe-- Canada Central-- Southeast Asia-- Australia East-- Central India
+- East US, East US 2 EUAP, West US, Central US, West Central US, South Central US, East US 2, West US 2, West Europe, North Europe, Canada Central, Southeast Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US 3, Norway East, South Africa North, North Central US, Australia Southeast, Switzerland North, Japan West, South India
Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
Verify the deployment by using the following command to list the extensions that are running on your cluster:
- ```azurecli-interactive
- az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
- ```
+```azurecli-interactive
+az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
+```
### [Portal](#tab/azure-portal)
If you experience issues, see the [troubleshooting checklist for failed deployme
<!-- LINKS -->
[azure-marketplace]: /marketplace/azure-marketplace-overview
[cluster-extensions]: ./cluster-extensions.md
[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md
[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
Title: Customize cluster egress with a user-defined routing table
-description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS) with a routing table.
+ Title: Customize cluster egress with a user-defined routing table in Azure Kubernetes Service (AKS)
+description: Learn how to define a custom egress route with a routing table in Azure Kubernetes Service (AKS).
Previously updated : 06/29/2020 Last updated : 05/10/2023
-#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
+#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this upfront, I don't want AKS-provided load balancer configurations.
-# Customize cluster egress with a user-defined routing table
+# Customize cluster egress with a user-defined routing table in Azure Kubernetes Service (AKS)
-Egress from an AKS cluster can be customized to fit specific scenarios. By default, AKS will provision a Standard SKU Load Balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress.
+You can customize the egress for your Azure Kubernetes Service (AKS) clusters to fit specific scenarios. AKS provisions a `Standard` SKU load balancer for egress by default. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or the scenario requires extra hops for egress.
-This article walks through how to customize a cluster's egress route to support custom network scenarios, such as those which disallows public IPs and requires the cluster to sit behind a network virtual appliance (NVA).
+This article walks through how to customize a cluster's egress route to support custom network scenarios. These scenarios include ones which disallow public IPs and require the cluster to sit behind a network virtual appliance (NVA).
## Prerequisites
-* Azure CLI version 2.0.81 or greater
-* API version of `2020-01-01` or greater
-## Limitations
-* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and `load-balancer-sku` of `Standard`.
+* Azure CLI version 2.0.81 or greater. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* API version `2020-01-01` or greater.
+
+## Requirements and limitations
+
+Using outbound type is an advanced networking scenario and requires proper network configuration. The following requirements and limitations apply to using outbound type:
+
+* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and a `load-balancer-sku` of `Standard`.
* Setting `outboundType` to a value of `UDR` requires a user-defined route with valid outbound connectivity for the cluster. * Setting `outboundType` to a value of `UDR` implies the ingress source IP routed to the load-balancer may **not match** the cluster's outgoing egress destination address.
-## Overview
+## Overview of customizing egress with a user-defined routing table
-> [!NOTE]
-> Using outbound type is an advanced networking scenario and requires proper network configuration.
+AKS doesn't automatically configure egress paths if `userDefinedRouting` is set, which means you must configure the egress.
-If `userDefinedRouting` is set, AKS won't automatically configure egress paths. The egress setup must be done by you.
+When you don't use standard load balancer (SLB) architecture, you must establish explicit egress. You must deploy your AKS cluster into an existing virtual network with a subnet that has been previously configured. This architecture requires explicitly sending egress traffic to an appliance, such as a firewall, gateway, or proxy, so that a public IP address assigned to the standard load balancer or appliance can handle the Network Address Translation (NAT).
-The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
+### Load balancer creation with `userDefinedRouting`
-#### Load balancer creation with userDefinedRouting
+AKS clusters with an outbound type of UDR get a standard load balancer only when the first Kubernetes service of type `loadBalancer` is deployed. The load balancer is configured with a public IP address for *inbound* requests and a backend pool for *inbound* requests. The Azure cloud provider configures inbound rules, but it **doesn't configure outbound public IP address or outbound rules**. Your UDR is the only source for egress traffic.
-AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type 'loadBalancer' is deployed. The load balancer is configured with a public IP address for *inbound* requests and a backend pool for *inbound* requests. Inbound rules are configured by the Azure cloud provider, but **no outbound public IP address or outbound rules** are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.
-
-Azure load balancers [don't incur a charge until a rule is placed](https://azure.microsoft.com/pricing/details/load-balancer/).
+> [!NOTE]
+> Azure load balancers [don't incur a charge until a rule is placed](https://azure.microsoft.com/pricing/details/load-balancer/).
## Deploy a cluster with outbound type of UDR and Azure Firewall
-To illustrate the application of a cluster with outbound type using a user-defined route, a cluster can be configured on a virtual network with an Azure Firewall on its own subnet. See this example on the [restrict egress traffic with Azure firewall example](limit-egress-traffic.md).
+To see an application of a cluster with outbound type using a user-defined route, see this [restrict egress traffic with Azure firewall example](limit-egress-traffic.md).
> [!IMPORTANT]
-> Outbound type of UDR requires there is a route for 0.0.0.0/0 and next hop destination of NVA (Network Virtual Appliance) in the route table.
-> The route table already has a default 0.0.0.0/0 to Internet, without a Public IP to SNAT just adding this route will not provide you egress. AKS will validate that you don't create a 0.0.0.0/0 route pointing to the Internet but instead to NVA or gateway, etc.
-> When using an outbound type of UDR, a load balancer public IP address for **inbound requests** is not created unless a service of type *loadbalancer* is configured. A public IP address for **outbound requests** is never created by AKS if an outbound type of UDR is set.
+> Outbound type of UDR requires a route for 0.0.0.0/0 and a next hop destination of NVA in the route table.
+> The route table already has a default 0.0.0.0/0 to the Internet. Without a public IP address for Azure to use for Source Network Address Translation (SNAT), simply adding this route won't provide you outbound Internet connectivity. AKS validates that you don't create a 0.0.0.0/0 route pointing to the Internet but instead to a gateway, NVA, etc.
+> When using an outbound type of UDR, a load balancer public IP address for **inbound requests** isn't created unless you configure a service of type *loadbalancer*. AKS never creates a public IP address for **outbound requests** if you set an outbound type of UDR.
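As a plain-shell illustration of the constraint above (this isn't AKS's actual validation code; it only mirrors the documented rule that the 0.0.0.0/0 route must target an NVA or gateway rather than the Internet):

```shell
# Assumed sample values, as they might be read from a route table entry
address_prefix="0.0.0.0/0"
next_hop_type="VirtualAppliance"   # e.g. VirtualAppliance, VirtualNetworkGateway, or Internet
if [ "$address_prefix" = "0.0.0.0/0" ] && [ "$next_hop_type" = "Internet" ]; then
  echo "invalid for outboundType=UDR: default route must point to an NVA or gateway"
else
  echo "route is acceptable for outboundType=UDR"
fi
```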
## Next steps
-See [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md).
-
-See [how to create, change, or delete a route table](../virtual-network/manage-route-table.md).
+For more information on user-defined routes and Azure networking, see:
-<!-- LINKS - internal -->
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[byo-route-table]: configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet
+* [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md)
* [How to create, change, or delete a route table](../virtual-network/manage-route-table.md)
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
To associate the cluster with the firewall, the dedicated subnet for the cluster
az network vnet subnet update -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --route-table $FWROUTE_TABLE_NAME ```
-## Deploy an AKS cluster with a UPR outbound type to the existing network
+## Deploy an AKS cluster with a UDR outbound type to the existing network
Now, you can deploy an AKS cluster into the existing virtual network. You will use the [`userDefinedRouting` outbound type](egress-outboundtype.md), which ensures that any outbound traffic is forced through the firewall and no other egress paths will exist. The [`loadBalancer` outbound type](egress-outboundtype.md#outbound-type-of-loadbalancer) can also be used.
aks Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-azure-ad.md
Title: AKS-managed Azure Active Directory integration description: Learn how to configure Azure AD for your Azure Kubernetes Service (AKS) clusters. Previously updated : 05/02/2023 Last updated : 05/10/2023
Learn more about the Azure AD integration flow in the [Azure AD documentation](c
## Before you begin * Make sure you have Azure CLI version 2.29.0 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin). The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. You'll experience authentication issues if you don't use the correct version.
+* You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`][kubelogin]. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. You'll experience authentication issues if you don't use the correct version.
* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
-* This article requires you have an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command.
+* This configuration requires you have an Azure AD group for your cluster. This group is registered as an admin group on the cluster to grant admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command.
## Enable AKS-managed Azure AD integration on your AKS cluster
Learn more about the Azure AD integration flow in the [Azure AD documentation](c
### Use an existing cluster
-* Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access on your cluster.
+Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access on your cluster.
- ```azurecli-interactive
- az aks update -g MyResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id-1>,<id-2> [--aad-tenant-id <id>]
- ```
+```azurecli-interactive
+az aks update -g MyResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id-1>,<id-2> [--aad-tenant-id <id>]
+```
- A successful activation of an AKS-managed Azure AD cluster has the following section in the response body:
+A successful activation of an AKS-managed Azure AD cluster has the following section in the response body:
- ```output
- "AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
- ```
+```output
+"AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+```
### Upgrade a legacy Azure AD cluster to AKS-managed Azure AD integration
-* If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration with no downtime using the [`az aks update`][az-aks-update] command.
+If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration with no downtime using the [`az aks update`][az-aks-update] command.
- ```azurecli-interactive
- az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
- ```
+```azurecli-interactive
+az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
+```
- A successful migration of an AKS-managed Azure AD cluster has the following section in the response body:
+A successful migration of an AKS-managed Azure AD cluster has the following section in the response body:
- ```output
- "AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
- ```
+```output
+"AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+```
## Access your AKS-managed Azure AD enabled cluster
Learn more about the Azure AD integration flow in the [Azure AD documentation](c
## Non-interactive sign-in with kubelogin
-There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential. Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [`kubelogin`](https://github.com/Azure/kubelogin) binary in the execution PATH.
+There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`][kubelogin] to connect to the cluster with a non-interactive service principal credential.
-* When getting the clusterUser credential, you can use the `format` query parameter to overwrite the default behavior change. You can set the value to `azure` to use the original kubeconfig format:
+Azure AD integrated clusters using a Kubernetes version newer than version 1.24 automatically use the `kubelogin` format. Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires the [`kubelogin`][kubelogin] binary in the execution PATH.
+
+* When getting the clusterUser credential, you can use the `format` query parameter to overwrite the default behavior. You can set the value to `azure` to use the original kubeconfig format:
    ```azurecli-interactive
    az aks get-credentials --format azure
    ```
-* Azure AD integrated clusters using a Kubernetes version newer than 1.24 automatically use the `kubelogin` format.
- * If your Azure AD integrated clusters use a Kubernetes version older than 1.24, you need to convert the kubeconfig format manually.
-
-    ```azurecli-interactive
-    export KUBECONFIG=/path/to/kubeconfig
-    kubelogin convert-kubeconfig
-    ```
-
+ > [!NOTE]
-> If you meet the `error: The azure auth plugin has been removed.`, you need to run `kubelogin convert-kubeconfig` to convert the kubeconfig format manually.
+> If you receive the message **error: The Azure auth plugin has been removed.**, you need to run the command `kubelogin convert-kubeconfig` to convert the kubeconfig format manually.
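As an illustration, a non-interactive pipeline step using a service principal might look like the following sketch. The cluster name and the `AAD_SERVICE_PRINCIPAL_*` values are placeholders for your own credentials; `-l spn` selects the service principal login mode documented by `kubelogin`:

```azurecli-interactive
az aks get-credentials -g MyResourceGroup -n myManagedCluster
kubelogin convert-kubeconfig -l spn
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<appId>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<password>
kubectl get nodes
```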
## Troubleshoot access issues with AKS-managed Azure AD
If you're permanently blocked by not having access to a valid Azure AD group wit
<!-- LINKS - external -->
[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
+[kubelogin]: https://github.com/Azure/kubelogin
<!-- LINKS - Internal -->
[aks-concepts-identity]: concepts-identity.md
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
The Web Application Routing add-on deploys the following components:
### Import certificate into Azure Key Vault

-- Import the SSL certificate into Azure Key Vault using the [`az keyvault certificate import`][az-keyvault-certificate-import] command.
+- Import the SSL certificate into Azure Key Vault using the [`az keyvault certificate import`][az-keyvault-certificate-import] command. If your certificate is password protected, you can pass the password through the `--password` flag.
```azurecli-interactive
- az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx
+ az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx [--password <certificate password if specified>]
    ```

### Create an Azure DNS zone
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
This article helps you understand this new authentication feature, and reviews t
## Dependencies

- AKS supports Azure AD workload identities on version 1.22 and higher.
- The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].

## Azure Identity client libraries
This article helps you understand this new authentication feature, and reviews t
In the Azure Identity client libraries, choose one of the following approaches:

- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.
-- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`.
+- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`.
- Use `WorkloadIdentityCredential` directly.

The following table provides the **minimum** package version required for each language's client library.
Azure AD workload identity supports the following mappings related to a service
- One-to-one where a service account references an Azure AD object.
- Many-to-one where multiple service accounts reference the same Azure AD object.
-- One-to-many where a service account references multiple Azure AD objects by changing the client ID annotation.
+- One-to-many where a service account references multiple Azure AD objects by changing the client ID annotation. For more information, see [How to federate multiple identities with a Kubernetes service account][multiple-identities].
> [!NOTE]
> If the service account annotations are updated, you need to restart the pod for the changes to take effect.
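For instance, switching a service account to a different identity and restarting its pods might look like the following sketch. The namespace, service account, deployment name, and client ID are placeholders; `azure.workload.identity/client-id` is the annotation the workload identity webhook reads:

```azurecli-interactive
kubectl annotate serviceaccount workload-identity-sa --namespace my-namespace azure.workload.identity/client-id=<new-client-id> --overwrite
kubectl rollout restart deployment my-app --namespace my-namespace
```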
The following table summarizes our migration or deployment recommendations for w
[custom-resource-definition]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[service-account-token-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
[oidc-federation]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
+[multiple-identities]: https://azure.github.io/azure-workload-identity/docs/faq.html#how-to-federate-multiple-identities-with-a-kubernetes-service-account
<!-- INTERNAL LINKS -->
[use-azure-ad-pod-identity]: use-azure-ad-pod-identity.md
[azure-ad-workload-identity]: ../active-directory/develop/workload-identities-overview.md
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
This quickstart describes using PowerShell from the command line to create an Az
- **Azure subscription**: Visit [Azure Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) to create an account.
- **Azure Active Directory**: Your subscription must be associated with an Azure Active Directory tenant and you must have an account in that directory. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
-- **Azure PowerShell**. To find the installed version, run `Get-Module -ListAvailable Az`. To install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+- **Azure PowerShell**. To find the installed version, run `Get-Module -ListAvailable Az`. To install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
## Import Az.AnalysisServices module
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Return status codes:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Before using PowerShell, [install or update the latest Azure PowerShell module](/powershell/azure/install-az-ps).
+Before using PowerShell, [install or update the latest Azure PowerShell module](/powershell/azure/install-azure-powershell).
To run sync, use [Sync-AzAnalysisServicesInstance](/powershell/module/az.analysisservices/sync-AzAnalysisServicesinstance).
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
This article shows how to automate backup and restore operations of your API Man
* An Azure storage account. If you don't have one, see [Create a storage account](../storage/common/storage-account-create.md).
* [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account to hold the backup data.
-* The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps).
+* The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Configure storage account access

When running a backup or restore operation, you need to configure access to the storage account. API Management supports two storage access mechanisms: an Azure Storage access key, or an API Management managed identity.
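As a sketch of the access-key mechanism invoked through the REST API with `az rest`, the request might look like the following. The subscription, resource names, and the API version shown are placeholders to verify against the current API Management REST reference:

```azurecli
az rest --method post \
  --uri "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/backup?api-version=2022-08-01" \
  --body '{"storageAccount": "<storage-account>", "containerName": "<container>", "backupName": "<blob-name>", "accessKey": "<storage-access-key>"}'
```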
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
To follow the steps described in this article, you must have:
- A CER file for the root certificate of the PFX certificates. For more information, see [Certificates for the back end](../application-gateway/certificates-for-backend-authentication.md). For testing purposes, optionally generate [self-signed certificates](../application-gateway/self-signed-certificates.md).
-* The latest version of Azure PowerShell. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps).
+* The latest version of Azure PowerShell. If you haven't already, [install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Scenario
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
To set up a managed identity in the Azure portal, you'll first create an API Man
The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
-1. If needed, install Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
+1. If needed, install Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-azure-powershell). Then run `Connect-AzAccount` to create a connection with Azure.
2. Use the following code to create the instance with a system-assigned managed identity. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
To set up a managed identity in the portal, you'll first create an API Managemen
The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
-1. If needed, install the Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
+1. If needed, install the Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-azure-powershell). Then run `Connect-AzAccount` to create a connection with Azure.
1. Use the following code to create the instance. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
api-management Authorizations How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-azure-ad.md
Create an Azure AD application for the API and give it the appropriate permissio
1. On the confirmation page, select **Allow access**.
1. After successful authorization, the browser is redirected to API Management and the window is closed. In API Management, select **Next**.
1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
-1. For this example, select **API Management service `<service name>`**.
+1. For this example, select **API Management service `<service name>`**, and then select **+ Add members**. Your access policy should appear in the **Members** table.
:::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
api-management Authorizations How To Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-github.md
You learn how to:
1. After successful authorization, the browser is redirected to API Management and the window is closed. When prompted during redirection, select **Allow access**. In API Management, select **Next**.
1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
-1. For this example, select **API Management service `<service name>`**.
+1. For this example, select **API Management service `<service name>`**, and then select **+ Add members**. Your access policy should appear in the **Members** table.
:::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::

1. Select **Complete**.
The preceding policy definition consists of three parts:
## Next steps

* Learn more about [access restriction policies](api-management-access-restriction-policies.md).
-* Learn more about GitHub's [REST API](https://docs.github.com/en/rest?apiVersion=2022-11-28)
+* Learn more about GitHub's [REST API](https://docs.github.com/en/rest?apiVersion=2022-11-28)
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
After 30 September 2023, if you prefer not to update your tools, scripts, and pr
* **Azure CLI** - Run `az version` to check your version. If you're running version 2.38.0 or later, no action is required. Use the `az upgrade` command to upgrade the Azure CLI if necessary. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli).
-* **Azure PowerShell** - Run `Get-Module -ListAvailable -Name Az` to check your version. If you're running version 8.1.0 or later, no action is required. Use `Update-Module -Name Az -Repository PSGallery` to update the module if necessary. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+* **Azure PowerShell** - Run `Get-Module -ListAvailable -Name Az` to check your version. If you're running version 8.1.0 or later, no action is required. Use `Update-Module -Name Az -Repository PSGallery` to update the module if necessary. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
* **Other tools** - Use the following versions (or later):
After 30 September 2023, if you prefer not to update your tools, scripts, and pr
## More information

* [Azure CLI](/cli/azure/update-azure-cli)
-* [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure PowerShell](/powershell/azure/install-azure-powershell)
* [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
* [Terraform on Azure](/azure/developer/terraform/)
* [Bicep](../../azure-resource-manager/bicep/overview.md)
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2022-08
## Scenario 2: Migrate a network-injected API Management instance
-Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration (see the following section). You can also cause migrate to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md).
+Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration (see the following section). You can also migrate to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md).
### Update VNet configuration
Update the configuration of the VNet in each location (region) where the API Man
* A Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance.
+> [!IMPORTANT]
+> When you update the VNet configuration for migration to the `stv2` platform, you must provide a public IP address resource, or migration won't succeed. In an internal VNet, this public IP address is used only for management operations.
+
For details, see [Prerequisites for network connections](api-management-using-with-vnet.md#prerequisites).

#### Update VNet configuration
To verify that the migration was successful, check the [platform version](comput
## Next steps

* Learn about [stv1 platform retirement](breaking-changes/stv1-platform-retirement-august-2024.md).
-* For instances deployed in a VNet, see the [Virtual network configuration reference](virtual-network-reference.md).
+* For instances deployed in a VNet, see the [Virtual network configuration reference](virtual-network-reference.md).
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
Azure API Management helps organizations publish APIs to external, partner, and
[!INCLUDE [cloud-shell-try-it-no-header](../../includes/cloud-shell-try-it-no-header.md)]
- If you choose to install and use the PowerShell locally, this quickstart requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+ If you choose to install and use the PowerShell locally, this quickstart requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create resource group
api-management Powershell Add User And Get Subscription Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-add-user-and-get-subscription-key.md
This sample script creates a user in API Management and gets a subscription key.
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Backup Restore Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-backup-restore-apim-service.md
The sample script in this article shows how to backup and restore the API Manage
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Create Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-create-apim-service.md
This sample script creates a Developer SKU API Management Service.
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Import Api And Add To Product https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-import-api-and-add-to-product.md
This sample script imports an API and adds it to an API Management product.
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Scale And Addregion Apim Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-scale-and-addregion-apim-service.md
This sample script scales and adds region to the API Management service instance
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Secure Backend With Mutual Certificate Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-secure-backend-with-mutual-certificate-authentication.md
This sample script secures backend with mutual certificate authentication.
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Setup Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-setup-custom-domain.md
This sample script sets up custom domain on proxy and portal endpoint of the API
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
api-management Powershell Setup Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/scripts/powershell-setup-rate-limit-policy.md
This sample script sets up rate limit policy.
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Using the new IPs, update any of your resources or networking components to ensu
App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).

```azurecli
-az network vnet subnet update --resource-group $VNET_RG -name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
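+
+# Optional check (a sketch, using placeholder names): confirm the delegation took effect
+az network vnet subnet show --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --query "delegations[].serviceName"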
```

## 6. Confirm there are no locks on the virtual network
az lock list --resource-group $VNET_RG --resource <vnet-name> --resource-type Mi
Delete any existing locks using the following command.

```azurecli
-az lock delete --resource-group jordan-rg --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
+az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
```

For related commands to check if your subscription or resource group has locks, see [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
+---
+title: Upgrade to App Service Environment v3
+description: Take the first steps toward upgrading to App Service Environment v3.
+ms.date: 05/11/2023
+---
+# Upgrade to App Service Environment v3
+
+> [!IMPORTANT]
+> If you're currently using App Service Environment v1 or v2, you must migrate your workloads to [App Service Environment v3](overview.md). [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Failure to migrate by that date will result in loss of the environments, running applications, and all application data.
+>
+
+This page is your one-stop shop for guidance and resources to help you upgrade successfully with minimal downtime. Follow the guidance to plan and complete your upgrade as soon as possible. This page will be updated with the latest information as it becomes available.
+
+## Upgrade steps
+
+|Step|Action|Resources|
+|-|-|-|
+|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using the migration feature.<br><br>- [Automated upgrade using the migration feature](migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)|
+|**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
+|**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
+|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
+|**5**|**Learn more**|[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
+
+## Additional information
+
+### What are the benefits of upgrading?
+
+App Service Environment v3 is the latest version of App Service Environment. It's easier to use, runs on more powerful infrastructure that can go up to 64 cores and 256-GB RAM with faster scaling speeds for both Windows and Linux, and has a simpler network topology. For more information about these and other benefits, see the following resources.
+
+- [Three reasons why you should prioritize migrating to App Service Environment v3 for your business](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/three-reasons-why-you-should-prioritize-migrating-to-app-service/ba-p/3596628)
+- [Estimate your cost savings by migrating to App Service Environment v3](https://azure.github.io/AppService/2023/03/02/App-service-environment-v3-pricing.html)
+- [Using App Service Environment v3 in Compliance-Oriented Industries](https://azure.microsoft.com/resources/using-app-service-environment-v3-in-compliance-oriented-industries/)
+
+### What changes when upgrading to App Service Environment v3?
+
+- [App Service Environment v3 overview](overview.md)
+- [App Service Environment version comparison](version-comparison.md)
+- [Feature differences](overview.md#feature-differences)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about App Service Environment v3](overview.md)
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
When the App Service plan has been created, the Azure CLI shows information simi
## Create a Docker Compose app

> [!NOTE]
-> Docker Compose on Azure App Services currently has a limit of 4,000 characters at this time.
+> Docker Compose on Azure App Services currently has a limit of 4,000 characters when converted to Base64.
In your Cloud Shell terminal, create a multi-container [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp#az-webapp-create) command. Don't forget to replace _\<app_name>_ with a unique app name (valid characters are `a-z`, `0-9`, and `-`).
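
Because the 4,000-character limit applies to the Base64-encoded configuration rather than the raw YAML, checking the file size alone can be misleading. A minimal shell sketch (the file name and sample content below are illustrative) that checks the encoded length before deploying:

```shell
# Write a small sample compose file (illustrative content only).
printf 'version: "3.3"\nservices:\n  web:\n    image: nginx\n' > docker-compose.yml

# Measure the Base64-encoded length, which is what the limit applies to.
encoded_len=$(base64 < docker-compose.yml | tr -d '\n' | wc -c)
if [ "$encoded_len" -le 4000 ]; then
  echo "within limit: $encoded_len Base64 characters"
else
  echo "too large: $encoded_len Base64 characters"
fi
```

Base64 inflates input by roughly a third, so a compose file near 3,000 raw characters can already exceed the encoded limit.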
app-service Terraform Secure Backend Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/terraform-secure-backend-frontend.md
Browse to the [Azure documentation](/azure/developer/terraform/) to learn how to
## The complete terraform file
-To use this file, replace the placeholders _\<unique-frontend-app-name>_ and _\<unique-frontend-app-name>_ (app name is used to form a unique DNS name worldwide).
+To use this file, replace the placeholders _\<unique-frontend-app-name>_ and _\<unique-backend-app-name>_ (app name is used to form a unique DNS name worldwide).
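
Hand-picked names can collide because each app name must be unique worldwide. One common approach, sketched here under the assumption that the configuration uses the hashicorp/random provider (the name prefixes are illustrative), is to derive a random suffix inside the configuration itself:

```hcl
# Hypothetical fragment: derive globally unique app names with random_string
# from the hashicorp/random provider instead of hand-picking them.
resource "random_string" "suffix" {
  length  = 6
  lower   = true
  upper   = false
  special = false
}

locals {
  frontend_app_name = "myfrontend-${random_string.suffix.result}"
  backend_app_name  = "mybackend-${random_string.suffix.result}"
}
```

The suffix is stored in state, so subsequent `terraform apply` runs keep the same names.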
```hcl
terraform {
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
Without any further changes, your code is ready to be run in Azure. To debug you
# [Azure PowerShell](#tab/ps)
-1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
+1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine.
1. Sign in to Azure CLI with the following cmdlet using your Azure AD user:
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
For more information on adding an Active Directory admin, see [Provision an Azur
# [Azure PowerShell](#tab/ps)
-1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
+1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine.
1. Sign in to Azure CLI with the following cmdlet using your Azure AD user:
application-gateway Add Http Header Rewrite Rule Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/add-http-header-rewrite-rule-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Before you begin

-- You need to run Azure PowerShell locally to complete the steps in this article. You also need to have Az module version 1.0.0 or later installed. Run `Import-Module Az` and then `Get-Module Az` to determine the version that you have installed. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
+- You need to run Azure PowerShell locally to complete the steps in this article. You also need to have Az module version 1.0.0 or later installed. Run `Import-Module Az` and then `Get-Module Az` to determine the version that you have installed. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
- You need to have an Application Gateway v2 SKU instance. Rewriting headers isn't supported in the v1 SKU. If you don't have the v2 SKU, create an [Application Gateway v2 SKU](./tutorial-autoscale-ps.md) instance before you begin.

## Create required objects
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
This article walks you through the steps to configure a Standard v1 Application
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-1. Install the latest version of the Azure PowerShell module by following the [install instructions](/powershell/azure/install-az-ps).
+1. Install the latest version of the Azure PowerShell module by following the [install instructions](/powershell/azure/install-azure-powershell).
2. You create a virtual network and a subnet for Application Gateway. Make sure that no virtual machines or cloud deployments are using the subnet. Application Gateway must be by itself in a virtual network subnet. 3. The servers that you configure to use the application gateway must exist or have their endpoints created either in the virtual network or with a public IP/VIP assigned.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
The virtual network resource supports [DNS server](../virtual-network/manage-vir
### Virtual network permission

Since the application gateway resource is deployed inside a virtual network, we also perform a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations. You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that the users or service principals that operate application gateways also have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission on the Virtual Network or Subnet.
-You may use the built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions). You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes.
+You may use the built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions).
+
+> [!NOTE]
+> You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes.
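
When a built-in role is too broad, a custom role can grant only the subnet join action named above. The sketch below writes a role definition file (the role name and subscription ID are placeholders) and notes the Azure CLI command that would create it:

```shell
# Hypothetical sketch: a minimal custom role granting only the subnet join
# action Application Gateway requires. <subscription-id> is a placeholder.
cat > appgw-subnet-joiner.json <<'EOF'
{
  "Name": "Application Gateway Subnet Joiner",
  "Description": "Can join an application gateway to a virtual network subnet",
  "Actions": ["Microsoft.Network/virtualNetworks/subnets/join/action"],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
EOF
# Create it with Azure CLI (requires az login), for example:
#   az role definition create --role-definition appgw-subnet-joiner.json
echo "wrote appgw-subnet-joiner.json"
```

After the role exists, assign it at the virtual network or subnet scope and allow time for the cache refresh described in the note above.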
#### Identifying affected users or service principals for your subscription

By visiting Azure Advisor for your account, you can verify if your subscription has any users or service principals with insufficient permission. The details of that recommendation are as follows:
By visiting Azure Advisor for your account, you can verify if your subscription
**Category**: Reliability </br> **Impact**: High </br>
+#### Using temporary Azure Feature Exposure Control (AFEC) flag
+
+As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) that you can register for until you fix the permissions for all your users and/or service principals. Register for the given feature by following the same steps as a [preview feature registration](../azure-resource-manager/management/preview-features.md?#required-access) for your Azure subscription.
+
+**Name**: Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck </br>
+**Description**: Disable Application Gateway Subnet Permission Check </br>
+**ProviderNamespace**: Microsoft.Network </br>
+**EnrollmentType**: AutoApprove </br>
+
+> [!NOTE]
+> The provision to circumvent this virtual network permission check by using this feature control (AFEC) is available only for a limited period, **until 30th June 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions.
+
## Network security groups

Network security groups (NSGs) are supported on Application Gateway.
application-gateway Configure Keyvault Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-keyvault-ps.md
For more information, see [TLS termination with Key Vault certificates](key-vaul
This article shows you how to use an Azure PowerShell script to integrate your key vault with your application gateway for TLS/SSL termination certificates.
-This article requires Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). To run the commands in this article, you also need to create a connection with Azure by running `Connect-AzAccount`.
+This article requires Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this article, you also need to create a connection with Azure by running `Connect-AzAccount`.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Before you begin
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
You can also complete this quickstart using [Azure CLI](quick-create-cli.md) or
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Azure PowerShell version 1.0.0 or later](/powershell/azure/install-az-ps) (if you run Azure PowerShell locally).
+- [Azure PowerShell version 1.0.0 or later](/powershell/azure/install-azure-powershell) (if you run Azure PowerShell locally).
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
application-gateway Redirect External Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Redirect Http To Https Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This tutorial requires the Azure PowerShell module version 1.0.0 or later to create a certificate and install IIS. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
+This tutorial requires the Azure PowerShell module version 1.0.0 or later to create a certificate and install IIS. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a self-signed certificate
application-gateway Redirect Http To Https Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
+This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a self-signed certificate
application-gateway Redirect Internal Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/waf-custom-rules-powershell.md
This script creates an Application Gateway Web Application Firewall that uses cu
If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later.
-1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
2. To create a connection with Azure, run `Connect-AzAccount`. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-autoscale-ps.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This tutorial requires that you run an administrative Azure PowerShell session locally. You must have Azure PowerShell module version 1.0.0 or later installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
+This tutorial requires that you run an administrative Azure PowerShell session locally. You must have Azure PowerShell module version 1.0.0 or later installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
## Sign in to Azure
application-gateway Tutorial Http Header Rewrite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-http-header-rewrite-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-This article requires that you run Azure PowerShell locally. You must have Az module version 1.0.0 or later installed. Run `Import-Module Az` and then`Get-Module Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
+This article requires that you run Azure PowerShell locally. You must have Az module version 1.0.0 or later installed. Run `Import-Module Az` and then `Get-Module Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
## Sign in to Azure
application-gateway Tutorial Manage Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Tutorial Multiple Sites Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Tutorial Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a self-signed certificate
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
Because of the time needed to create resources, it can take up to 90 minutes to complete this procedure.
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 03/20/2023 Last updated : 05/10/2023 # Install and run Form Recognizer containers
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD051 -->
+
::: moniker range="form-recog-3.0.0"
[!INCLUDE [applies to v3.0](../includes/applies-to-v3-0.md)]
::: moniker-end
Azure Form Recognizer is an Azure Applied AI Service that lets you build automat
::: moniker range="form-recog-3.0.0"

In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.
-* **Read** and **Layout** models are supported by Form Recognizer v3.0 containers.
+* **Read**, **Layout**, **General Document**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by Form Recognizer v3.0 containers.
* **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are currently only supported in the [v2.1 containers](form-recognizer-container-install-run.md?view=form-recog-2.1.0&preserve-view=true).
::: moniker-end

::: moniker range="form-recog-2.1.0"
+
+> [!IMPORTANT]
+>
+> Form Recognizer v3.0 containers are now generally available. If you are getting started with containers, consider using the v3 containers.
+
In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.

* **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by six Form Recognizer feature containers.
Feature container | Supporting container(s) |
|--|--|
| **Read** | None |
| **Layout** | None |
+| **General Document** | Layout |
+| **Invoice** | Layout|
+| **Receipt** | Read |
+| **ID Document** | Read|
+| **Custom Template** | Layout |
+
:::moniker-end

#### Recommended CPU cores and memory
Feature container | Supporting container(s) |
:::moniker range="form-recog-3.0.0"
-##### Read and Layout containers
+##### Form Recognizer containers
| Container | Minimum | Recommended |
|--|--|--|
-| `Read` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Read` | `8` cores, 10-GB memory | `8` cores, 24-GB memory|
| `Layout` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
+| `General Document` | `8` cores, 12-GB memory | `8` cores, 24-GB memory|
+| `ID Document` | `8` cores, 8-GB memory | `8` cores, 24-GB memory |
+| `Invoice` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Receipt` | `8` cores, 11-GB memory | `8` cores, 24-GB memory |
+| `Custom Template` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
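
If you run these containers with Compose deploy limits, the recommended values from the table above can be pinned directly. A hypothetical fragment for the Layout container (resource values taken from the recommended column; honored by `docker compose` and swarm mode):

```yml
# Hypothetical fragment: cap the Layout container at the recommended
# 8 cores / 24-GB memory from the table above.
services:
  azure-cognitive-service-layout:
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
    deploy:
      resources:
        limits:
          cpus: "8"
          memory: 24G
```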
+
:::moniker-end

:::moniker range="form-recog-2.1.0"
The following host machine requirements are applicable to **train and analyze**
:::moniker range="form-recog-3.0.0"
-### Read
+### [Read](#tab/read)
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {{FORM_RECOGNIZER_KEY} values for your Layout container instance.
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Read container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Read container instance.
```yml version: "3.9"
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
-### Layout
+### [General Document](#tab/general-document)
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {{FORM_RECOGNIZER_KEY} values for your Layout container instance.
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your General Document and Layout container instances.
+
+```yml
+version: "3.9"
+
+services:
+ azure-cognitive-service-document:
+ container_name: azure-cognitive-service-document
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/document-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+ ports:
+ - "5000:5050"
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+Given the resources on the machine, the General Document container might take some time to start up.
+
+### [Layout](#tab/layout)
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
```yml version: "3.9"
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
+### [Invoice](#tab/invoice)
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Invoice container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout container instances.
+
+```yml
+version: "3.9"
+
+services:
+ azure-cognitive-service-invoice:
+ container_name: azure-cognitive-service-invoice
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+ ports:
+ - "5000:5050"
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Receipt](#tab/receipt)
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Receipt container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt and Read container instances.
+
+```yml
+version: "3.9"
+
+ azure-cognitive-service-receipt:
+ container_name: azure-cognitive-service-receipt
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [ID Document](#tab/id-document)
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer ID Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID Document and Read container instances.
+
+```yml
+version: "3.9"
+
+services:
+  azure-cognitive-service-id-document:
+ container_name: azure-cognitive-service-id-document
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5050"
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Business Card](#tab/business-card)
+
+The Business Card container is not supported by Form Recognizer v3.0.
+
+### [Custom](#tab/custom)
+
+In addition to the [prerequisites](#prerequisites), complete the following steps to process a custom document:
+
+#### Create a folder to store the following files
+
+* [**.env**](#create-an-environment-file)
+* [**nginx.conf**](#create-an-nginx-file)
+* [**docker-compose.yml**](#create-a-docker-compose-file)
+
+#### Create a folder to store your input data
+
+* Name this folder **files**.
+* We reference the file path for this folder as **{FILE_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is named **files** and is located in the same folder as the docker-compose file, the .env file entry is `FILE_MOUNT_PATH="./files"`.
+
+#### Create a folder to store the logs written by the Form Recognizer service on your local machine
+
+* Name this folder **output**.
+* We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is named **output** and is located in the same folder as the docker-compose file, the .env file entry is `OUTPUT_MOUNT_PATH="./output"`.
+
+#### Create a folder for storing internal processing shared between the containers
+
+* Name this folder **shared**.
+* We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is named **shared** and is located in the same folder as the docker-compose file, the .env file entry is `SHARED_MOUNT_PATH="./shared"`.
+
+#### Create a folder for the Studio to store project related information
+
+* Name this folder **db**.
+* We reference the file path for this folder as **{DB_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is named **db** and is located in the same folder as the docker-compose file, the .env file entry is `DB_MOUNT_PATH="./db"`.
+
+#### Create an environment file
+
+ 1. Name this file **.env**.
+
+ 1. Declare the following environment variables:
+
+ ```text
+SHARED_MOUNT_PATH="./shared"
+OUTPUT_MOUNT_PATH="./output"
+FILE_MOUNT_PATH="./files"
+DB_MOUNT_PATH="./db"
+FORM_RECOGNIZER_ENDPOINT_URI="YourFormRecognizerEndpoint"
+FORM_RECOGNIZER_KEY="YourFormRecognizerKey"
+NGINX_CONF_FILE="./nginx.conf"
+ ```
+
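Before moving on, it can help to confirm that the variables in **.env** resolve the way you expect. A minimal parser sketch for that sanity check (an assumption-laden illustration: it handles only simple `KEY="value"` lines, not `export` statements or multiline values):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY="value" lines; skip blanks, comments, and malformed lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'FILE_MOUNT_PATH="./files"\nDB_MOUNT_PATH="./db"'
print(parse_env(sample)["FILE_MOUNT_PATH"])  # ./files
```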
+#### Create an **nginx** file
+
+ 1. Name this file **nginx.conf**.
+
+ 1. Enter the following configuration:
+
+```text
+worker_processes 1;
+
+events { worker_connections 1024; }
+
+http {
+
+ sendfile on;
+ client_max_body_size 90M;
+ upstream docker-custom {
+ server azure-cognitive-service-custom-template:5000;
+ }
+
+ upstream docker-layout {
+ server azure-cognitive-service-layout:5000;
+ }
+
+ server {
+ listen 5000;
+
+ location = / {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+ proxy_pass http://docker-custom/;
+ }
+
+ location /status {
+ proxy_pass http://docker-custom/status;
+ }
+
+ location /test {
+ return 200 $scheme://$host:$server_port;
+ }
+
+ location /ready {
+ proxy_pass http://docker-custom/ready;
+ }
+
+ location /swagger {
+ proxy_pass http://docker-custom/swagger;
+ }
+
+ location /formrecognizer/documentModels/prebuilt-layout {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+ if ($request_method = 'OPTIONS') {
+ return 200;
+ }
+
+ proxy_pass http://docker-layout/formrecognizer/documentModels/prebuilt-layout;
+ }
+
+ location /formrecognizer/documentModels {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+ if ($request_method = 'OPTIONS') {
+ return 200;
+ }
+
+ proxy_pass http://docker-custom/formrecognizer/documentModels;
+ }
+
+ location /formrecognizer/operations {
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+            if ($request_method = 'OPTIONS') {
+ return 200;
+ }
+
+ proxy_pass http://docker-custom/formrecognizer/operations;
+ }
+ }
+}
+
+```
+
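The **nginx.conf** above routes each request path to one of two upstreams. A sketch of that path-to-upstream mapping, useful as a quick mental model when debugging a 404 (a simplification: nginx's exact-match `= /` and `return`-only locations are omitted, and real nginx matching has more rules than longest prefix):

```python
# Prefix routes mirroring nginx.conf; longest matching prefix wins.
ROUTES = {
    "/status": "docker-custom",
    "/ready": "docker-custom",
    "/swagger": "docker-custom",
    "/formrecognizer/documentModels/prebuilt-layout": "docker-layout",
    "/formrecognizer/documentModels": "docker-custom",
    "/formrecognizer/operations": "docker-custom",
}

def upstream_for(path: str) -> str:
    """Return the upstream the nginx config would proxy this path to."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)] if matches else "docker-custom"

print(upstream_for("/formrecognizer/documentModels/prebuilt-layout"))  # docker-layout
```

Note how the more specific `prebuilt-layout` prefix wins over the general `documentModels` prefix, which is what sends layout analysis to the Layout container while model management stays with the Custom Template container.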
+#### Create a **docker compose** file
+
+1. Name this file **docker-compose.yml**.
+
+2. The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout, Studio, and Custom Template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
+
+ ```yml
+version: '3.3'
+
+services:
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-template:
+ container_name: azure-cognitive-service-custom-template
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
+ restart: always
+ depends_on:
+ - layout
+ environment:
+ AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ studio:
+ container_name: form-recognizer-studio
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+ environment:
+ ONPREM_LOCALFILE_BASEPATH: /onprem_folder
+ STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
+ volumes:
+ - type: bind
+ source: ${FILE_MOUNT_PATH} # path to your local folder
+ target: /onprem_folder
+ - type: bind
+ source: ${DB_MOUNT_PATH} # path to your local folder
+ target: /onprem_db
+ ports:
+ - "5001:5001"
+ user: "1000:1000" # echo $(id -u):$(id -g)
+
+ ```
+
+The Custom Template container can use Azure Storage queues or in-memory queues. The `Storage:ObjectStore:AzureBlob:ConnectionString` and `queue:azure:connectionstring` environment variables only need to be set if you're using Azure Storage queues. When running locally, delete these variables.
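Compose substitutes `${VAR}` references in the YAML from the shell environment (and the **.env** file) before starting containers. A rough sketch of that substitution, handy when troubleshooting an unresolved variable (an illustration only: real Compose also supports defaults like `${VAR:-fallback}`, which this ignores):

```python
import re

def interpolate(text: str, env: dict) -> str:
    """Replace ${VAR} references with values from env; leave unknown ones untouched."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), text)

line = "source: ${SHARED_MOUNT_PATH}"
print(interpolate(line, {"SHARED_MOUNT_PATH": "./shared"}))  # source: ./shared
```

If a variable stays in its `${...}` form after `docker-compose up`, it usually means the **.env** file wasn't loaded from the working directory.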
+
+### Ensure the service is running
+
+To ensure that the service is up and running, run these commands in an Ubuntu shell.
+
+```bash
+cd <folder containing the docker-compose file>
+
+source .env
+
+docker-compose up
+```
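Startup can take a while. Rather than watching the logs, you can poll a readiness endpoint until it answers. A sketch with the probe injected as a callable so the retry logic is testable without a running container (the `/ready` path follows the nginx configuration above; attempt counts and delays are arbitrary assumptions):

```python
import time

def wait_until_ready(probe, attempts: int = 30, delay: float = 2.0) -> bool:
    """Call probe() until it returns True or the attempts run out."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Example probe: in practice, replace with an HTTP GET of http://localhost:5000/ready
# that returns True on a 200 response.
responses = iter([False, False, True])
print(wait_until_ready(lambda: next(responses), delay=0))  # True
```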
+
+Custom Template containers require a few different configurations and support other optional configurations:
+
+| Setting | Required | Description |
+|--||-|
+|EULA | Yes | License acceptance. Example: `Eula=accept`|
+|Billing | Yes | Billing endpoint URI of the Form Recognizer resource |
+|ApiKey | Yes | The endpoint key of the Form Recognizer resource |
+| Queue:Azure:ConnectionString | No| Azure Queue connection string |
+|Storage:ObjectStore:AzureBlob:ConnectionString | No| Azure Blob connection string |
+| HealthCheck:MemoryUpperboundInMB | No | Memory threshold above which the liveness check reports the container as unhealthy. Default: same as the recommended memory |
+| StorageTimeToLiveInMinutes | No| TTL duration after which all intermediate and final files are removed. Default: two days. The TTL can be set between five minutes and seven days |
+| Task:MaxRunningTimeSpanInMinutes | No| Maximum running time before a request is treated as timed out. Default: 60 minutes |
+| HTTP_PROXY_BYPASS_URLS | No | Specify URLs for bypassing the proxy. Example: `HTTP_PROXY_BYPASS_URLS=abc.com,xyz.com` |
+| AzureCognitiveServiceReadHost (Receipt, IdDocument containers only)| Yes | Specify the Read container URI. Example: `AzureCognitiveServiceReadHost=http://onprem-frread:5000` |
+| AzureCognitiveServiceLayoutHost (Document, Invoice containers only) | Yes | Specify the Layout container URI. Example: `AzureCognitiveServiceLayoutHost=http://onprem-frlayout:5000` |
+
+#### Use the Form Recognizer Studio to train a model
+
+* Gather a set of at least five forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*).
+
+* Once you confirm that the containers are running, open a browser and navigate to the endpoint where you have the containers deployed. If this deployment is on your local machine, the endpoint is [http://localhost:5001](http://localhost:5001).
+* Select the custom extraction model tile.
+* Select the `Create project` option.
+* Provide a project name and, optionally, a description.
+* On the "configure your resource" step, provide the endpoint to your custom template model. If you deployed the containers on your local machine, use the URL [http://localhost:5000](http://localhost:5000).
+* Provide a subfolder for where your training data is located within the files folder.
+* Finally, create the project.
+
+You should now have a project created, ready for labeling. Upload your training data and get started labeling. If you're new to labeling, see [Build and train a custom model](../how-to-guides/build-a-custom-model.md).
+
+#### Using the API to train
+
+If you plan to call the APIs directly to train a model, the custom template model train API requires a base64-encoded zip file containing the contents of your labeling project. You can omit the PDF or image files and submit only the JSON files.
+
+Once your dataset is labeled and the *.ocr.json, *.labels.json, and fields.json files are added to a zip file, use the following PowerShell commands to generate the base64-encoded string.
+
+```powershell
+$bytes = [System.IO.File]::ReadAllBytes("<your_zip_file>.zip")
+$b64String = [System.Convert]::ToBase64String($bytes, [System.Base64FormattingOptions]::None)
+
+```
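If PowerShell isn't handy, the same base64 string can be produced in Python. A self-contained sketch (the zip is built in memory here so the snippet runs on its own; in practice, point it at your real labeling project zip instead):

```python
import base64
import io
import zipfile

# Build a small zip in memory; replace this with reading your existing zip file,
# e.g. open("<your_zip_file>.zip", "rb").read().
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("fields.json", '{"fields": []}')

b64_string = base64.b64encode(buf.getvalue()).decode("ascii")
print(b64_string[:4])  # UEsD -- zip files start with "PK\x03\x04" encoded
```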
+
+Use the build model API to post the request.
+
+```http
+POST http://localhost:5000/formrecognizer/documentModels:build?api-version=2022-08-31
+
+{
+ "modelId": "mymodel",
+ "description": "test model",
+ "buildMode": "template",
+
+ "base64Source": "<Your base64 encoded string>",
+ "tags": {
+ "additionalProp1": "string",
+ "additionalProp2": "string",
+ "additionalProp3": "string"
+ }
+}
+```
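The request body above is plain JSON, so it's easy to assemble programmatically before posting. A sketch that builds the payload (the model ID, description, and tags are examples; sending the request is left to your HTTP client of choice):

```python
import json

def build_request_body(model_id, b64_source, tags=None):
    """Assemble the JSON body for the documentModels:build request."""
    body = {
        "modelId": model_id,
        "description": "test model",
        "buildMode": "template",
        "base64Source": b64_source,
        "tags": tags or {},
    }
    return json.dumps(body)

payload = build_request_body("mymodel", "<Your base64 encoded string>")
print(json.loads(payload)["buildMode"])  # template
```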
+ :::moniker-end :::moniker range="form-recog-2.1.0"
+### [Read](#tab/read)
+
+The Read container is not supported by Form Recognizer v2.1.
+
+### [General Document](#tab/general-document)
+
+The General Document container is not supported by Form Recognizer v2.1.
+ ### [Layout](#tab/layout) The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
-### [Business Card](#tab/business-card)
+### [Invoice](#tab/invoice)
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} for your Computer Vision Read container.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
```yml version: "3.9"
- azure-cognitive-service-businesscard:
- container_name: azure-cognitive-service-businesscard
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard
+ azure-cognitive-service-invoice:
+ container_name: azure-cognitive-service-invoice
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
ports: - "5000:5050" networks: - ocrvnet
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
environment: - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apiKey={COMPUTER_VISION_KEY}
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
networks: - ocrvnet
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
-### [ID Document](#tab/id-document)
+### [Receipt](#tab/receipt)
-The following code sample is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3.9"
- azure-cognitive-service-id:
- container_name: azure-cognitive-service-id
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document
+ azure-cognitive-service-receipt:
+ container_name: azure-cognitive-service-receipt
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
-### [Invoice](#tab/invoice)
+### [ID Document](#tab/id-document)
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID Document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3.9"
- azure-cognitive-service-invoice:
- container_name: azure-cognitive-service-invoice
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice
+ azure-cognitive-service-id:
+ container_name: azure-cognitive-service-id
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
ports: - "5000:5050" networks: - ocrvnet
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
environment: - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apiKey={COMPUTER_VISION_KEY}
networks: - ocrvnet
Now, you can start the service with the [**docker compose**](https://docs.docker
docker-compose up ```
-### [Receipt](#tab/receipt)
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
+### [Business Card](#tab/business-card)
+
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml version: "3.9"
- azure-cognitive-service-receipt:
- container_name: azure-cognitive-service-receipt
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt
+ azure-cognitive-service-businesscard:
+ container_name: azure-cognitive-service-businesscard
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
$docker-compose up
* **Save** this connection and use it to label your requests. * You can choose to analyze the file of your choice against the trained model. ## The Sample Labeling tool and Azure Container Instances (ACI) To learn how to use the Sample Labeling tool with an Azure Container Instance, *see* [Deploy the Sample Labeling tool](../deploy-label-tool.md#deploy-with-azure-container-instances-aci). :::moniker-end ## Validate that the service is running
There are several ways to validate that the container is running:
|**http://<span></span>localhost:5000/swagger** | The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. | ## Stop the containers
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The following table lists the supported languages for print text by the most rec
### Print text in preview (API version 2023-02-28-preview)
-Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
+Use the parameter `api-version=2023-02-28-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
|Language| Code (optional) |Language| Code (optional) | |:--|:-:|:--|:-:|
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
This article contains both a quick reference and detailed description of Azure F
| Adjustable | No | No | | **Max number of pages (Training) * Neural** | 50,000 | 50,000 (default value) | | Adjustable | No | No |
-| **Custom neural model train** | 10 per month | 10 per month |
+| **Custom neural model train** | 10 per month | 20 per month |
| Adjustable | No |Yes <sup>3</sup>| | **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) | | Adjustable | No | No |
This article contains both a quick reference and detailed description of Azure F
> <sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).</br> > <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice), and [adjustment instructions](#create-and-submit-support-request).</br>
-> <sup>3</sup> Open a support request to increase the monthly training limit.
+> <sup>3</sup> Neural models training count is reset every calendar month. Open a support request to increase the monthly training limit.
::: moniker-end ::: moniker range="form-recog-3.0.0" > <sup>4</sup> This limit applies to all documents found in your training dataset folder prior to any labeling-related updates.
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
# Hotpatch for new virtual machines
-> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!NOTE]
-> Hotpatch is supported on _Windows Server 2022 Datacenter: Azure Edition_.
- Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits: * Lower workload impact with fewer reboots * Faster deployment of updates as the packages are smaller, install faster, and have easier patch orchestration with Azure Update Manager * Better protection, as the hotpatch update packages are scoped to Windows security updates that install faster without rebooting
+## Supported platforms
+
+> [!IMPORTANT]
+> Hotpatch is currently in PREVIEW for certain platforms in the following table.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+| Operating system | Azure | Azure Stack HCI |
+| -- | -- | -- |
+| Windows Server 2022 Datacenter: Azure Edition Server Core | Generally available (GA) | Public preview |
+| Windows Server 2022 Datacenter: Azure Edition with Desktop Experience | Public preview | Public preview |
 ## How hotpatch works Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches that build on that baseline are periodically released (for example, on the second Tuesday of the month). Hotpatches contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
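As a sketch of how hotpatch is turned on for a supported VM with Azure PowerShell: the `Set-AzVMOperatingSystem` cmdlet in Az.Compute accepts an `-EnableHotpatching` switch together with the `AutomaticByPlatform` patch mode. The `$vmConfig`, `$vmName`, and `$cred` variables below are assumed to already exist in your session; this is illustrative, not the article's exact procedure.

```azurepowershell-interactive
# Sketch: enable hotpatch on a Windows Server 2022 Datacenter: Azure Edition VM.
# Assumes $vmConfig (a VM configuration object), $vmName, and $cred exist already.
$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig `
    -Windows -ComputerName $vmName -Credential $cred `
    -PatchMode "AutomaticByPlatform" -EnableHotpatching
```

Hotpatch requires the platform-orchestrated patch mode; the switch is rejected on images other than the supported Azure Edition SKUs.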
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server-services-overview.md
Azure Automanage for Windows Server brings new capabilities specifically to _Win
- SMB over QUIC - Extended network for Azure
-> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images: - Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
Hotpatch is available on the following images:
Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hot patching. To learn more, see [Hotpatch](automanage-hotpatch.md).
+#### Supported platforms
+
+> [!IMPORTANT]
+> Hotpatch is currently in PREVIEW for certain platforms in the following table.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+| Operating system | Azure | Azure Stack HCI |
+| -- | -- | -- |
+| Windows Server 2022 Datacenter: Azure Edition (Core) | Generally available (GA) | Public preview |
+| Windows Server 2022 Datacenter: Azure Edition (Desktop Experience) | Public preview | Public preview |
+ ### SMB over QUIC SMB over QUIC is available on the following images:
automanage Tutorial Create Assignment Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/tutorial-create-assignment-python.md
In this tutorial, you'll create a resource group and a virtual machine. You'll t
## Prerequisites - [Python](https://www.python.org/downloads/)-- [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## Create resources
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-alert-triggered-runbook.md
You can use [Azure Monitor](../azure-monitor/overview.md) to monitor base-level
* An Azure Automation account with at least one user-assigned managed identity. For more information, see [Using a user-assigned managed identity for an Azure Automation account](./add-user-assigned-identity.md). * Az modules: `Az.Accounts` and `Az.Compute` imported into the Automation account. For more information, see [Import Az modules](./shared-resources/modules.md#import-az-modules). * An [Azure virtual machine](../virtual-machines/windows/quick-create-powershell.md).
-* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
* A general familiarity with [Automation runbooks](./manage-runbooks.md). ## Alert types
automation Automation Deploy Template Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-deploy-template-runbook.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* [Azure Storage account](../storage/common/storage-account-create.md) in which to store the Resource Manager template.
-* Azure PowerShell installed on a local machine. See [Install the Azure PowerShell Module](/powershell/azure/install-az-ps) for information about how to get Azure PowerShell. You'll also need module [Az.ManagedServiceIdentity](/powershell/module/az.managedserviceidentity). `Az.ManagedServiceIdentity` is a preview module and not installed as part of the Az module. To install it, run `Install-Module -Name Az.ManagedServiceIdentity`
+* Azure PowerShell installed on a local machine. See [Install the Azure PowerShell Module](/powershell/azure/install-azure-powershell) for information about how to get Azure PowerShell. You'll also need module [Az.ManagedServiceIdentity](/powershell/module/az.managedserviceidentity). `Az.ManagedServiceIdentity` is a preview module and not installed as part of the Az module. To install it, run `Install-Module -Name Az.ManagedServiceIdentity`
## Assign permissions to managed identities
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
Before enabling customer-managed keys for an Automation account, you must ensure
- An [Azure Key Vault](../key-vault/general/basic-concepts.md) with the **Soft Delete** and **Do Not Purge** properties enabled. These properties are required to allow for recovery of keys if there's accidental deletion. - Only RSA keys are supported with Azure Automation encryption. For more information about keys, see [About Azure Key Vault keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md). - The Automation account and the key vault can be in different subscriptions but need to be in the same Azure Active Directory tenant.-- When using PowerShell, verify the [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) is installed. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+- When using PowerShell, verify the [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) is installed. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
## Generate and assign a new system-assigned identity for an Automation account
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-send-email.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* An Azure Automation account with at least one user-assigned managed identity. For more information, see [Enable managed identities](./quickstarts/enable-managed-identity.md). * Az modules: `Az.Accounts` and `Az.KeyVault` imported into the Automation account. For more information, see [Import Az modules](./shared-resources/modules.md#import-az-modules).
-* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
## Create an Azure Key Vault
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* An Azure Automation account with at least one user-assigned managed identity. For more information, see [Enable managed identity](../quickstarts/enable-managed-identity.md). * Az modules: `Az.Accounts` and `Az.Compute` imported into the Automation account. For more information, see [Import Az modules](../shared-resources/modules.md#import-az-modules). * Two or more [Azure virtual machines](../../virtual-machines/windows/quick-create-powershell.md). Since you stop and start these machines, they shouldn't be production VMs.
-* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
## Assign permissions to managed identities
automation Powershell Runbook Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/powershell-runbook-managed-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* An Azure Automation account with at least one user-assigned managed identity. For more information, see [Using a user-assigned managed identity for an Azure Automation account](../add-user-assigned-identity.md). * Az modules: `Az.Accounts`, `Az.Automation`, `Az.ManagedServiceIdentity`, and `Az.Compute` imported into the Automation account. For more information, see [Import Az modules](../shared-resources/modules.md#import-az-modules).
-* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps). `Az.ManagedServiceIdentity` is a preview module and not installed as part of the Az module. To install it, run `Install-Module -Name Az.ManagedServiceIdentity`.
+* The [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your machine. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell). `Az.ManagedServiceIdentity` is a preview module and not installed as part of the Az module. To install it, run `Install-Module -Name Az.ManagedServiceIdentity`.
* An [Azure virtual machine](../../virtual-machines/windows/quick-create-powershell.md). Since you stop and start this machine, it shouldn't be a production VM. * A general familiarity with [Automation runbooks](../manage-runbooks.md).
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Before you begin, review the [conceptual overview of the cluster connect feature
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps).
+- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell).
- An existing Azure Arc-enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Review the [prerequisites for connecting a cluster](quickstart-connect-cluster.m
Make sure you [have the latest version installed](/cli/azure/install-azure-cli).
-If you connected your cluster by using Azure PowerShell, make sure you are [running the latest version](/powershell/azure/install-az-ps).
+If you connected your cluster by using Azure PowerShell, make sure you are [running the latest version](/powershell/azure/install-azure-powershell).
### Is the `connectedk8s` extension the latest version?
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
In addition to the prerequisites below, be sure to meet all [network requirement
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md). * An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc.
-* [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps)
+* [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell)
* The **Az.ConnectedKubernetes** PowerShell module, installed by running the following command: ```azurepowershell-interactive
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md
For Azure CLI:
For Azure PowerShell: -- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps).
+- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell).
- Install the **Az.ConnectedKubernetes** PowerShell module: ```azurepowershell-interactive
azure-arc Onboard Group Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-powershell.md
The Group Policy Object, which is used to onboard Azure Arc-enabled servers, req
* Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure landing zone. * Make a note of the Service Principal Secret; you'll need this value later.
-1. Download and unzip the folder **ArcEnabledServersGroupPolicy_v1.0.1** from [https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/download/1.0.2/ArcEnabledServersGroupPolicy_v1.0.2.zip](https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/download/1.0.2/ArcEnabledServersGroupPolicy_v1.0.2.zip). This folder contains the ArcGPO project structure with the scripts `EnableAzureArc.ps1`, `DeployGPO.ps1`, and `AzureArcDeployment.psm1`. These assets will be used for onboarding the machine to Azure Arc-enabled servers.
+1. Download and unzip the folder **ArcEnabledServersGroupPolicy_vX.X.X** from [https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/latest/](https://github.com/Azure/ArcEnabledServersGroupPolicy/releases/latest/). This folder contains the ArcGPO project structure with the scripts `EnableAzureArc.ps1`, `DeployGPO.ps1`, and `AzureArcDeployment.psm1`. These assets will be used for onboarding the machine to Azure Arc-enabled servers.
1. Download the latest version of the [Azure Connected Machine agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
The Azure Arc service in the Azure portal provides a streamlined way to create a
### Azure PowerShell
-You can use [Azure PowerShell](/powershell/azure/install-az-ps) to create a service principal with the [New-AzADServicePrincipal](/powershell/module/Az.Resources/New-AzADServicePrincipal) cmdlet.
+You can use [Azure PowerShell](/powershell/azure/install-azure-powershell) to create a service principal with the [New-AzADServicePrincipal](/powershell/module/Az.Resources/New-AzADServicePrincipal) cmdlet.
1. Check the context of your Azure PowerShell session to ensure you're working in the correct subscription. Use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) if you need to change the subscription.
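The service principal steps above can be sketched as follows. The display name and resource group are placeholder values, and the property that carries the secret (`PasswordCredentials.SecretText` here) varies slightly across Az.Resources versions:

```azurepowershell-interactive
# Create a service principal for onboarding (display name is a placeholder).
$sp = New-AzADServicePrincipal -DisplayName "Arc-Onboarding-SP"

# Limit the onboarding role to a target resource group (placeholder name).
New-AzRoleAssignment -ApplicationId $sp.AppId `
    -RoleDefinitionName "Azure Connected Machine Onboarding" `
    -ResourceGroupName "Arc-Servers-RG"

# Record the secret now; it can't be retrieved later.
$sp.PasswordCredentials.SecretText
```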
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Title: Azure Arc-enabled servers Overview description: Learn how to use Azure Arc-enabled servers to manage servers hosted outside of Azure like an Azure resource. Previously updated : 02/01/2023 Last updated : 05/11/2023
Azure Arc-enabled servers lets you manage Windows and Linux physical servers and
When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group.
-To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to:
+To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to:
* Proactively monitor the OS and workloads running on the machine * Manage it using Automation runbooks or solutions like Update Management
Watch this video to learn more about Azure monitoring, security, and update serv
For a list of supported regions with Azure Arc-enabled servers, see the [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page.
-In most cases, the location you select when you create the installation script should be the Azure region geographically closest to your machine's location. Data at rest is stored within the Azure geography containing the region you specify, which may also affect your choice of region if you have data residency requirements. If the Azure region your machine connects to is affected by an outage, the connected machine is not affected, but management operations using Azure may be unable to complete. If there is a regional outage, and if you have multiple locations that support a geographically redundant service, it is best to connect the machines in each location to a different Azure region.
+In most cases, the location you select when you create the installation script should be the Azure region geographically closest to your machine's location. Data at rest is stored within the Azure geography containing the region you specify, which may also affect your choice of region if you have data residency requirements. If the Azure region your machine connects to has an outage, the connected machine isn't affected, but management operations using Azure may be unable to complete. If there's a regional outage, and if you have multiple locations that support a geographically redundant service, it's best to connect the machines in each location to a different Azure region.
[Instance metadata information about the connected machine](agent-overview.md#instance-metadata) is collected and stored in the region where the Azure Arc machine resource is configured, including the following: * Operating system name and version * Computer name
-* Computer fully qualified domain name (FQDN)
+* Computer's fully qualified domain name (FQDN)
* Connected Machine agent version For example, if the machine is registered with Azure Arc in the East US region, the metadata is stored in the US region.
The status for a connected machine can be viewed in the Azure portal under **Azu
 The Connected Machine agent sends a regular heartbeat message to the service every five minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline, and its status automatically changes to **Disconnected** within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status automatically changes back to **Connected**.
-If a machine remains disconnected for 45 days, its status may change to **Expired**. An expired machine can no longer connect to Azure and requires a server administrator to disconnect and then reconnect it to Azure to continue managing it with Azure Arc. The exact date upon which a machine will expire is determined by the expiration date of the managed identity's credential, which is valid up to 90 days and renewed every 45 days.
+If a machine remains disconnected for 45 days, its status may change to **Expired**. An expired machine can no longer connect to Azure and requires a server administrator to disconnect and then reconnect it to Azure to continue managing it with Azure Arc. The exact date upon which a machine expires is determined by the expiration date of the managed identity's credential, which is valid up to 90 days and renewed every 45 days.
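Reconnecting an expired machine happens on the server itself with the `azcmagent` CLI, roughly as below. The resource group, location, and ID variables are placeholders:

```azurepowershell-interactive
# Run on the expired machine. Remove the stale local configuration first,
# then register the machine again (values are placeholders).
azcmagent disconnect --force-local-only
azcmagent connect --resource-group "Arc-Servers-RG" --tenant-id $tenantId `
    --location "eastus" --subscription-id $subId
```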
## Service limits
-There is no limit to how many Arc-enabled servers and VM extensions you can deploy in a resource group or subscription. The standard 800 resource limit per resource group applies to the Azure Arc Private Link Scope resource type.
+There's no limit to how many Arc-enabled servers and VM extensions you can deploy in a resource group or subscription. The standard 800 resource limit per resource group applies to the Azure Arc Private Link Scope resource type.
To learn more about resource type limits, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article.
To learn more about resource type limits, see the [Resource instance limit](../.
 Azure Arc-enabled servers stores customer data. By default, customer data stays within the region in which the customer deploys the service instance. For regions with data residency requirements, customer data is always kept within the same region.
+## Disaster Recovery
+
+There are no customer-enabled disaster recovery options for Arc-enabled servers. In the event of an outage in an Azure region, the system fails over to another region in the same [Azure geography](https://azure.microsoft.com/explore/global-infrastructure/geographies/) (if one exists). While this failover procedure is automatic, it does take some time. The Connected Machine agent is disconnected during this period and shows a status of **Disconnected** until the failover is complete. The system fails back to its original region once the outage is resolved.
+
+An outage of Azure Arc won't affect the customer workload itself; only management of the applicable servers via Arc will be impaired.
+ ## Next steps * Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review the [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-cache-for-redis Cache Event Grid Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-powershell.md
Typically, you send events to an endpoint that processes the event data and take
## Setup
-This quickstart requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+This quickstart requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
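Checking whether you're on the latest Azure PowerShell, and updating if not, can be done with the standard PowerShellGet cmdlets:

```azurepowershell-interactive
# Show the installed Az module version, then update it from the PowerShell Gallery.
Get-InstalledModule -Name Az | Select-Object -Property Name, Version
Update-Module -Name Az -Force
```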
## Sign in to Azure
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
Before you begin, you must have the following:
+ [Azure CLI](/cli/azure/install-azure-cli) [version 2.4](/cli/azure/release-notes-azure-cli#april-21-2020) or later.
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later.
You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Before you begin, you must have the following prerequisites:
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later.
::: zone pivot="nodejs-model-v3" + [Node.js](https://nodejs.org/) version 18 or 16.
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md
Before you begin, you must have the following:
+ One of the following tools for creating Azure resources:
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 9.4.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 9.4.0 or later.
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Before you begin, you must have the following requirements in place:
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later.
+ [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version). ::: zone pivot="python-mode-decorators"
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Before you begin, you must have the following prerequisites:
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 5.9.0 or later.
::: zone pivot="nodejs-model-v3" + [Node.js](https://nodejs.org/) version 18 or 16.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Here's an example that uses HTML:
### Deploy using PowerShell
-The following PowerShell commands create a resource group and deploy a Bicep file/ARM template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-az-ps) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
+The following PowerShell commands create a resource group and deploy a Bicep file/ARM template that creates a function app with its required resources. To run locally, you must have [Azure PowerShell](/powershell/azure/install-azure-powershell) installed. Run [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount) to sign in.
# [Bicep](#tab/bicep)
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Developing functions on your local computer and publishing them to Azure using C
The specific prerequisites for Core Tools depend on the features you plan to use:
-**[Publish](#publish)**: Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) for authenticating with your Azure account. This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
+**[Publish](#publish)**: Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) for authenticating with your Azure account. This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
**[Install extensions](#install-extensions)**: To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed. The .NET SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions.
The Azure Functions Core Tools supports two types of deployment:
### Before you publish >[!IMPORTANT]
->You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) installed locally to be able to publish to Azure from Core Tools.
+>You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) installed locally to be able to publish to Azure from Core Tools.
A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
You're responsible for designing and deploying your applications to meet [US exp
Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For more information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you'll mostly have the same experience as in global Azure. > [!NOTE]
-> This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az). For Az module installation instructions, see [**Install the Azure Az PowerShell module**](/powershell/azure/install-az-ps).
+> This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az). For Az module installation instructions, see [**Install the Azure Az PowerShell module**](/powershell/azure/install-azure-powershell).
You can use AzureCLI or PowerShell to obtain Azure Government endpoints for services you provisioned:
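On the PowerShell side, the `Get-AzEnvironment` cmdlet returns the endpoint URIs for a named cloud:

```azurepowershell-interactive
# List the service and authentication endpoints for the Azure Government cloud.
Get-AzEnvironment -Name AzureUSGovernment
```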
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
Before starting this how-to guide, you must complete the following prerequisites
- [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization) - [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project)-- Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
+- Install and set up [Azure PowerShell](/powershell/azure/install-azure-powershell)
If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
This article provides developer guidance for using Computer Vision, Face API, Te
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] -- Install and Configure [Azure PowerShell](/powershell/azure/install-az-ps)
+- Install and Configure [Azure PowerShell](/powershell/azure/install-azure-powershell)
- Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md) ## Part 1: Provision Cognitive Services accounts
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-ps.md
This quickstart shows how to use PowerShell to access and start managing resourc
## Install PowerShell
-Install PowerShell on your local machine. For more information, including how to check your PowerShell version, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+Install PowerShell on your local machine. For more information, including how to check your PowerShell version, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
## Specifying Azure Government as the *environment* to connect to
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available | | Data service | 50 | 50 | Not Available |
-| Elevation service ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | 50 | 50 | Not Available |
| Geolocation service | 50 | 50 | 50 |
-| Render service - Contour tiles, Digital Elevation Model (DEM) tiles ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) and Customer tiles | 50 | 50 | Not Available |
| Render service - Traffic tiles and Static maps | 50 | 50 | 50 | | Render service - Road tiles | 500 | 500 | 50 | | Render service - Satellite tiles | 250 | 250 | Not Available |
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Once the maven project is created, there should be a `pom.xml` file with basic i
  <artifactId>azure-maps-timezone</artifactId>    <version>1.0.0-beta.1</version>  </dependency> 
-<dependency> 
-  <groupId>com.azure</groupId> 
-  <artifactId>azure-maps-elevation</artifactId> 
-  <version>1.0.0-beta.1</version> 
-</dependency> 
``` Run `mvn clean install` on your project, then create a Java file named `demo.java` and import what you need from Azure Maps into the file:
New-Item demo.java
| [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] | | [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] | | [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
-| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
## Create and authenticate a MapsSearchClient
public class Demo{
[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account <!-- Java SDK Developers Guide -->
-[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation
-[java elevation readme]: https://github.com/Azure/azure-sdk-for-jav
-[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples
[java geolocation readme]: https://github.com/Azure/azure-sdk-for-jav [java geolocation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-geolocation/src/samples/java/com/azure/maps/geolocation/samples [java geolocation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-geolocation
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
- Title: Request elevation data using the Azure Maps Elevation service
-description: Learn how to request elevation data using the Azure Maps Elevation service.
-- Previously updated : 10/28/2021-----
-# Request elevation data using the Azure Maps Elevation service
-
-> [!IMPORTANT]
-> The Azure Maps Elevation services and Render V2 DEM tiles have been retired and will no longer be available or supported after May 5, 2023. No other Azure Maps API, services or tilesets are affected. For more information, see [Elevation Services Retirement](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023).
-
-The Azure Maps [Elevation service](/rest/api/maps/elevation) provides APIs to query elevation data anywhere on the earth's surface. You can request sampled elevation data along paths, within a defined bounding box, or at specific coordinates. Also, you can use the [Render V2 - Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format. The tiles are delivered in GeoTIFF raster format. This article describes how to use the Azure Maps Elevation service and the Get Map Tile API to request elevation data. The elevation data can be requested in both GeoJSON and GeoTIFF formats.
-
-## Prerequisites
-
-* An [Azure Maps account]
-* A [subscription key]
-
-For more information about authentication in Azure Maps, see [Manage Authentication in Azure Maps](how-to-manage-authentication.md).
-
-This article uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
-
-## Request elevation data in raster tile format
-
-To request elevation data in raster tile format, use the [Render V2-Get Map Tile API](/rest/api/maps/renderv2). If the tile can be found, the API returns the tile as a GeoTIFF. Otherwise, the API returns 0. All raster DEM tiles use the geoid (sea level) Earth mode. In this example, we'll request elevation data for Mt. Everest.
-
->[!TIP]
->To retrieve a tile at a specific area on the world map, find the correct tile at the appropriate zoom level. Also note that WorldDEM covers the entire global landmass but it doesn't cover oceans. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-
-To request elevation data in raster tile format using the Postman app:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request.
-
-4. On the **Builder** tab, select the **GET** HTTP method and then enter the following URL to request the raster tile.
-
- ```http
- https://atlas.microsoft.com/map/tile?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&tilesetId=microsoft.dem&zoom=13&x=6074&y=3432
- ```
-
- >[!Important]
- >For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-
-5. Select the **Send** button.
-
- You should receive the raster tile that contains the elevation data in GeoTIFF format. Each pixel within the raster tile raw data is of type `float`. The value of each pixel represents the elevation height in meters.
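The tip above says to find the correct tile at the appropriate zoom level. A minimal sketch of the standard Web Mercator tile math, assuming the usual slippy-map tiling scheme; at zoom 13 the result for Mt. Everest reproduces the `x=6074&y=3432` values in the request URL above:

```python
import math

def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Convert WGS84 degrees to Web Mercator tile x/y at the given zoom level."""
    n = 2 ** zoom
    # x grows west to east across [-180, 180)
    x = int((lon + 180.0) / 360.0 * n)
    # y grows north to south using the Mercator projection
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Mt. Everest is at roughly 27.9881 N, 86.9250 E
print(latlon_to_tile(27.9881, 86.9250, 13))  # → (6074, 3432)
```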
-
-## Request elevation data in GeoJSON format
-
-To request elevation data in GeoJSON format, use the Elevation service APIs. This section describes each of these APIs:
-
-* [Get Data for Points](/rest/api/maps/elevation/getdataforpoints)
-* [Post Data for Points](/rest/api/maps/elevation/postdataforpoints)
-* [Get Data for Polyline](/rest/api/maps/elevation/getdataforpolyline)
-* [Post Data for Polyline](/rest/api/maps/elevation/postdataforpolyline)
-* [Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox)
-
->[!IMPORTANT]
-> When no data can be returned, all APIs return **0**.
-
-### Request elevation data for points
-
-In this example, we'll use the [Get Data for Points API](/rest/api/maps/elevation/getdataforpoints) to request elevation data at Mt. Everest and Chamlang mountains. Then, we'll use the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to request elevation data using the same two points. Latitudes and longitudes in the URL are expected to be in WGS84 (World Geodetic System) decimal degree.
-
- >[!IMPORTANT]
- >The URL character length limit is 2048, so it's not possible to pass more than 100 coordinates as a pipeline-delimited string in a URL GET request. If you intend to pass more than 100 coordinates as a pipeline delimited string, use the Post Data for Points API.
-
-To create the request:
-
-1. In the Postman app, select **New** again.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request.
-
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```http
- https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&points=-73.998672,40.714728|150.644,-34.397
- ```
-
-5. Select the **Send** button. You'll receive the following JSON response:
-
- ```json
- {
- "data": [
- {
- "coordinate": {
- "latitude": 40.714728,
- "longitude": -73.998672
- },
- "elevationInMeter": 12.142355447638208
- },
- {
- "coordinate": {
- "latitude": -34.397,
- "longitude": 150.644
- },
- "elevationInMeter": 384.47041445517846
- }
- ]
- }
- ```
-
-6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```http
- https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0
- ```
-
-7. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
-
-8. In the **Body** field, provide the following coordinate point information:
-
- ```json
- [
- {
- "lon": -73.998672,
- "lat": 40.714728
- },
- {
- "lon": 150.644,
- "lat": -34.397
- }
- ]
- ```
-
-9. Select **Send**.
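The GET/POST split above (pipe-delimited coordinates in the URL versus a JSON array in the body, switching to POST past 100 coordinates) can be sketched as follows. `build_points_request` is a hypothetical helper for illustration, not part of any Azure Maps SDK:

```python
import json

def build_points_request(points: list[tuple[float, float]]) -> tuple[str, str]:
    """Return (HTTP method, payload) for an Elevation points request.

    Points are (lon, lat) pairs. Per the note above, a GET URL can carry at
    most 100 pipe-delimited coordinates; larger sets must go in a POST body.
    """
    if len(points) <= 100:
        # GET: points=lon1,lat1|lon2,lat2|...
        return "GET", "|".join(f"{lon},{lat}" for lon, lat in points)
    # POST: JSON array of {"lon": ..., "lat": ...} objects
    return "POST", json.dumps([{"lon": lon, "lat": lat} for lon, lat in points])

method, payload = build_points_request([(-73.998672, 40.714728), (150.644, -34.397)])
print(method, payload)  # GET -73.998672,40.714728|150.644,-34.397
```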
-
-### Request elevation data samples along a Polyline
-
-In this example, we'll use the [Get Data for Polyline API](/rest/api/maps/elevation/getdataforpolyline) to request five equally spaced samples of elevation data along a straight line between coordinates at Mt. Everest and Chamlang mountains. Both coordinates must be defined in longitude/latitude format. If you don't specify a value for the `samples` parameter, the number of samples defaults to 10. The maximum number of samples is 2,000.
-
-Then, we'll use the Get Data for Polyline API to request three equally spaced samples of elevation data along a path. We'll define the precise location for the samples by passing in three longitude/latitude coordinate pairs.
-
-Finally, we'll use the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to request elevation data at the same three equally spaced samples.
-
-Latitudes and longitudes in the URL are expected to be in WGS84 (World Geodetic System) decimal degree.
-
- >[!IMPORTANT]
 - >The URL character length limit is 2048, so it's not possible to pass more than 100 coordinates as a pipeline-delimited string in a URL GET request. If you intend to pass more than 100 coordinates as a pipeline-delimited string, use the Post Data For Polyline API.
-
-To create the request:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name**.
-
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```http
- https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&lines=-73.998672,40.714728|150.644,-34.397&samples=5
- ```
-
-5. Select the **Send** button. You'll receive the following JSON response:
-
- ```JSON
- {
- "data": [
- {
- "coordinate": {
- "latitude": 40.714728,
- "longitude": -73.998672
- },
- "elevationInMeter": 12.14236
- },
- {
- "coordinate": {
- "latitude": 21.936796000000001,
- "longitude": -17.838003999999998
- },
- "elevationInMeter": 0.0
- },
- {
- "coordinate": {
- "latitude": 3.1588640000000012,
- "longitude": 38.322664000000003
- },
- "elevationInMeter": 598.66943
- },
- {
- "coordinate": {
- "latitude": -15.619067999999999,
- "longitude": 94.483332000000019
- },
- "elevationInMeter": 0.0
- },
- {
- "coordinate": {
- "latitude": -34.397,
- "longitude": 150.644
- },
- "elevationInMeter": 384.47041
- }
- ]
- }
- ```
-
-6. Now, we'll request three samples of elevation data along a path between coordinates at Mount Everest, Chamlang, and Jannu mountains. In the **Params** field, enter the following coordinate array for the value of the `lines` query key.
-
- ```html
- 86.9797222, 27.775|86.9252778, 27.9880556 | 88.0444444, 27.6822222
- ```
-
-7. Change the `samples` query key value to `3`. The image below shows the new values.
-
- :::image type="content" source="./media/how-to-request-elevation-data/get-elevation-samples.png" alt-text="Retrieve three elevation data samples.":::
-
-8. Select **Send**. You'll receive the following JSON response:
-
- ```json
- {
- "data": [
- {
- "coordinate": {
- "latitude": 27.775,
- "longitude": 86.9797222
- },
- "elevationInMeter": 7116.0348851572589
- },
- {
- "coordinate": {
- "latitude": 27.737403546316028,
- "longitude": 87.411180791156454
- },
- "elevationInMeter": 1798.6945512521534
- },
- {
- "coordinate": {
- "latitude": 27.682222199999998,
- "longitude": 88.0444444
- },
- "elevationInMeter": 7016.9372013588072
- }
- ]
- }
- ```
-
-9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```http
 - https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&samples=3
- ```
-
-10. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
-
-11. In the **Body** field, provide the following coordinate point information.
-
- ```json
- [
- {
- "lon": 86.9797222,
- "lat": 27.775
- },
- {
- "lon": 86.9252778,
- "lat": 27.9880556
- },
- {
- "lon": 88.0444444,
- "lat": 27.6822222
- }
- ]
- ```
-
-12. Select **Send**.
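The equally spaced samples in the polyline responses above appear to follow simple linear interpolation between the endpoints in decimal degrees. A sketch under that assumption; the `t = 0.25` sample reproduces the second coordinate in the five-sample response shown earlier:

```python
def sample_line(start: tuple[float, float], end: tuple[float, float],
                samples: int) -> list[tuple[float, float]]:
    """Linearly interpolate (lon, lat) points along a segment, endpoints included."""
    lon0, lat0 = start
    lon1, lat1 = end
    step = 1.0 / (samples - 1)
    return [(lon0 + (lon1 - lon0) * i * step, lat0 + (lat1 - lat0) * i * step)
            for i in range(samples)]

pts = sample_line((-73.998672, 40.714728), (150.644, -34.397), samples=5)
# Second sample matches the response above: lon -17.838004, lat 21.936796
print(round(pts[1][0], 6), round(pts[1][1], 6))
```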
-
-### Request elevation data by Bounding Box
-
-Now we'll use the [Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox) to request elevation data near Mt. Rainier in Washington state. The elevation data will be returned at equally spaced locations within a bounding box. The bounding area is defined by two sets of latitude/longitude coordinates (south latitude, west longitude | north latitude, east longitude) and is divided into rows and columns. The edges of the bounding box account for two of the rows and two of the columns. Elevations are returned for the grid vertices created at row and column intersections. Up to 2000 elevations can be returned in a single request.
-
-In this example, we'll specify rows=3 and columns=6. The response returns 18 elevation values. In the following diagram, the elevation values are ordered starting with the southwest corner, and then continue west to east and south to north. The elevation points are numbered in the order that they're returned.
--
-To create the request:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name**.
-
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```http
- https://atlas.microsoft.com/elevation/lattice/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&bounds=-121.66853362143818, 46.84646479863713,-121.65853362143818, 46.85646479863713&rows=2&columns=3
- ```
-
-5. Select **Send**. The response returns 24 elevation data samples, one for each vertex of the grid.
-
- ```json
- {
- "data": [
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.66853362143819
- },
- "elevationInMeter": 2298.6581875651746
- },
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.66653362143819
- },
- "elevationInMeter": 2306.3980756609963
- },
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.66453362143818
- },
- "elevationInMeter": 2279.3385479564113
- },
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.66253362143819
- },
- "elevationInMeter": 2233.1549264690366
- },
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.66053362143818
- },
- "elevationInMeter": 2196.4485923541492
- },
- {
- "coordinate": {
- "latitude": 46.846464798637129,
- "longitude": -121.65853362143818
- },
- "elevationInMeter": 2133.1756767157253
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.66853362143819
- },
- "elevationInMeter": 2345.3227848228803
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.66653362143819
- },
- "elevationInMeter": 2292.2449195443587
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.66453362143818
- },
- "elevationInMeter": 2270.5867788258074
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.66253362143819
- },
- "elevationInMeter": 2296.8311427390604
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.66053362143818
- },
- "elevationInMeter": 2266.0729430891065
- },
- {
- "coordinate": {
- "latitude": 46.849798131970459,
- "longitude": -121.65853362143818
- },
- "elevationInMeter": 2242.216346631234
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.66853362143819
- },
- "elevationInMeter": 2378.460838833359
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.66653362143819
- },
- "elevationInMeter": 2327.6761137260387
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.66453362143818
- },
- "elevationInMeter": 2208.3782743402949
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.66253362143819
- },
- "elevationInMeter": 2106.9526472760981
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.66053362143818
- },
- "elevationInMeter": 2054.3270174034078
- },
- {
- "coordinate": {
- "latitude": 46.8531314653038,
- "longitude": -121.65853362143818
- },
- "elevationInMeter": 2030.6438331110671
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.66853362143819
- },
- "elevationInMeter": 2318.753153399402
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.66653362143819
- },
- "elevationInMeter": 2253.88875188271
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.66453362143818
- },
- "elevationInMeter": 2136.6145845357587
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.66253362143819
- },
- "elevationInMeter": 2073.6734467948486
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.66053362143818
- },
- "elevationInMeter": 2042.994055784251
- },
- {
- "coordinate": {
- "latitude": 46.856464798637127,
- "longitude": -121.65853362143818
- },
- "elevationInMeter": 1988.3631481900356
- }
- ]
- }
- ```
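The vertex ordering described above (start at the southwest corner, run west to east, then advance row by row south to north) can be sketched as a grid generator; a 4 x 6 grid over the bounds in the request reproduces the latitude/longitude lattice in the response above:

```python
def grid_points(south: float, west: float, north: float, east: float,
                rows: int, cols: int) -> list[tuple[float, float]]:
    """Generate (lat, lon) grid vertices starting at the southwest corner,
    ordered west to east within each row, then south to north by row."""
    dlat = (north - south) / (rows - 1)
    dlon = (east - west) / (cols - 1)
    return [(south + r * dlat, west + c * dlon)
            for r in range(rows) for c in range(cols)]

pts = grid_points(46.84646479863713, -121.66853362143818,
                  46.85646479863713, -121.65853362143818, rows=4, cols=6)
print(pts[0])  # southwest corner is returned first
```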
-
-## Samples: Use Elevation service APIs in Azure Maps Control
-
-### Get elevation data by coordinate position
-
-The following sample webpage describes how to use the map control to display elevation data at a coordinate point. When the user drags the marker, the map displays the elevation data in a pop-up window.
-
-<br/>
-
-<iframe height="500" scrolling="no" title="Get elevation at position" src="https://codepen.io/azuremaps/embed/c840b510e113ba7cb32809591d5f96a2?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/c840b510e113ba7cb32809591d5f96a2'>Get elevation at position</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
-### Get elevation data by bounding box
-
-The following sample webpage describes how to use the map control to display elevation data contained within a bounding box. The user defines the bounding box by selecting the `square` icon in the upper-left corner, and then drawing the square anywhere on the map. The map control then renders the elevation data in accordance with the colors that are specified in the key that's located in the upper-right corner.
-
-<br/>
-
-<iframe height="500" scrolling="no" title="Elevations by bounding box" src="https://codepen.io/azuremaps/embed/619c888c70089c3350a3e95d499f3e48?height=500&theme-id=default&default-tab=js,result" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/619c888c70089c3350a3e95d499f3e48'>Elevations by bounding box</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
-### Get elevation data by Polyline path
-
-The following sample webpage describes how to use the map control to display elevation data along a path. The user defines the path by selecting the `Polyline` icon in the upper-left corner, and then drawing the Polyline on the map. The map control then renders the elevation data in colors that are specified in the key located in the upper-right corner.
-
-<br/>
-
-<iframe height="500" scrolling="no" title="Elevation path gradient" src="https://codepen.io/azuremaps/embed/7bee08e5cb13d05cb0a11636b60f14ca?height=500&theme-id=default&default-tab=js,result&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
- See the Pen <a href='https://codepen.io/azuremaps/pen/7bee08e5cb13d05cb0a11636b60f14ca'>Elevation path gradient</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
--
-## Next steps
-
-To further explore the Azure Maps Elevation APIs, see:
-
-> [!div class="nextstepaction"]
-> [Elevation - Get Data for Lat Long Coordinates](/rest/api/maps/elevation/getdataforpoints)
-
-> [!div class="nextstepaction"]
-> [Elevation - Get Data for Bounding Box](/rest/api/maps/elevation/getdataforboundingbox)
-
-> [!div class="nextstepaction"]
-> [Elevation - Get Data for Polyline](/rest/api/maps/elevation/getdataforpolyline)
-
-> [!div class="nextstepaction"]
-> [Render V2 – Get Map Tile](/rest/api/maps/renderv2)
-
-For a complete list of Azure Maps REST APIs, see:
-
-> [!div class="nextstepaction"]
-> [Azure Maps REST APIs](/rest/api/maps/)
-
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Power Bi Visual Add Bar Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bar-chart-layer.md
Title: Add a bar chart layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the bar chart layer in an Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Title: Add a bubble layer to an Azure Maps Power BI visual description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual.--++ Last updated 11/14/2022
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
Title: Add a heat map layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the heat map layer in an Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
Title: Add a pie chart layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the pie chart layer in an Azure Maps Power BI visual.--++ Last updated 03/15/2022
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
Title: Add a reference layer to Azure Maps Power BI visual description: In this article, you will learn how to use the reference layer in Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md
Title: Add a tile layer to an Azure Maps Power BI visual description: In this article, you will learn how to use the tile layer in Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Filled Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-filled-map.md
Title: Filled map in Azure Maps Power BI Visual description: In this article, you'll learn about the Filled map feature in Azure Maps Power BI Visual.--++ Last updated 04/11/2022
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
Title: Geocoding in Azure Maps Power BI visual description: In this article, you'll learn about geocoding in Azure Maps Power BI visual.--++ Last updated 03/16/2022
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Title: Get started with Azure Maps Power BI visual description: In this article, you'll learn how to use Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md
Title: Manage Azure Maps Power BI visual within your organization description: In this article, you will learn how to manage Azure Maps Power BI visual within your organization.--++ Last updated 11/29/2021
azure-maps Power Bi Visual On Object Interaction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-on-object-interaction.md
Title: Contextual on-object interaction with Azure Maps Power BI visuals (preview) description: How to format elements by selecting the element directly on the map to bring up the available formatting options.--++ Last updated 03/13/2023
azure-maps Power Bi Visual Show Real Time Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-show-real-time-traffic.md
Title: Show real-time traffic on an Azure Maps Power BI visual description: In this article, you will learn how to show real-time traffic on an Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
Title: Layers in an Azure Maps Power BI visual description: In this article, you will learn about the different layers available in an Azure Maps Power BI visual.--++ Last updated 11/29/2021
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Oracle Linux 6 | | X | | | Oracle Linux 6.4+ | | X | X | | Red Hat Enterprise Linux Server 9+ | X | | |
-| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | X | |
-| Red Hat Enterprise Linux Server 8+ | X | X | |
+| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | X<sup>2</sup> | X<sup>2</sup> |
+| Red Hat Enterprise Linux Server 8+ | X | X<sup>2</sup> | X<sup>2</sup> |
| Red Hat Enterprise Linux Server 7 | X | X | X | | Red Hat Enterprise Linux Server 6.7+ | | X | X | | Red Hat Enterprise Linux Server 6 | | X | |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
| project TimeGenerated, Computer, Category, EventID, sourceHealthServiceId, ParameterXml, EventData ```
-1. Use [built-in policies](../agents/azure-monitor-agent-manage.md#built-in-policies) to deploy extensions and DCR associations at scale small-scale testing. Using policy also ensures automatic deployment of extensions and DCR associations for new machines.<sup>3</sup>
+1. Use [built-in policies](../agents/azure-monitor-agent-manage.md#built-in-policies) to deploy extensions and DCR associations at scale. Using policy also ensures automatic deployment of extensions and DCR associations for new machines.<sup>3</sup>
Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
An action group is a *global* service, so there's no dependency on a specific Az
> > When you configure an action to notify a person by email or SMS, they receive a confirmation that indicates they were added to the action group.
-### Test an action group in the Azure portal (preview)
+### Test an action group in the Azure portal
When you create or update an action group in the Azure portal, you can test the action group.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Previously updated : 03/05/2023 Last updated : 05/11/2023
# Create a new alert rule
Alerts triggered by these alert rules contain a payload that uses the [common al
To extract values from the common schema, use a `$` followed by the path of the common schema field inside curly brackets. For example: `${data.essentials.monitorCondition}`.
- For example, you could use these values in the **custom properties** to utilize data from the payload.
+ The following examples use values in the **custom properties** to include data from the payload:
- |Custom properties name |Custom properties value |Result |
- ||||
- |AdditionalDetails|Evaluation windowStartTime: ${data.alertContext.condition.windowStartTime}. windowEndTime: ${data.alertContext.condition.windowEndTime}|AdditionalDetails": "Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z" |
- |Alert ${data.essentials.monitorCondition} reason |ΓÇ£${data.alertContext.condition.allOf[0].metricName} ${data.alertContext.condition.allOf[0].operator}${data.alertContext.condition.allOf[0].threshold} ${data.essentials.monitorCondition}. The value is ${data.alertContext.condition.allOf[0].metricValue}" |Examples of the results could be: <br> - Alert Resolved reason": "Percentage CPU GreaterThan5 Resolved. The value is 3.585 <br>Percentage CPU GreaterThan5 Fired. The value is 10.585 |
+ **Example 1**
+ - **Name:** "Additional Details"
+ - **Value:** "Evaluation windowStartTime: \${data.alertContext.condition.windowStartTime}. windowEndTime: \${data.alertContext.condition.windowEndTime}"
+ - **Result:** "AdditionalDetails: Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z"
++
+ **Example 2**
+ - **Name:** "Alert \${data.essentials.monitorCondition} reason"
+ - **Value:** "\${data.alertContext.condition.allOf[0].metricName} \${data.alertContext.condition.allOf[0].operator} \${data.alertContext.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.alertContext.condition.allOf[0].metricValue}"
+ - **Result:** Example results could be something like:
+ - "Alert Resolved reason: Percentage CPU GreaterThan5 Resolved. The value is 3.585"
+ - "Alert Fired reason: Percentage CPU GreaterThan5 Fired. The value is 10.585"
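
As a sketch of how such `${...}` tokens resolve against a payload, here is a minimal dot-path substitution over a trimmed, hypothetical common-schema fragment (this sketch deliberately omits indexers such as `allOf[0]`):

```python
import re

# Trimmed, hypothetical fragment of a common-alert-schema payload.
payload = {
    "data": {
        "essentials": {"monitorCondition": "Fired"},
        "alertContext": {"condition": {"windowStartTime": "2023-04-04T14:39:24.492Z"}},
    }
}

def resolve(template: str, doc: dict) -> str:
    """Replace each ${a.b.c} token with the value found at that dot path."""
    def lookup(match: re.Match) -> str:
        value = doc
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

print(resolve("Alert ${data.essentials.monitorCondition} at "
              "${data.alertContext.condition.windowStartTime}", payload))
# Alert Fired at 2023-04-04T14:39:24.492Z
```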
1. On the **Details** tab, define the **Project details**.
   - Select the **Subscription**.
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
description: This article shows you how to connect your ITSM products or service
Last updated 03/30/2022 ++
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
If your subscription includes several web apps, run the following script to enab
### Pre-requisites
-PowerShell Az Module. Follow instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+PowerShell Az Module. Follow instructions at [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
### Run the following script:
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
When you use programmatic methods to create DCRs and associations, you require t
| Built-in role | Scopes | Reason | |:|:|:|
-| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs. |
-| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy associations (for example, to assign rules to the machine). |
+| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs, assign rules to the machine, and deploy associations. |
+| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy agent extensions on the VM. |
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Deploy Azure Resource Manager templates. |

## Limits
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
For resource types that don't have a built-in policy, you need to create a custo
The script [Create-AzDiagPolicy](https://www.powershellgallery.com/packages/Create-AzDiagPolicy) creates policy files for a particular resource type that you can install by using PowerShell or the Azure CLI. Use the following procedure to create a custom policy definition for diagnostic settings:
-1. Ensure that you have [Azure PowerShell](/powershell/azure/install-az-ps) installed.
+1. Ensure that you have [Azure PowerShell](/powershell/azure/install-azure-powershell) installed.
2. Install the script by using the following command: ```azurepowershell
azure-monitor Prometheus Remote Write Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-azure-ad-pod-identity.md
+
+ Title: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)
+description: Configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity (preview)
+++ Last updated : 05/11/2023+++
+# Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)
++
+> [!NOTE]
+> The remote write sidecar should be configured via the following steps only if the AKS cluster already has Azure AD pod identity enabled. This approach isn't recommended because Azure AD pod identity has been deprecated in favor of [Azure Workload Identity](https://learn.microsoft.com/azure/active-directory/workload-identities/workload-identities-overview).
++
+To configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity, follow the steps below.
+
+1. Create user assigned identity or use an existing user assigned managed identity. For information on creating the managed identity, see [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md#get-the-client-id-of-the-user-assigned-identity).
+1. Assign the `Managed Identity Operator` and `Virtual Machine Contributor` roles to the managed identity created/used in the previous step.
+
+ ```azurecli
+ az role assignment create --role "Managed Identity Operator" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+
+ az role assignment create --role "Virtual Machine Contributor" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+ ```
+
+ The node resource group of the AKS cluster contains resources that you'll need in other steps of this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can locate it from the **Resource groups** menu in the Azure portal.
+
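
As a sketch of that naming convention (all names below are hypothetical):

```python
# Builds the node resource group name per the MC_<rg>_<cluster>_<region>
# convention quoted above. Inputs are hypothetical examples.
def node_resource_group(aks_resource_group: str, aks_cluster_name: str, region: str) -> str:
    return f"MC_{aks_resource_group}_{aks_cluster_name}_{region}"

print(node_resource_group("myResourceGroup", "myAKSCluster", "eastus"))
# MC_myResourceGroup_myAKSCluster_eastus
```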
+1. Grant the user-assigned managed identity the `Monitoring Metrics Publisher` role.
+
+ ```azurecli
+ az role assignment create --role "Monitoring Metrics Publisher" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+ ```
+
+1. Create an AzureIdentityBinding
+
+ The user-assigned managed identity requires an identity binding in order to be used as a pod identity.
+
+ Copy the following YAML to the `aadpodidentitybinding.yaml` file.
+
+    ```yml
+    apiVersion: "aadpodidentity.k8s.io/v1"
+    kind: AzureIdentityBinding
+    metadata:
+      name: demo1-azure-identity-binding
+    spec:
+      azureIdentity: "<AzureIdentityName>"
+      selector: "<AzureIdentityBindingSelector>"
+    ```
+
+ Run the following command:
+
+ ```azurecli
+ kubectl create -f aadpodidentitybinding.yaml
+ ```
+
+1. Add an `aadpodidbinding` label to the Prometheus pod.
+    The `aadpodidbinding` label must be added to the Prometheus pod for the pod identity to take effect. You can add it by updating the `deployment.yaml` or by injecting labels while deploying the sidecar, as described in the next step.
+
+1. Deploy the sidecar and configure remote write on the Prometheus server.
+
+    a. Copy the following YAML and save it to a file.
+
+ ```yml
+ prometheus:
+ prometheusSpec:
+ podMetadata:
+ labels:
+ aadpodidbinding: <AzureIdentityBindingSelector>
+ externalLabels:
+ cluster: <AKS-CLUSTER-NAME>
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: <INGESTION_URL>
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AZURE_CLIENT_ID
+ value: <MANAGED-IDENTITY-CLIENT-ID>
+ # Optional parameter
+ - name: CLUSTER
+ value: <CLUSTER-NAME>
+ ```
+
+    b. Use Helm to apply the YAML file and update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+ # use helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Configure a table for Basic logs if:
| Service | Table | |:|:|
- | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
+ | Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates) |
| Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
| Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) |
+ | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |
| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
| Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+ | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) |
| Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |
azure-monitor Query Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md
An audit record is created each time a query is run. If you send the data to a L
| AADTenantId | ID of the tenant of the user account that started the query. |
| AADEmail | Email of the tenant of the user account that started the query. |
| AADClientId | ID and resolved name of the application used to start the query. |
-| RequestClientApp | Resolved name of the application used to start the query. |
+| RequestClientApp | Resolved name of the application used to start the query. For more information, see [Request client app](#request-client-app). |
| QueryTimeRangeStart | Start of the time range selected for the query. This may not be populated in certain scenarios such as when the query is started from Log Analytics, and time range is specified inside the query rather than the time picker. |
| QueryTimeRangeEnd | End of the time range selected for the query. This may not be populated in certain scenarios such as when the query is started from Log Analytics, and time range is specified inside the query rather than the time picker. |
| QueryText | Text of the query that was run. |
An audit record is created each time a query is run. If you send the data to a L
| StatsWorkspaceCount | Number of workspaces accessed by the query. Only populated if query returns with status code 200. |
| StatsRegionCount | Number of regions accessed by the query. Only populated if query returns with status code 200. |
+### Request Client App
+| RequestClientApp | Description |
+|:|:|
+|AAPBI|[Log Analytics integration with Power BI](../logs/log-powerbi.md).|
+|AppAnalytics|Experiences of Log Analytics in the Azure portal.|
+|AppInsightsPortalExtension|[Workbooks](../visualize/workbooks-data-sources.md#logs) or [Application insights](../app/app-insights-overview.md).|
+|ASC_Portal|Microsoft Defender for Cloud.|
+|ASI_Portal|Sentinel.|
+|AzureAutomation|[Azure Automation.](../../automation/overview.md)|
+|AzureMonitorLogsConnector|[Azure Monitor Logs Connector](../../connectors/connectors-azure-monitor-logs.md).|
+|csharpsdk|[Log Analytics Query API.](../logs/api/overview.md)|
+|Draft-Monitor|[Log alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=metric#create-a-new-alert-rule-in-the-azure-portal)|
+|Grafana|[Grafana connector.](../visualize/grafana-plugin.md)|
+|IbizaExtension|Experiences of Log Analytics in the Azure portal.|
+|infraInsights/container|[Container insights.](../containers/container-insights-overview.md)|
+|infraInsights/vm|[VM insights.](../vm/vminsights-overview.md)|
+|LogAnalyticsExtension|[Azure Dashboard](../../azure-portal/azure-portal-dashboards.md).|
+|LogAnalyticsPSClient|[Log Analytics Query API.](../logs/api/overview.md)|
+|OmsAnalyticsPBI|Log Analytics integration with [Power BI.](../logs/log-powerbi.md)|
+|PowerBIConnector|Log Analytics integration with [Power BI.](../logs/log-powerbi.md)|
+|Sentinel-Investigation-Queries|Sentinel.|
+|Sentinel-DataCollectionAggregator|Sentinel.|
+|Sentinel-analyticsManagement-customerQuery|Sentinel.|
+|Unknown|[Log Analytics Query API.](../logs/api/overview.md)|
+|UpdateManagement|[Update Management.](../../automation/update-management/overview.md)|
++ ## Considerations -- Queries are only logged when executed in a user context. No Service-to-Service within Azure will be logged. The two primary sets of queries this exclusion encompasses are billing calculations and automated alert executions. In the case of alerts, only the scheduled alert query itself will not be logged; the initial execution of the alert in the alert creation screen is executed in a user context, and will be available for audit purposes. -- Performance statistics are not available for queries coming from the Azure Data Explorer proxy. All other data for these queries will still be populated.-- The *h* hint on strings that [obfuscates string literals](/azure/data-explorer/kusto/query/scalar-data-types/string#obfuscated-string-literals) will not have an effect on the query audit logs. The queries will be captured exactly as submitted without the string being obfuscated. You should ensure that only users who have compliance rights to see this data are able to do so using the various Kubernetes RBAC or Azure RBAC modes available in Log Analytics workspaces.
+- Queries are only logged when executed in a user context. No Service-to-Service within Azure will be logged. The two primary sets of queries this exclusion encompasses are billing calculations and automated alert executions. In the case of alerts, only the scheduled alert query itself won't be logged; the initial execution of the alert in the alert creation screen is executed in a user context, and will be available for audit purposes.
+- Performance statistics aren't available for queries coming from the Azure Data Explorer proxy. All other data for these queries will still be populated.
+- The *h* hint on strings that [obfuscates string literals](/azure/data-explorer/kusto/query/scalar-data-types/string#obfuscated-string-literals) won't have an effect on the query audit logs. The queries will be captured exactly as submitted without the string being obfuscated. You should ensure that only users who have compliance rights to see this data are able to do so using the various Kubernetes RBAC or Azure RBAC modes available in Log Analytics workspaces.
- For queries that include data from multiple workspaces, the query will only be captured in those workspaces to which the user has access.

## Costs
-There is no cost for Azure Diagnostic Extension, but you may incur charges for the data ingested. Check [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for the destination where you're collecting data.
+There's no cost for Azure Diagnostic Extension, but you may incur charges for the data ingested. Check [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for the destination where you're collecting data.
## Next steps
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
To configure BYOS for code-level diagnostics (Profiler/Snapshot Debugger), there
1. Make sure you've installed Az PowerShell 4.2.0 or greater.
- To install Azure PowerShell, see the [Azure PowerShell documentation](/powershell/azure/install-az-ps).
+ To install Azure PowerShell, see the [Azure PowerShell documentation](/powershell/azure/install-azure-powershell).
1. Install the Application Insights PowerShell extension.
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
The easiest way to specify a dropdown parameter is by providing a static list in
1. Select **Update**. 1. Select **Save** to create the parameter.
-1. The **Environment** parameter will be a dropdown list with the three values.
+1. The **Environment** parameter is a dropdown list with the three values.
![Screenshot that shows the creation of a static dropdown parameter.](./media/workbooks-dropdowns/dropdown-create.png)

## Create a static dropdown list with groups of items
-If your query result/JSON contains a `group` field, the dropdown list will display groups of values. Follow the preceding sample, but use the following JSON instead:
+If your query result/JSON contains a `group` field, the dropdown list displays groups of values. Follow the preceding sample, but use the following JSON instead:
```json [
If your query result/JSON contains a `group` field, the dropdown list will displ
1. Select **Run Query**. 1. Select **Save** to create the parameter.
-1. The **RequestName** parameter will be a dropdown list with the names of all requests in the app.
+1. The **RequestName** parameter is a dropdown list with the names of all requests in the app.
![Screenshot that shows the creation of a dynamic dropdown parameter.](./media/workbooks-dropdowns/dropdown-dynamic.png)
The examples so far explicitly set the parameter to select only one value in the
You can specify the format of the result set via the **Delimiter** and **Quote with** settings. The default returns the values as a collection in the form of **a**, **b**, **c**. You can also limit the number of selections.
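
The **Delimiter** and **Quote with** formatting described above can be sketched locally (a hypothetical helper, not workbook code):

```python
# Joins selected values with a delimiter, optionally wrapping each value in a
# quote character, mirroring the Delimiter / Quote with settings described above.
def format_selection(values, delimiter=", ", quote=""):
    return delimiter.join(f"{quote}{v}{quote}" for v in values)

print(format_selection(["a", "b", "c"]))             # a, b, c
print(format_selection(["a", "b", "c"], quote="'"))  # 'a', 'b', 'c'
```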
-The KQL referencing the parameter will need to change to work with the format of the result. The most common way to enable it is via the `in` operator.
+The KQL referencing the parameter needs to change to work with the format of the result. The most common way to enable it is via the `in` operator.
```kusto dependencies
This example shows the multi-select dropdown parameter at work:
![Screenshot that shows a multi-select dropdown parameter.](./media/workbooks-dropdowns/dropdown-multiselect.png)
+## Dropdown special selections
+
+Dropdown parameters also let you specify special values that appear in the dropdown list:
+* Any one
+* Any three
+* ...
+* Any 100
+* Any custom limit
+* All
+
+When these special items are selected, the parameter value is automatically set to the specific number of items, or all values.
+
+### Special casing All
+
+When you select the **All** option, an extra field appears, which allows you to specify that a special value will be used for the parameter if the **All** option is selected. This special value is useful for cases where "All" could be a large number of items and could generate a very large query.
++
+In this specific case, the string `[]` is used instead of a value. This string can be used to generate an empty array in the logs query, like:
+
+```kusto
+let selection = dynamic([{Selection}]);
+SomeQuery
+| where array_length(selection) == 0 or SomeField in (selection)
+```
+
+If all items are selected, the value of `Selection` is `[]`, producing an empty array for the `selection` variable in the query. If no values are selected, the value of `Selection` will be an empty string, also resulting in an empty array. If any values are selected, they are formatted inside the dynamic part of the query, causing the array to have those values. You can then test for `array_length` of 0 to have the filter not apply or use the `in` operator to filter on the values in the array.
+
+Other common examples use '*' as the special marker value when a parameter is required, and then test with:
+
+```kusto
+| where "*" in ({Selection}) or SomeField in ({Selection})
+```
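
Both filter patterns can be simulated locally to check the logic; the rows, field name, and helper functions below are hypothetical:

```python
rows = [{"SomeField": "a"}, {"SomeField": "b"}, {"SomeField": "c"}]

def filter_empty_array_style(rows, selection):
    """Mimics `array_length(selection) == 0 or SomeField in (selection)`:
    an empty selection (the [] All marker, or no selection) disables the filter."""
    if len(selection) == 0:
        return rows
    return [r for r in rows if r["SomeField"] in selection]

def filter_star_style(rows, selection):
    """Mimics `"*" in ({Selection}) or SomeField in ({Selection})`:
    a '*' marker anywhere in the selection disables the filter."""
    if "*" in selection:
        return rows
    return [r for r in rows if r["SomeField"] in selection]

print(filter_empty_array_style(rows, []))   # all three rows pass through
print(filter_star_style(rows, ["b"]))       # only the 'b' row remains
```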
+
## Next steps
[Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
To enable VM insights for multiple VMs or virtual machine scale set, use the Pow
For each virtual machine or virtual machine scale set, the script verifies whether the VM extension for the Log Analytics agent and Dependency agent is already installed. If both extensions are installed, the script tries to reinstall them. If both extensions aren't installed, the script installs them.
-Verify that you're using Azure PowerShell module Az version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+Verify that you're using Azure PowerShell module Az version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
To get a list of the script's argument details and example usage, run `Get-Help`.
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
For registration steps using Portal, open a Cloud Shell session as indicated abo
# [PowerShell](#tab/azure-powershell)
-This how-to article requires the Azure PowerShell module Az version 2.6.0 or later. Run `Get-Module -ListAvailable Az` to find your current version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you prefer, you can use Cloud Shell console in a PowerShell session instead.
+This how-to article requires the Azure PowerShell module Az version 2.6.0 or later. Run `Get-Module -ListAvailable Az` to find your current version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you prefer, you can use Cloud Shell console in a PowerShell session instead.
1. In a PowerShell command prompt (or PowerShell Cloud Shell session), specify the subscription that has been approved for Azure NetApp Files: ```powershell-interactive
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured with Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page.
* Customer-managed keys private endpoints don't support the **Disable public access** option. You must choose one of the **Allow public access** options.
-* MSI Automatic certificate renewal isn't currently supported.
+* MSI automatic certificate renewal isn't currently supported. It's recommended to set up an Azure Monitor alert to notify you before the MSI certificate expires.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
* To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
* Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
The following diagram demonstrates how customer-managed keys work with Azure Net
* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled.
* If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
+* If Azure Key Vault becomes inaccessible, Azure NetApp Files loses its access to the encryption keys and the ability to read or write data to volumes enabled with customer-managed keys. In this situation, create a support ticket to have access manually restored for the affected volumes.
* Currently, customer-managed keys can't be configured while creating data replication volumes to establish an Azure NetApp Files cross-region replication or cross-zone replication relationship.

## Supported regions
azure-netapp-files Performance Oracle Multiple Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md
Migrating highly performant Exadata grade databases to the cloud is increasingly
## Enterprise scale Oracle performance
-When exploring the upper limits of performance, it's important to recognize and reduce any constraints that could falsely skew results. For example, if the intent is to prove performance capabilities of a storage system, the client should ideally be configured so that CPU does not become a mitigating factor before storage performance limits are reached. To that end, testing started with the E104ids_v5 instance type as this VM comes equipped not just with a 100 GBps network interface, but with an equally large (100 GBps) egress limit.
+When exploring the upper limits of performance, it's important to recognize and reduce any constraints that could falsely skew results. For example, if the intent is to prove performance capabilities of a storage system, the client should ideally be configured so that CPU does not become a limiting factor before storage performance limits are reached. To that end, testing started with the E104ids_v5 instance type as this VM comes equipped not just with a 100 Gbps network interface, but with an equally large (100 Gbps) egress limit.
The testing occurred in two phases:
This section details the criteria to be considered in selecting [VMs](../virtual
#### Chipsets
-The first topic of interest is chipset selection. Make sure that whatever VM SKU you select is built on a single chipset for consistency reasons. The Intel variant of E_v5 VMs runs on a third Generation Intel Xeon Platinum 8370C (Ice Lake) configuration. All VMs in this family come equipped with a single 100 GBps network interface. In contrast, the E_v3 series, mentioned by way of example, is built on four separate chipsets, with various physical network bandwidths. The four chipsets used in the E_v3 family (Broadwell, Skylake, Cascade Lake, Haswell) have different processor speeds, which affect the performance characteristics of the machine.
+The first topic of interest is chipset selection. Make sure that whatever VM SKU you select is built on a single chipset for consistency reasons. The Intel variant of E_v5 VMs runs on a third Generation Intel Xeon Platinum 8370C (Ice Lake) configuration. All VMs in this family come equipped with a single 100 Gbps network interface. In contrast, the E_v3 series, mentioned by way of example, is built on four separate chipsets, with various physical network bandwidths. The four chipsets used in the E_v3 family (Broadwell, Skylake, Cascade Lake, Haswell) have different processor speeds, which affect the performance characteristics of the machine.
Read the [Azure Compute documentation](/azure/architecture/guide/technology-choices/compute-decision-tree) carefully paying attention to chipset options. Also refer to [Azure VM SKUs best practices for Azure NetApp Files](performance-virtual-machine-sku.md). Selecting a VM with a single chipset is preferable for best consistency.
It's important to understand the difference between the available bandwidth of the VM network interface and the metered bandwidth applied against the same. When [Azure Compute documentation](../virtual-network/virtual-machine-network-throughput.md) speaks to network bandwidth limits, these limits are applied on egress (write) only. Ingress (read) traffic is not metered and as such is limited only by the physical bandwidth of the NIC itself. The network bandwidth of most VMs outpaces the egress limit applied against the machine.
-As Azure NetApp Files volumes are network attached, the egress limit can be understood as being applied against writes specifically whereas ingress is defined as reads and read-like workloads. While the egress limit of most machines is greater than the network bandwidth of the NIC, the same cannot be said for the E104_v5 used in testing for this article. The E104_v5 has a 100 GBps NIC with the egress limit set at 100 GBps as well. By comparison, the E96_v5, with its 100 GBps NIC has an egress limit of 35 GBps with ingress unfettered at 100 GBps. As VMs decrease in size, egress limits decrease but ingress remains unfettered by logically imposed limits.
+As Azure NetApp Files volumes are network attached, the egress limit can be understood as being applied against writes specifically, whereas ingress is defined as reads and read-like workloads. While the NIC bandwidth of most machines is greater than the egress limit applied against them, the same cannot be said for the E104_v5 used in testing for this article. The E104_v5 has a 100 Gbps NIC with the egress limit set at 100 Gbps as well. By comparison, the E96_v5, with its 100 Gbps NIC, has an egress limit of 35 Gbps with ingress unfettered at 100 Gbps. As VMs decrease in size, egress limits decrease but ingress remains unfettered by logically imposed limits.
Egress limits are VM-wide and are applied as such against all network-based workloads. When using Oracle Data Guard, all writes are doubled to archive logs and must be factored into egress limit considerations. The same is true for archive log multi-destination and RMAN, if used. When selecting VMs, familiarize yourself with command-line tools such as `ethtool`, which expose the configuration of the NIC, as Azure does not document network interface configurations.
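The egress arithmetic described above can be sketched with a back-of-the-envelope calculation. The per-SKU egress figures come from the text; the 2x write amplification for Data Guard archive-log traffic is an illustrative assumption, not a measured value:

```python
# Back-of-the-envelope estimate of usable write bandwidth under Azure VM
# egress limits, using the figures quoted in the text. Egress is metered;
# ingress (reads) is bounded only by the physical NIC.

EGRESS_LIMIT_GBPS = {
    "E104ids_v5": 100,  # 100 Gbps NIC, 100 Gbps egress limit
    "E96_v5": 35,       # 100 Gbps NIC, 35 Gbps egress limit
}

def usable_write_gbps(sku: str, write_amplification: float = 1.0) -> float:
    """Egress limit divided by write amplification (e.g. 2.0 when Data
    Guard doubles every write to archive logs)."""
    return EGRESS_LIMIT_GBPS[sku] / write_amplification

print(usable_write_gbps("E104ids_v5", 2.0))  # 50.0
print(usable_write_gbps("E96_v5", 2.0))      # 17.5
```

On this rough model, an E96_v5 running Data Guard would top out near 17.5 Gbps of application writes before reaching its 35 Gbps egress cap, while reads remain bounded only by the 100 Gbps NIC.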
azure-portal Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-bicep.md
A dashboard in the Azure portal is a focused and organized view of your cloud re
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Review the Bicep file
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
- A virtual machine. The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps. 1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page.
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md
A dashboard in the Azure portal is a focused and organized view of your cloud re
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)]
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
You need a Bicep file to deploy. The file must be local.
You need Azure PowerShell and to be connected to Azure: -- **Install Azure PowerShell cmdlets on your local computer.** To deploy Bicep files, you need [Azure PowerShell](/powershell/azure/install-az-ps) version **5.6.0 or later**. For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
+- **Install Azure PowerShell cmdlets on your local computer.** To deploy Bicep files, you need [Azure PowerShell](/powershell/azure/install-azure-powershell) version **5.6.0 or later**. For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
- **Install Bicep CLI.** Azure PowerShell doesn't automatically install the Bicep CLI. Instead, you must [manually install the Bicep CLI](install.md#install-manually). - **Connect to Azure by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
To install the module, use:
Install-Module -Name Az -Force ```
-For more information about installing modules, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+For more information about installing modules, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Install Azure CLI module
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Previously updated : 05/03/2023 Last updated : 05/11/2023
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
storageAccountName: 'myStorageAccount' storageAccountKey: 'myKey' }
- azPowerShellVersion: '8.3' // or azCliVersion: '2.40.0'
+ azPowerShellVersion: '9.7' // or azCliVersion: '2.47.0'
arguments: '-name \\"John Dole\\"' environmentVariables: [ {
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState : Succeeded Identity : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami ScriptKind : AzurePowerShell
-AzPowerShellVersion : 8.3
-StartTime : 6/18/2022 7:46:45 PM
-EndTime : 6/18/2022 7:49:45 PM
-ExpirationDate : 6/19/2022 7:49:45 PM
+AzPowerShellVersion : 9.7
+StartTime : 5/11/2023 7:46:45 PM
+EndTime : 5/11/2023 7:49:45 PM
+ExpirationDate : 5/12/2023 7:49:45 PM
CleanupPreference : OnSuccess StorageAccountId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
The list command output is similar to:
[ { "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "8.3",
+ "azPowerShellVersion": "9.7",
"cleanupPreference": "OnSuccess", "containerSettings": { "containerGroupName": null
The list command output is similar to:
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2022-06-25T03:00:16.796923+00:00",
+ "endTime": "2023-05-11T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2022-06-26T03:00:16.796923+00:00",
- "startTime": "2022-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2023-05-12T03:00:16.796923+00:00",
+ "startTime": "2023-05-11T02:59:07.595140+00:00",
"storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts" }, "storageAccountSettings": null, "supportingScriptUris": null, "systemData": {
- "createdAt": "2022-06-25T02:59:04.750195+00:00",
+ "createdAt": "2023-05-11T02:59:04.750195+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2023-05-11T02:59:04.750195+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" },
The output is similar to:
"systemData": { "createdBy": "someone@contoso.com", "createdByType": "User",
- "createdAt": "2022-06-25T02:59:04.7501955Z",
+ "createdAt": "2023-05-11T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2023-05-11T02:59:04.7501955Z"
}, "properties": { "provisioningState": "Succeeded", "forceUpdateTag": "20220625T025902Z",
- "azPowerShellVersion": "8.3",
+ "azPowerShellVersion": "9.7",
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "arguments": "-name \\\"John Dole\\\"", "retentionInterval": "P1D",
The output is similar to:
"status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts", "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2022-06-25T02:59:07.5951401Z",
- "endTime": "2022-06-25T03:00:16.7969234Z",
- "expirationTime": "2022-06-26T03:00:16.7969234Z"
+ "startTime": "2023-05-11T02:59:07.5951401Z",
+ "endTime": "2023-05-11T03:00:16.7969234Z",
+ "expirationTime": "2023-05-12T03:00:16.7969234Z"
}, "outputs": { "text": "Hello John Dole"
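A side note on the timestamps in the sample output above: `expirationTime` is `endTime` plus the `retentionInterval` (`P1D`, one day). A minimal sketch of that relationship, handling only the whole-day `PnD` durations shown here (a full ISO 8601 duration parser would be needed in general):

```python
from datetime import datetime, timedelta

def expiration(end_time_iso: str, retention: str = "P1D") -> str:
    """Add a whole-day ISO 8601 duration (PnD) to an ISO timestamp."""
    assert retention.startswith("P") and retention.endswith("D")
    days = int(retention[1:-1])
    end = datetime.fromisoformat(end_time_iso)
    return (end + timedelta(days=days)).isoformat()

print(expiration("2023-05-11T03:00:16"))  # 2023-05-12T03:00:16
```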
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
You're done with setting up your Bicep environment. The rest of this article des
## Azure PowerShell
-You must have Azure PowerShell version **5.6.0 or later** installed. To update or install, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+You must have Azure PowerShell version **5.6.0 or later** installed. To update or install, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
Azure PowerShell doesn't automatically install the Bicep CLI. Instead, you must [manually install the Bicep CLI](#install-manually).
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-template-specs.md
When you create a template spec, the Bicep file is transpiled into JavaScript Ob
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure PowerShell [version 6.3.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.27.0 or later](/cli/azure/install-azure-cli).
+- Azure PowerShell [version 6.3.0 or later](/powershell/azure/install-azure-powershell) or Azure CLI [version 2.27.0 or later](/cli/azure/install-azure-cli).
- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). ## Create Bicep file
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Learn how to publish Bicep modules to private modules registry, and how to call
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008** or later. To use with [Azure CLI](/cli/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-az-ps), you must also have Azure PowerShell version **7.0.0** or later.
+To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008** or later. To use with [Azure CLI](/cli/azure/install-azure-cli), you must also have Azure CLI version **2.31.0** or later; to use with [Azure PowerShell](/powershell/azure/install-azure-powershell), you must also have Azure PowerShell version **7.0.0** or later.
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). To create one, see [Quickstart: Create a container registry by using a Bicep file](../../container-registry/container-registry-get-started-bicep.md).
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
A template spec is a resource type for storing an Azure Resource Manager templat
To deploy the template spec, you use standard Azure tools like PowerShell, Azure CLI, Azure portal, REST, and other supported SDKs and clients. You use the same commands as you would for the template or the Bicep file. > [!NOTE]
-> To use template specs in Bicep with Azure PowerShell, you must install [version 6.3.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.27.0 or later](/cli/azure/install-azure-cli).
+> To use template specs in Bicep with Azure PowerShell, you must install [version 6.3.0 or later](/powershell/azure/install-azure-powershell). To use it with Azure CLI, use [version 2.27.0 or later](/cli/azure/install-azure-cli).
When designing your deployment, always consider the lifecycle of the resources and group the resources that share a similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Azure Cosmos DB, with each instance containing its own databases and containers. Given that the databases and the containers don't change much, you want to create one template spec to include a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your Bicep along with copy loops to create multiple instances of these resources.
azure-resource-manager Create Custom Provider Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider-quickstart-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps). If you choose to use Cloud Shell, see
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell). If you choose to use Cloud Shell, see
[Overview of Azure Cloud Shell](../../cloud-shell/overview.md) for more information.
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md
Azure CLI examples use `az rest` for `REST` requests. For more information, see
# [PowerShell](#tab/azure-powershell) -- The PowerShell commands are run locally using PowerShell 7 or later and the Azure PowerShell modules. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- The PowerShell commands are run locally using PowerShell 7 or later and the Azure PowerShell modules. For more information, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
- If you don't already have a tool for `REST` operations, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's an open-source command-line tool that simplifies invoking the Azure Resource Manager API. - After the **ARMClient** is installed, you can display usage information from a PowerShell command prompt by typing: `armclient.exe`. Or, go to the [ARMClient wiki](https://github.com/projectkudu/ARMClient/wiki).
azure-resource-manager Create Storage Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-storage-customer-managed-key.md
This article describes how to create an Azure Managed Application that deploys a
- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
- Be familiar with how to [create](publish-service-catalog-app.md) and [deploy](deploy-service-catalog-quickstart.md) a service catalog definition. ## Managed identities
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
In this quickstart, you use the managed application definition that you created
- A managed application definition created with [publish an application definition](publish-service-catalog-app.md) or [publish a definition with bring your own storage](publish-service-catalog-bring-your-own-storage.md). - An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Visual Studio Code](https://code.visualstudio.com/).-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Create service catalog managed application
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
To complete this quickstart, you need the following items:
- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Create the ARM template
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
To complete this quickstart, you need the following items:
- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Create the ARM template
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
Learn how to use Azure PowerShell with [Azure Resource Manager](overview.md) to
## Prerequisites
-* Azure PowerShell. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+* Azure PowerShell. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
* After installing, sign in for the first time. For more information, see [Sign in](/powershell/azure/install-az-ps#sign-in).
azure-resource-manager Tag Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-powershell.md
This article describes how to use Azure PowerShell to tag resources, resource gr
## Apply tags
-Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) version 3.6.1 or later.
+Azure PowerShell offers two commands to apply tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You need to have the `Az.Resources` module 1.12.0 version or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-azure-powershell) version 3.6.1 or later.
The `New-AzTag` replaces all tags on the resource, resource group, or subscription. When you call the command, pass the resource ID of the entity you want to tag.
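The replace-versus-merge distinction between the two cmdlets can be modeled in a few lines. This is a plain-Python sketch of the semantics described above, not a call into Azure; the `Update-AzTag` behavior shown corresponds to its `-Operation Merge` mode:

```python
# Sketch of tag-update semantics: New-AzTag replaces the entire tag set,
# while Update-AzTag -Operation Merge adds new keys and overwrites
# matching ones. Modeled in plain Python for illustration only.

def new_az_tag(existing: dict, new: dict) -> dict:
    """Replace semantics: the new tag set wins outright."""
    return dict(new)

def update_az_tag_merge(existing: dict, new: dict) -> dict:
    """Merge semantics: new keys added, matching keys overwritten."""
    return {**existing, **new}

tags = {"Dept": "Finance", "Status": "Normal"}
print(new_az_tag(tags, {"Team": "Web"}))           # {'Team': 'Web'}
print(update_az_tag_merge(tags, {"Status": "Green"}))  # Dept kept, Status updated
```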
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
To install the module, use:
Install-Module -Name Az -Force ```
-For more information about installing modules, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+For more information about installing modules, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Install Azure CLI module
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 05/03/2023 Last updated : 05/11/2023
The following JSON is an example. For more information, see the latest [template
"storageAccountName": "myStorageAccount", "storageAccountKey": "myKey" },
- "azPowerShellVersion": "8.3", // or "azCliVersion": "2.40.0",
+ "azPowerShellVersion": "9.7", // or "azCliVersion": "2.47.0",
"arguments": "-name \\\"John Dole\\\"", "environmentVariables": [ {
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState : Succeeded Identity : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami ScriptKind : AzurePowerShell
-AzPowerShellVersion : 8.3
-StartTime : 6/18/2022 7:46:45 PM
-EndTime : 6/18/2022 7:49:45 PM
-ExpirationDate : 6/19/2022 7:49:45 PM
+AzPowerShellVersion : 9.7
+StartTime : 5/11/2023 7:46:45 PM
+EndTime : 5/11/2023 7:49:45 PM
+ExpirationDate : 5/12/2023 7:49:45 PM
CleanupPreference : OnSuccess StorageAccountId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
The list command output is similar to:
[ { "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "8.3",
+ "azPowerShellVersion": "9.7",
"cleanupPreference": "OnSuccess", "containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20220625T025902Z",
+ "forceUpdateTag": "20230511T025902Z",
"id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput", "identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF",
The list command output is similar to:
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2022-06-25T03:00:16.796923+00:00",
+ "endTime": "2023-05-11T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2022-06-26T03:00:16.796923+00:00",
- "startTime": "2022-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2023-05-12T03:00:16.796923+00:00",
+ "startTime": "2023-05-11T02:59:07.595140+00:00",
"storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts" }, "storageAccountSettings": null, "supportingScriptUris": null, "systemData": {
- "createdAt": "2022-06-25T02:59:04.750195+00:00",
+ "createdAt": "2023-05-11T02:59:04.750195+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2023-05-11T02:59:04.750195+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" },
The output is similar to:
"systemData": { "createdBy": "someone@contoso.com", "createdByType": "User",
- "createdAt": "2022-06-25T02:59:04.7501955Z",
+ "createdAt": "2023-05-11T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2023-05-11T02:59:04.7501955Z"
}, "properties": { "provisioningState": "Succeeded", "forceUpdateTag": "20220625T025902Z",
- "azPowerShellVersion": "8.3",
+ "azPowerShellVersion": "9.7",
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "arguments": "-name \\\"John Dole\\\"", "retentionInterval": "P1D",
The output is similar to:
"status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts", "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2022-06-25T02:59:07.5951401Z",
- "endTime": "2022-06-25T03:00:16.7969234Z",
- "expirationTime": "2022-06-26T03:00:16.7969234Z"
+ "startTime": "2023-05-11T02:59:07.5951401Z",
+ "endTime": "2023-05-11T03:00:16.7969234Z",
+ "expirationTime": "2023-05-12T03:00:16.7969234Z"
}, "outputs": { "text": "Hello John Dole"
azure-resource-manager Deployment Tutorial Local Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-local-template.md
Let's start by making sure you have the tools you need to deploy templates.
You need either Azure PowerShell or Azure CLI to deploy the template. For the installation instructions, see: -- [Install Azure PowerShell](/powershell/azure/install-az-ps)
+- [Install Azure PowerShell](/powershell/azure/install-azure-powershell)
- [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows) - [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux) - [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
This quickstart shows you how to package an Azure Resource Manager template (ARM
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). > [!NOTE]
-> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-azure-powershell). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create template
azure-resource-manager Template Specs Create Linked https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-create-linked.md
Learn how to create a [template spec](template-specs.md) with a main template an
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). > [!NOTE]
-> To use template specs with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template specs with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-azure-powershell). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create linked templates
azure-resource-manager Template Specs Create Portal Forms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-create-portal-forms.md
The following screenshot shows a form opened in the Azure portal.
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-For Azure PowerShell, use [version 6.0.0 or later](/powershell/azure/install-az-ps). For Azure CLI, use [version 2.24.0 or later](/cli/azure/install-azure-cli).
+For Azure PowerShell, use [version 6.0.0 or later](/powershell/azure/install-azure-powershell). For Azure CLI, use [version 2.24.0 or later](/cli/azure/install-azure-cli).
## Create template
azure-resource-manager Template Specs Deploy Linked Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-deploy-linked-template.md
Learn how to deploy an existing [template spec](template-specs.md) by using a [l
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
> [!NOTE]
-> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-azure-powershell). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create a template spec
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
A template spec is a resource type for storing an Azure Resource Manager templat
To deploy the template spec, you use standard Azure tools like PowerShell, Azure CLI, Azure portal, REST, and other supported SDKs and clients. You use the same commands as you would for the template.
> [!NOTE]
-> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-azure-powershell). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
When designing your deployment, always consider the lifecycle of the resources, and group the resources that share a similar lifecycle into a single template spec. For example, your deployments include multiple instances of Azure Cosmos DB, with each instance containing its own databases and containers. Given that the databases and containers don't change much, you want to create one template spec to include a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
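The grouping described above can be sketched as an ARM template fragment that combines a `condition` with a `copy` loop. The parameter names, count, and API version below are illustrative assumptions, not values taken from this article:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "deployCosmos": { "type": "bool", "defaultValue": true },
    "cosmosCount": { "type": "int", "defaultValue": 2 }
  },
  "resources": [
    {
      "condition": "[parameters('deployCosmos')]",
      "copy": {
        "name": "cosmosCopy",
        "count": "[parameters('cosmosCount')]"
      },
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2021-10-15",
      "name": "[format('cosmos-{0}', copyIndex())]",
      "location": "[resourceGroup().location]",
      "properties": {
        "databaseAccountOfferType": "Standard",
        "locations": [ { "locationName": "[resourceGroup().location]" } ]
      }
    }
  ]
}
```

Packaged as a template spec, this one artifact deploys as many conditioned Cosmos DB instances as the `cosmosCount` parameter requests.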
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
Templates are JavaScript Object Notation (JSON) files. To create templates, you
You also need either Azure PowerShell or Azure Command-Line Interface (CLI) to deploy the template. If you use Azure CLI, you need to have version 2.37.0 or later. For the installation instructions, see:
-- [Install Azure PowerShell](/powershell/azure/install-az-ps)
+- [Install Azure PowerShell](/powershell/azure/install-azure-powershell)
- [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows)
- [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux)
- [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
azure-resource-manager Update Visual Studio Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/update-visual-studio-deployment-script.md
Visual Studio 16.4 supports using the Az PowerShell module in the template deployment script. However, Visual Studio doesn't automatically install that module. To use the Az module, you need to take four steps:
1. [Uninstall AzureRM module](/powershell/azure/uninstall-az-ps#uninstall-the-azurerm-module)
-1. [Install Az module](/powershell/azure/install-az-ps)
+1. [Install Az module](/powershell/azure/install-azure-powershell)
1. Update Visual Studio to 16.4
1. Update the deployment script in your project.
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
To complete this quickstart, you need the following items:
- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
-- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Create a template with errors
azure-resource-manager Quickstart Troubleshoot Bicep Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md
To complete this quickstart, you need the following items:
- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- [Visual Studio Code](https://code.visualstudio.com) with the latest [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
-- The latest version of either [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- The latest version of either [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
## Create a Bicep file with errors
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
An Azure account with an active subscription. [Create one for free](https://azur
# [PowerShell](#tab/PowerShell)
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-azure-powershell).
# [CLI](#tab/CLI)
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
The article provides six best practices of how to use Azure Video Indexer at sca
## When uploading videos consider using a URL over byte array
-Azure Video Indexer does give you the choice to upload videos from URL or directly by sending the file as a byte array, the latter comes with some constraints. For more information, see [uploading considerations and limitations)](upload-index-videos.md#uploading-considerations-and-limitations)
+Azure Video Indexer gives you the choice to upload videos from a URL or directly by sending the file as a byte array; the latter comes with some constraints. For more information, see [uploading considerations and limitations](upload-index-videos.md).
First, it has file size limitations. The byte array file is limited to 2 GB, compared to the 30-GB upload size limit when using a URL.
When you upload videos using URL, you just need to provide a path to the locatio
> [!TIP]
> Use the `videoUrl` optional parameter of the upload video API.
-To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md#code-sample). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure Video Indexer recommends using *readonly* SAS URLs.
+To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure Video Indexer recommends using *readonly* SAS URLs.
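The size limits above determine which upload path is even viable for a given file. As a minimal sketch, the helper below encodes that decision; the function name and return values are illustrative, not part of the Video Indexer API:

```python
# Illustrative helper based on the limits described above:
# byte-array uploads are capped at 2 GB, URL uploads at 30 GB.
GB = 1024 ** 3
BYTE_ARRAY_LIMIT = 2 * GB
URL_LIMIT = 30 * GB


def choose_upload_method(file_size_bytes: int) -> str:
    """Return which upload method can handle a file of the given size."""
    if file_size_bytes > URL_LIMIT:
        raise ValueError("File exceeds the 30-GB URL upload limit")
    if file_size_bytes > BYTE_ARRAY_LIMIT:
        return "url"  # too large for a byte array; upload from a URL
    return "url-or-byte-array"  # either works; URL is still preferred


print(choose_upload_method(5 * GB))  # → url
```

A 5-GB file, for example, exceeds the byte-array cap but is well within the URL limit, so it must be uploaded from a URL (such as a SAS URL on a storage account).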
## Automatic Scaling of Media Reserved Units
Azure Video Indexer is built to deal with indexing at scale, and when you want t
## Use callback URL
-We recommend that instead of polling the status of your request constantly from the second you sent the upload request, you can add a [callback URL](upload-index-videos.md#callbackurl), and wait for Azure Video Indexer to update you. As soon as there is any status change in your upload request, you get a POST notification to the URL you specified.
+We recommend that, instead of constantly polling the status of your request from the second you send the upload request, you add a callback URL and wait for Azure Video Indexer to update you. As soon as there is any status change in your upload request, you get a POST notification to the URL you specified.
You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in the [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
For callback URL you can also use Azure Functions, a serverless event-driven pla
When making decisions related to using Azure Video Indexer at scale, look at how to get the most out of it with the right parameters for your needs. Think about your use case: by defining different parameters, you can save money and make the indexing process for your videos faster.
-Before uploading and indexing your video read this short [documentation](upload-index-videos.md), check the [indexingPreset](upload-index-videos.md#indexingpreset) and [streamingPreset](upload-index-videos.md#streamingpreset) to get a better idea of what your options are.
+Before uploading and indexing your video, read the [documentation](upload-index-videos.md) to get a better idea of what your options are.
For example, don't set the preset to streaming if you don't plan to watch the video, and don't index video insights if you only need audio insights.
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
Title: Upload and index videos with Azure Video Indexer
-description: Learn two methods for uploading and indexing videos by using Azure Video Indexer.
+ Title: Upload and index videos with Azure Video Indexer using the Video Indexer website
+description: Learn how to upload videos by using Azure Video Indexer.
Previously updated : 03/20/2023- Last updated : 05/10/2023
-# Upload and index your videos
+# Upload media files using the Video Indexer website
-This article shows how to upload and index videos by using the Azure Video Indexer website (see [get started with the website](video-indexer-get-started.md)) and the Upload Video API (see [get started with API](video-indexer-use-apis.md)).
+This article shows how to upload and index media files (audio or video) using the [Azure Video Indexer website](https://aka.ms/vi-portal-link).
-After you upload and index a video, you can use [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md).
+You can upload media files from your file system or from a URL. You can also configure basic or advanced settings for indexing, such as privacy, streaming quality, language, presets, people and brands models, custom logos and metadata.
-## Supported file formats
+## Prerequisites
-For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
+- To upload media files, you need an active Azure Video Indexer account. If you don't have one, [sign up](https://aka.ms/vi-portal-link) for a free trial account, or create an [unlimited paid account](https://aka.ms/avam-arm-docs).
+- To upload media files, you need at least contributor-level permission for your account. To manage permissions, see [Manage users and groups](restricted-viewer-role.md).
+- To upload media files from a URL, you need a publicly accessible URL for the media file. For example, if the file is hosted in an Azure storage account, you need to [generate a SAS token URL](https://learn.microsoft.com/azure/applied-ai-services/form-recognizer/create-sas-tokens?view=form-recog-3.0.0) and paste it in the input box. You can't use URLs from streaming services such as YouTube.
-## Storage of video files
+## Quick upload
-When you use Azure Video Indexer, video files are stored in Azure Storage through Media Services. The limits are 30 GB in size and 4 hours in length.
-
-You can always delete your video and audio files, along with any metadata and insights that Azure Video Indexer has extracted from them. After you delete a file from Azure Video Indexer, the file and its metadata and insights are permanently removed from Azure Video Indexer. However, if you've implemented your own backup solution in Azure Storage, the file remains in Azure Storage.
-
-The persistence of a video is identical whether you upload by using the Azure Video Indexer website or by using the Upload Video API.
-
-## Upload and index a video by using the website
-
-Sign in on the [Azure Video Indexer](https://www.videoindexer.ai/) website, and then select **Upload**.
+Follow the steps below to upload and index a media file by using the quick upload option.
> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Screenshot that shows the Upload button.":::
-
-After your video is uploaded, Azure Video Indexer starts indexing and analyzing the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Screenshot that shows the progress of an upload.":::
-
-After Azure Video Indexer is done analyzing, you get an email with a link to your video. The email also includes a short description of what was found in your video (for example: people, topics, optical character recognition).
-
-## Upload and index a video by using the API
-
-You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
-
-You can also view the following video.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10CsK]
-
-> [!NOTE]
-> Before you proceed, make sure to review [API recommendations](video-indexer-use-apis.md#recommendations).
-
-### Configurations and parameters
-
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
-
-#### externalID
-
-Use this parameter to specify an ID that will be associated with the video. The ID can be applied to integration into an external video content management (VCM) system. The videos that are in the Azure Video Indexer website can be searched via the specified external ID.
-
-#### callbackUrl
-
-Use this parameter to specify a callback URL.
--
-Azure Video Indexer returns any existing parameters provided in the original URL. The URL must be encoded.
-
-#### indexingPreset
-
-Use this parameter to define an AI bundle that you want to apply on your audio or video file. This parameter is used to configure the indexing process. You can specify the following values:
-- `AudioOnly`: Index and extract insights by using audio only (ignoring video).
-- `VideoOnly`: Index and extract insights by using video only (ignoring audio).
-- `Default`: Index and extract insights by using both audio and video.
-- `DefaultWithNoiseReduction`: Index and extract insights from both audio and video, while applying noise reduction algorithms on the audio stream.
-
- The `DefaultWithNoiseReduction` value is now mapped to a default preset (deprecated).
-- `BasicAudio`: Index and extract insights by using audio only (ignoring video). Include only basic audio features (transcription, translation, formatting of output captions and subtitles).
-- `AdvancedAudio`: Index and extract insights by using audio only (ignoring video). Include advanced audio features (such as audio event detection) in addition to the standard audio analysis.
-- `AdvancedVideo`: Index and extract insights by using video only (ignoring audio). Include advanced video features (such as observed people tracing) in addition to the standard video analysis.
-- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
-
-> [!NOTE]
-> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
-
-Azure Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
-
-Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-
-#### priority
-
-Azure Video Indexer indexes videos according to their priority. Use the `priority` parameter to specify the index priority. The following values are valid: `Low`, `Normal` (default), and `High`.
-
-This parameter is supported only for paid accounts.
-
-#### streamingPreset
-
-After your video is uploaded, Azure Video Indexer optionally encodes the video. It then proceeds to indexing and analyzing the video. When Azure Video Indexer is done analyzing, you get a notification with the video ID.
-
-When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
-
-After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state.
-
-For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept).
-
-If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
-
-#### videoUrl
-
-This parameter specifies the URL of the video or audio file to be indexed. If the `videoUrl` parameter is not specified, Azure Video Indexer expects you to pass the file as multipart/form body content.
-
-### Code sample
-
-The following C# code snippets demonstrate the usage of all the Azure Video Indexer APIs together.
--
-### [Azure Resource Manager account](#tab/with-arm-account-account/)
-
-After you copy this C# project into your development platform, you need to take the following steps:
-
-1. Go to Program.cs and populate:
-
- - ```SubscriptionId``` with your subscription ID.
- - ```ResourceGroup``` with your resource group.
- - ```AccountName``` with your account name.
- - ```VideoUrl``` with your video URL.
-1. Make sure that .NET 6.0 is installed. If it isn't, [install it](https://dotnet.microsoft.com/download/dotnet/6.0).
-1. Make sure that the Azure CLI is installed. If it isn't, [install it](/cli/azure/install-azure-cli).
-1. Open your terminal and go to the *VideoIndexerArm* folder.
-1. Log in to Azure: ```az login --use-device```.
-1. Build the project: ```dotnet build```.
-1. Run the project: ```dotnet run```.
-
-```csharp
-<Project Sdk="Microsoft.NET.Sdk">
-
- <PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>net5.0</TargetFramework>
- </PropertyGroup>
-
- <ItemGroup>
- <PackageReference Include="Azure.Identity" Version="1.4.1" />
- <PackageReference Include="Microsoft.Identity.Client" Version="4.36.2" />
- </ItemGroup>
-
-</Project>
-```
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text.Json;
-using System.Text.Json.Serialization;
-using System.Threading.Tasks;
-using System.Web;
-using Azure.Core;
-using Azure.Identity;
--
-namespace VideoIndexerArm
-{
- public class Program
- {
- private const string AzureResourceManager = "https://management.azure.com";
- private const string SubscriptionId = ""; // Your Azure subscription
- private const string ResourceGroup = ""; // Your resource group
- private const string AccountName = ""; // Your account name
- private const string VideoUrl = ""; // The video URL you want to index
-
- public static async Task Main(string[] args)
- {
- // Build Azure Video Indexer resource provider client that has access token through Azure Resource Manager
- var videoIndexerResourceProviderClient = await VideoIndexerResourceProviderClient.BuildVideoIndexerResourceProviderClient();
-
- // Get account details
- var account = await videoIndexerResourceProviderClient.GetAccount();
- var accountId = account.Properties.Id;
- var accountLocation = account.Location;
- Console.WriteLine($"account id: {accountId}");
- Console.WriteLine($"account location: {accountLocation}");
-
- // Get account-level access token for Azure Video Indexer
- var accessTokenRequest = new AccessTokenRequest
- {
- PermissionType = AccessTokenPermission.Contributor,
- Scope = ArmAccessTokenScope.Account
- };
-
- var accessToken = await videoIndexerResourceProviderClient.GetAccessToken(accessTokenRequest);
- var apiUrl = "https://api.videoindexer.ai";
- System.Net.ServicePointManager.SecurityProtocol = System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
--
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
-
- // Upload a video
- MultipartFormDataContent content = null;
- Console.WriteLine("Uploading...");
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- //content = new MultipartFormDataContent();
- //FileStream video = File.OpenRead(@"c:\videos\democratic3.mp4");
- //byte[] buffer = new byte[video.Length];
- //video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer), "MyVideo", "MyVideo");
-
- var queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"name", "video sample"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", VideoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonSerializer.Deserialize<Video>(uploadResult).Id;
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
+> :::image type="content" source="./media/upload-index-videos/file-system-basic.png" alt-text="Screenshot that shows file system basic.":::
- string processingState = JsonSerializer.Deserialize<Video>(videoGetIndexResult).State;
+1. Sign in to the [Video Indexer website](https://aka.ms/vi-portal-link).
+1. Select **Upload**.
+1. Select the file source. You can upload up to 10 files at a time.
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
- }
-
- static string CreateQueryString(IDictionary<string, string> parameters)
- {
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
-
- return queryParameters.ToString();
- }
-
- public class VideoIndexerResourceProviderClient
- {
- private readonly string armAaccessToken;
-
- async public static Task<VideoIndexerResourceProviderClient> BuildVideoIndexerResourceProviderClient()
- {
- var tokenRequestContext = new TokenRequestContext(new[] { $"{AzureResourceManager}/.default" });
- var tokenRequestResult = await new DefaultAzureCredential().GetTokenAsync(tokenRequestContext);
- return new VideoIndexerResourceProviderClient(tokenRequestResult.Token);
- }
- public VideoIndexerResourceProviderClient(string armAaccessToken)
- {
- this.armAaccessToken = armAaccessToken;
- }
-
- public async Task<string> GetAccessToken(AccessTokenRequest accessTokenRequest)
- {
- Console.WriteLine($"Getting access token. {JsonSerializer.Serialize(accessTokenRequest)}");
- // Set the generateAccessToken (from video indexer) HTTP request content
- var jsonRequestBody = JsonSerializer.Serialize(accessTokenRequest);
- var httpContent = new StringContent(jsonRequestBody, System.Text.Encoding.UTF8, "application/json");
-
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/generateAccessToken?api-version=2021-08-16-preview";
-
- // Generate access token from video indexer
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.PostAsync(requestUri, httpContent);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<GenerateAccessTokenResponse>(jsonResponseBody).AccessToken;
- }
-
- public async Task<Account> GetAccount()
- {
-
- Console.WriteLine($"Getting account.");
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/?api-version=2021-08-16-preview";
-
- // Get account
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.GetAsync(requestUri);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<Account>(jsonResponseBody);
- }
- }
-
- public class AccessTokenRequest
- {
- [JsonPropertyName("permissionType")]
- public AccessTokenPermission PermissionType { get; set; }
-
- [JsonPropertyName("scope")]
- public ArmAccessTokenScope Scope { get; set; }
-
- [JsonPropertyName("projectId")]
- public string ProjectId { get; set; }
-
- [JsonPropertyName("videoId")]
- public string VideoId { get; set; }
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum AccessTokenPermission
- {
- Reader,
- Contributor,
- MyAccessAdministrator,
- Owner,
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum ArmAccessTokenScope
- {
- Account,
- Project,
- Video
- }
-
- public class GenerateAccessTokenResponse
- {
- [JsonPropertyName("accessToken")]
- public string AccessToken { get; set; }
-
- }
- public class AccountProperties
- {
- [JsonPropertyName("accountId")]
- public string Id { get; set; }
- }
-
- public class Account
- {
- [JsonPropertyName("properties")]
- public AccountProperties Properties { get; set; }
-
- [JsonPropertyName("location")]
- public string Location { get; set; }
-
- }
-
- public class Video
- {
- [JsonPropertyName("id")]
- public string Id { get; set; }
-
- [JsonPropertyName("state")]
- public string State { get; set; }
- }
- }
-}
-
-```
-
-### [Classic account](#tab/With-classic-account/)
-
-After you copy the following code into your development platform, you'll need to provide two parameters:
-
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Azure Video Indexer account.
-
- To get your API key:
-
- 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
- 1. Sign in.
- 1. Go to **Products** > **Authorization** > **Authorization subscription**.
- 1. Copy the **Primary key** value.
-
-* Video URL (`videoUrl`): A URL of the video or audio file to be indexed. Here are the requirements:
-
- - The URL must point at a media file. (HTML pages are not supported.)
- - The file can be protected by an access token that's provided as part of the URI. The endpoint that serves the file must be secured with TLS 1.2 or later.
- - The URL must be encoded.
-
-The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
-```csharp
-public async Task Sample()
-{
- var apiUrl = "https://api.videoindexer.ai";
- var apiKey = "..."; // Replace with API key taken from https://aka.ms/viapi
-
- System.Net.ServicePointManager.SecurityProtocol =
- System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
-
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
-
- // Obtain account information and access token
- string queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"generateAccessTokens", "true"},
- {"allowEdit", "true"},
- });
- HttpResponseMessage result = await client.GetAsync($"{apiUrl}/auth/trial/Accounts?{queryParams}");
- var json = await result.Content.ReadAsStringAsync();
- var accounts = JsonConvert.DeserializeObject<AccountContractSlim[]>(json);
-
- // Take the relevant account. Here we simply take the first.
- // You can also get the account via accounts.First(account => account.Id == <GUID>);
- var accountInfo = accounts.First();
-
- // We'll use the access token from here on, so there's no need for the APIM key
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Upload a video
- MultipartFormDataContent content = null;
- Console.WriteLine("Uploading...");
+ - To upload from your file system, select **Browse files** and choose the files you want to upload.
+ - To upload from a URL, select **Enter URL**, paste the source file URL, and select **Add**.
- // Get the video from URL
- var videoUrl = "VIDEO_URL"; // Replace with the video URL
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- //content = new MultipartFormDataContent();
- //FileStream video = File.OpenRead(@"c:\videos\democratic3.mp4");
- //byte[] buffer = new byte[video.Length];
- //video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer), "MyVideo", "MyVideo");
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"name", "video_name"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", videoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonConvert.DeserializeObject<dynamic>(uploadResult)["id"];
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
-
- string processingState = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult)["state"];
-
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Generate video access token (used for get widget calls)
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
- var videoTokenRequestResult = await client.GetAsync($"{apiUrl}/auth/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/AccessToken?allowEdit=true");
- var videoAccessToken = (await videoTokenRequestResult.Content.ReadAsStringAsync()).Replace("\"", "");
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
-}
-
-private string CreateQueryString(IDictionary<string, string> parameters)
-{
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
+ Make sure the URL is valid and the file is accessible.
+
+ > [!NOTE]
+ > If the file name is marked in red, it means the file has an issue and can't be uploaded.
+1. Configure the basic settings for indexing or use the default configuration. You need to specify the following settings for each file:
+
+ - **Privacy**: Choose whether the video URL will be publicly available or private after indexing.
+ - **Streaming quality**: Choose the streaming quality for the video. You can select **No streaming**, **Single bitrate**, or **Adaptive bitrate**. For more information, see [the streaming options](indexing-configuration-guide.md#streaming-quality-options).
+ - **Video source language**: Choose the spoken language of the video to ensure high quality transcript and insights extraction. If you don't know the language or there's more than one spoken language, select **Auto-detect single language** or **Auto-detect multi language**. For more information, see [Language detection](multi-language-identification-transcription.md).
+1. If this is the first time you upload a media file, you need to check the consent checkbox to agree to the terms and conditions.
+1. Select **Upload+index**.
+1. Review the summary page that shows the indexing settings and the upload progress.
+1. After the indexing is done, you can view the insights by selecting the video.
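The basic settings chosen in the portal map onto the same upload-and-index REST call that the API tabs use. As an illustrative sketch only, the endpoint shape below follows the upload request shown in the classic-account sample elsewhere in this article; the account ID, access token, and the `privacy` and `language` parameter values are assumptions you should verify against the API reference.

```python
import urllib.parse

def build_upload_url(api_url, location, account_id, access_token, video_url,
                     name, privacy="Private", language="auto"):
    """Build an upload-and-index request URL carrying the basic settings.

    The endpoint shape mirrors the classic-account C# sample in this article;
    parameter names/values here are illustrative assumptions.
    """
    params = urllib.parse.urlencode({
        "accessToken": access_token,
        "name": name,
        "privacy": privacy,      # assumed values: Private or Public
        "language": language,    # assumed: source language code, or auto-detect
        "videoUrl": video_url,   # urlencode percent-encodes this for you
    })
    return f"{api_url}/{location}/Accounts/{account_id}/Videos?{params}"

# Example (placeholder account ID and token):
url = build_upload_url("https://api.videoindexer.ai", "trial",
                       "<account-id>", "<access-token>",
                       "https://example.com/video.mp4", "my-video")
```

Note that `urllib.parse.urlencode` handles the URL-encoding requirement for the `videoUrl` value.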
- return queryParameters.ToString();
-}
+## Advanced upload
-public class AccountContractSlim
-{
- public Guid Id { get; set; }
- public string Name { get; set; }
- public string Location { get; set; }
- public string AccountType { get; set; }
- public string Url { get; set; }
- public string AccessToken { get; set; }
-}
-```
+Follow the steps below to upload and index a media file by using the advanced upload option.
-### Common errors
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/upload-index-videos/advanced-settings.png" alt-text="Screenshot that shows advanced settings.":::
-The upload operation might return the following status codes:
+1. Sign in to the [Video Indexer website](https://aka.ms/vi-portal-link).
+1. Select **Upload**.
+1. Select the file source. You can upload up to 10 files at a time.
-|Status code|ErrorType (in response body)|Description|
-||||
-|409|VIDEO_INDEXING_IN_PROGRESS|The same video is already being processed in this account.|
-|400|VIDEO_ALREADY_FAILED|The same video failed to process in this account less than 2 hours ago. API clients should wait at least 2 hours before reuploading a video.|
-|429||Trial accounts are allowed 5 uploads per minute. Paid accounts are allowed 50 uploads per minute.|
+ - To upload from your file system, select **Browse files** and choose the files you want to upload. To add more files, select **Add file**. To remove a file, select **Remove** on the file name.
+ - To upload from a URL, select **Enter URL**, paste the source file URL, and select **Add**.
+
+ Make sure the URL is valid and the file is accessible.
-## Uploading considerations and limitations
+ > [!NOTE]
+ > If the file name is marked in red, it means the file has an issue and can't be uploaded. You can add URLs from different storage accounts for each file.
+1. Configure the basic settings. For more information, see the [quick upload](#quick-upload) section above.
+1. Configure the general settings for indexing. You can rename a file by editing its file name. The updated name is reflected as the file name in Video Indexer.
+1. Configure the advanced settings for indexing. The selection of the following settings is for all files in the batch:
-- The name of a video must be no more than 80 characters.
-- When you're uploading a video based on the URL (preferred), the endpoint must be secured with TLS 1.2 or later.
-- The upload size with the URL option is limited to 30 GB.
-- The length of the request URL is limited to 6,144 characters. The length of the query string URL is limited to 4,096 characters.
-- The upload size with the byte array option is limited to 2 GB.
-- The byte array option times out after 30 minutes.
-- The URL provided in the `videoURL` parameter must be encoded.
-- Indexing Media Services assets has the same limitation as indexing from a URL.
-- Azure Video Indexer has a duration limit of 4 hours for a single file.
-- The URL must be accessible (for example, a public URL).
+ - **Indexing preset**: [Choose the preset](indexing-configuration-guide.md#indexing-options) that fits your scenario. You can also exclude sensitive AI by selecting the checkbox.
+ - **People model**: If you're using a customized people model, choose it from the dropdown list.
+ - **Brand categories**: If you're using a customized brand model, choose it from the dropdown list.
+ - **File information**: If you want to add metadata, enter the free text in the input box. The metadata is shared between all files in the same upload batch. When uploading a single file, you can also add a description.
+1. Select **Upload+index**.
+1. Review the summary page that shows the indexing settings and the upload progress.
+1. After the indexing is done, you can view the insights by selecting the video.
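Indexing is asynchronous, so code that uploads through the API typically polls the video's index state until it leaves `Uploaded` and `Processing`, as the classic-account sample removed above did. Here is a minimal, hypothetical sketch of that wait loop; the state values come from that sample, and `get_index` is a placeholder for your actual API call.

```python
import time

def is_index_done(state):
    # The job is finished once the state leaves the two in-progress values
    # used in the classic-account sample ("Uploaded", "Processing").
    return state not in ("Uploaded", "Processing")

def wait_for_index(get_index, poll_seconds=10):
    """Poll until indexing finishes.

    get_index: any callable returning the parsed index JSON for the video
    (a dict with at least a "state" key). This is a placeholder for the
    real Get Video Index API call.
    """
    while True:
        index = get_index()
        if is_index_done(index["state"]):
            return index  # typically "Processed" on success
        time.sleep(poll_seconds)
```

A caller would pass a small function that fetches `.../Videos/{videoId}/Index` and parses the JSON response.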
- If it's a private URL, the access token must be provided in the request.
-- The URL must point to a valid media file and not to a webpage, such as a link to the `www.youtube.com` page.
-- In a paid account, you can upload up to 50 movies per minute. In a trial account, you can upload up to 5 movies per minute.
+## Troubleshoot upload issues
-> [!Tip]
-> We recommend that you use .NET Framework version 4.6.2. or later, because older .NET Framework versions don't default to TLS 1.2.
->
-> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
->
-> `System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;`
+If you encounter any issues while uploading media files, try the following solutions:
-## Firewall
+- If the **Upload** button is disabled, hover over the button and check for the indication of the problem. Try to refresh the page.
-For information about a storage account that's behind a firewall, see the [FAQ](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
+ If you're using a trial account, check if you have reached the account quota for daily count, daily duration, or total duration. To view your quota and usage, see the Account settings.
+- If the upload from URL failed, make sure that the URL is valid and accessible by Video Indexer. Make sure that the URL isn't from a streaming service such as YouTube. Make sure that the media file isn't encrypted, protected by DRM, corrupted, or damaged. Make sure that the media file format is supported by Video Indexer. For a list of supported formats, see [supported media formats](https://learn.microsoft.com/azure/azure-video-indexer/upload-index-videos?tabs=with-arm-account-account#supported-file-formats).
+- If the upload from file system failed, make sure that the file size isn't larger than 2 GB. Make sure that you have a stable internet connection.
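The URL and file-size checks above can be run before attempting an upload. The following is a rough pre-flight sketch, not an official validator: the streaming-host list is illustrative (only a couple of examples), and the 2-GB figure is the file-system upload limit mentioned above.

```python
import urllib.parse

# Illustrative examples only; extend for your environment.
STREAMING_HOSTS = {"www.youtube.com", "youtube.com", "youtu.be"}
MAX_FILE_BYTES = 2 * 1024 ** 3  # 2-GB file-system upload limit noted above

def precheck_source(url=None, file_size=None):
    """Return a list of problems found before attempting the upload."""
    problems = []
    if url is not None:
        parsed = urllib.parse.urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            problems.append("URL is not a valid http(s) URL")
        elif parsed.netloc.lower() in STREAMING_HOSTS:
            problems.append("URL points at a streaming service, not a media file")
    if file_size is not None and file_size > MAX_FILE_BYTES:
        problems.append("file is larger than 2 GB")
    return problems
```

An empty result means none of these particular checks failed; the file can still be rejected for other reasons (DRM, unsupported format, corruption).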
## Next steps
-[Examine the Azure Video Indexer output produced by an API](video-indexer-output-json-v2.md)
+[Supported media formats](https://learn.microsoft.com/azure/azure-video-indexer/upload-index-videos?tabs=with-arm-account-account#supported-file-formats)
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
This article explains how to:
Set up PowerShell as follows:
-1. [Download the latest version of Azure PowerShell](/powershell/azure/install-az-ps).
+1. [Download the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!NOTE] > The minimum PowerShell version required for backup of Azure file shares is Az.RecoveryServices 2.6.0. Using the latest version, or at least the minimum version, helps you avoid issues with existing scripts. Install the minimum version by using the following PowerShell command:
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Review the **Az.RecoveryServices** [cmdlet reference](/powershell/module/az.reco
Set up PowerShell as follows:
-1. [Download the latest version of Az PowerShell](/powershell/azure/install-az-ps). The minimum version required is 1.5.0.
+1. [Download the latest version of Az PowerShell](/powershell/azure/install-azure-powershell). The minimum version required is 1.5.0.
2. Find the Azure Backup PowerShell cmdlets with this command:
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md
Review the **Az.RecoveryServices** [cmdlet reference](/powershell/module/az.reco
To begin:
-1. [Download the latest version of PowerShell](/powershell/azure/install-az-ps)
+1. [Download the latest version of PowerShell](/powershell/azure/install-azure-powershell)
2. Find the Azure Backup PowerShell cmdlets available by typing the following command:
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
This article shows you how to use PowerShell to set up Azure Backup on Windows S
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To get started, [install the latest PowerShell release](/powershell/azure/install-az-ps).
+To get started, [install the latest PowerShell release](/powershell/azure/install-azure-powershell).
## Create a Recovery Services vault
backup Backup Dpm Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-dpm-automation.md
Sample DPM scripts: Get-DPMSampleScript
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To begin, [download the latest Azure PowerShell](/powershell/azure/install-az-ps).
+To begin, [download the latest Azure PowerShell](/powershell/azure/install-azure-powershell).
The following setup and registration tasks can be automated with PowerShell:
backup Quick Backup Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-powershell.md
The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module is use
This quickstart enables backup on an existing Azure VM. If you need to create a VM, you can [create a VM with Azure PowerShell](/previous-versions/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-vm?toc=%2fpowershell%2fmodule%2ftoc.json).
-This quickstart requires the Azure PowerShell AZ module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+This quickstart requires the Azure PowerShell AZ module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
backup Backup Powershell Script Undelete File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-script-undelete-file-share.md
Restore-DeletedFileShare $sa.Context $FileShareName $DeletedShareVersion
### Prerequisites
-1. Install the latest Azure PowerShell Az modules from [this link](/powershell/azure/install-az-ps) before running the script.
+1. Install the latest Azure PowerShell Az modules from [this link](/powershell/azure/install-azure-powershell) before running the script.
2. Keep the following details handy as you'll need to pass them as values for different parameters of the script: * **-SubscriptionId** - ID of the subscription where the file share is present.
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
description: Learn how to create a shareable link to let a user connect to a tar
Previously updated : 11/16/2022 Last updated : 05/10/2023
-# Create a shareable link for Bastion - preview
+# Create a shareable link for Bastion
The Bastion **Shareable Link** feature lets users connect to a target resource (virtual machine or virtual machine scale set) using Azure Bastion without accessing the Azure portal. This article helps you use the Shareable Link feature to create a shareable link for an existing Azure Bastion deployment.
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Register Azure CDN as an app in your Azure Active Directory.
#### Azure PowerShell
-1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
+1. If needed, install [Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine.
2. In PowerShell, run the following command:
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
Title: Audio Content Creation - Speech service
-description: Audio Content Creation is an online tool that allows you to run text-to-speech synthesis without writing any code.
+description: Audio Content Creation is an online tool that allows you to run Text to speech synthesis without writing any code.
# Speech synthesis with the Audio Content Creation tool
-You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for text-to-speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
+You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for Text to speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
-Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune text-to-speech voices and design customized audio experiences.
+Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune Text to speech voices and design customized audio experiences.
-The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
+The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text to speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
-- No-code approach: You can use the Audio Content Creation tool for text-to-speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration.
+- No-code approach: You can use the Audio Content Creation tool for Text to speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration.
- Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot. You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
It takes a few moments to deploy your new Speech resource. After the deployment
## Use the tool
-The following diagram displays the process for fine-tuning the text-to-speech outputs.
+The following diagram displays the process for fine-tuning the Text to speech outputs.
:::image type="content" source="media/audio-content-creation/audio-content-creation-diagram.jpg" alt-text="Diagram of the sequence of steps for fine-tuning text-to-speech outputs.":::
After you've reviewed your audio output and are satisfied with your tuning and a
1. Select the file you want to download and **Download**. Now you're ready to use your custom tuned audio in your apps or products.
+
+## Configure BYOS and anonymous public read access for blobs
+
+If you lose access permission to your Bring Your Own Storage (BYOS), you won't be able to view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
+After configuring the BYOS permission, you need to configure anonymous public read access for the related containers and blobs. Otherwise, blob data isn't available for public access, and your lexicon file in the blob will be inaccessible. By default, a container's public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** to allow public access for the storage account, and then set the public access level of the container (named **acc-public-files**) to anonymous read access for blobs only. To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
+
## Add or remove Audio Content Creation users

If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they'll have access only to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
To get the resource ID using PowerShell, confirm that you have PowerShell versio
`Get-Module -ListAvailable Az`
- If nothing appears, or if that version of the Azure PowerShell module is earlier than 5.1.0, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-Az-ps) to upgrade.
+ If nothing appears, or if that version of the Azure PowerShell module is earlier than 5.1.0, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell) to upgrade.
Now run `Connect-AzAccount` to create a connection with Azure.
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
There are two parts to enable data loss prevention. First the property restrictO
# [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-az-ps) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
1. Display the current properties for Cognitive Services resource.
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
You can manage default network access rules for Cognitive Services resources thr
# [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-az-ps) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
1. Display the status of the default rule for the Cognitive Services resource.
You can manage virtual network rules for Cognitive Services resources through th
# [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-az-ps) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
1. List virtual network rules.
You can manage IP network rules for Cognitive Services resources through the Azu
# [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-az-ps) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
1. List IP network rules.
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
+
+ Title: Custom Text Analytics for health data formats
+
+description: Learn about the data formats accepted by custom text analytics for health.
+ Last updated : 04/14/2023
+# Accepted data formats in custom text analytics for health
+
+Use this article to learn about formatting your data to be imported into custom text analytics for health.
+
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
+
+Your Labels file should be in the `json` format below to be used when importing your labels into a project.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectName": "{PROJECT-NAME}",
+ "projectKind": "CustomHealthcare",
+ "description": "Trying out custom Text Analytics for health",
+ "language": "{LANGUAGE-CODE}",
+ "multilingual": true,
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {}
+ },
+ "assets": {
+ "projectKind": "CustomHealthcare",
+ "entities": [
+ {
+ "category": "Entity1",
+ "compositionSetting": "{COMPOSITION-SETTING}",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "One",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "EntityNumberOne",
+ "FirstEntity"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "category": "Entity2"
+ },
+ {
+ "category": "MedicationName",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "research drugs",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "rdrug a",
+ "rdrug b"
+ ]
+ }
+ ]
+
+ }
+ ]
+            },
+ "prebuilts": "MedicationName"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query it in any supported language, not necessarily one included in your training documents. See [language support](../language-support.md) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that are extracted from your documents.| |
+| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
+| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+| `values`| `"EntityNumberOne"`, `"FirstEntity"` | A list of comma-separated strings that will be matched exactly for extraction and mapped to the list key. | `"EntityNumberOne"`, `"FirstEntity"` |
+| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
+| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The dataset to which this document is assigned when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The start position for the entity text. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
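
The `offset`/`length` values in the table above are plain character arithmetic over the document text (the service counts UTF-16 code units; for ASCII text this coincides with Python string indices). Below is a minimal, hypothetical sketch, not part of the service, that slices labeled entities out of a document; the inline `text` field is a stand-in for the `.txt` file content that would normally live in your storage container:

```python
# Illustrative only: slice each labeled entity out of its document text
# using the offset/length fields from the labels file shown above.
sample = {
    "assets": {
        "documents": [
            {
                "location": "doc1.txt",
                "language": "en",
                "dataset": "Train",
                # Stand-in for the .txt file content from the storage container.
                "text": "Patient was given rdrug a after the visit.",
                "entities": [
                    {
                        "regionOffset": 0,
                        "regionLength": 42,
                        "labels": [
                            {"category": "MedicationName", "offset": 18, "length": 7}
                        ],
                    }
                ],
            }
        ]
    }
}

def extract_labeled_spans(project: dict) -> list:
    """Return the text span behind every label in every document."""
    spans = []
    for doc in project["assets"]["documents"]:
        text = doc["text"]
        for region in doc["entities"]:
            for label in region["labels"]:
                start, end = label["offset"], label["offset"] + label["length"]
                spans.append({"category": label["category"], "text": text[start:end]})
    return spans

print(extract_labeled_spans(sample))
```

If a sliced span comes back with leading or trailing fragments, the `offset`/`length` values in your labels file don't line up with the document text.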
+
+## Entity naming rules
+
+1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component, and the component must match the entity name.
+2. New user-defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
+3. New user-defined entities can't be populated with prebuilt components, as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
+
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/label-data.md) for more information about labeling your data.
+* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
+
+ Title: Entity components in custom Text Analytics for health
+
+description: Learn how custom Text Analytics for health extracts entities from text
+ Last updated: 04/14/2023
+# Entity components in custom text analytics for health
+
+In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods: it can be learned through context, matched from a list, or detected by a prebuilt component. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+
+## Component types
+
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
+
+### Learned component
+
+The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you don't label any data, the entity won't have a learned component.
+
+The Text Analytics for health entities, which have prebuilt components by default, can't be extended with learned components, meaning they don't require or accept further labeling to function.
+
+### List component
+
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated with that language.
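
Conceptually, a list component performs an exact match of each synonym and maps the hit back to its normalized list key. A toy sketch of that behavior (illustrative names and data only, not the service's implementation):

```python
# Illustrative sketch of list-component matching: exact synonym matches map
# back to the list key; only synonyms for the requested language are matched.
sublists = [
    {
        "listKey": "research drugs",
        "synonyms": [{"language": "en", "values": ["rdrug a", "rdrug b"]}],
    }
]

def match_list_component(text, language):
    """Return (matched synonym, list key) pairs found in the text."""
    hits = []
    for sublist in sublists:
        for syn in sublist["synonyms"]:
            if syn["language"] != language:
                continue  # synonyms for other languages are ignored
            for value in syn["values"]:
                if value in text:
                    hits.append((value, sublist["listKey"]))
    return hits

print(match_list_component("The trial compared rdrug a with placebo.", "en"))
```

Note that the list key (`research drugs`) is what comes back in the output; it is never itself matched against the text.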
+
+### Prebuilt component
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
+
+## Entity options
+
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+
+### Combine components
+
+Combine components as one entity when they overlap by taking the union of all the components.
+
+Use this option to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they're present.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
+
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
+
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
+
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+
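The two options can be pictured as span arithmetic. An illustrative sketch (character offsets computed for the example sentence above; this is not the service's implementation):

```python
# Illustrative sketch: "combine components" merges overlapping predictions
# into one span covering their union; "don't combine" returns each
# component's prediction separately.
def combine(spans):
    """Merge overlapping (start, end) spans into their union."""
    merged = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

text = "I want to buy Proseware OS 9"
list_span = (14, 26)     # "Proseware OS" matched by the list component
learned_span = (14, 28)  # "Proseware OS 9" predicted by the learned component

print(combine([list_span, learned_span]))   # combined: one merged span
print(sorted([list_span, learned_span]))    # not combined: separate spans
```
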
+### Don't combine components
+
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
+
+When you don't combine components, the entity will return twice:
+
+## How to use components and options
+
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity, thereby extending the prebuilt component with your own medication names.
+
+Other times you may be interested in extracting an entity through context, such as a **medical device**. You would label for the learned component of the medical device entity to learn _where_ a medical device is, based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity gives you both options for the entity.
+
+When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
+
+## Next steps
+
+* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
+
+ Title: Custom text analytics for health evaluation metrics
+
+description: Learn about evaluation metrics in custom Text Analytics for health
+ Last updated: 04/14/2023
+# Evaluation metrics for custom Text Analytics for health models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set isn't introduced to the model during training, to make sure that the model is tested on new data.
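
As a rough illustration of an 80/20 document split (the service performs its own split, or honors the per-document `dataset` field in your labels file; this sketch is not the service's logic):

```python
import random

# Hypothetical sketch of an 80/20 train/test split over document names.
def split_documents(doc_names, test_ratio=0.2, seed=0):
    docs = list(doc_names)
    random.Random(seed).shuffle(docs)  # fixed seed for reproducibility
    n_test = max(1, int(len(docs) * test_ratio))
    return docs[: len(docs) - n_test], docs[len(docs) - n_test :]

train, test = split_documents([f"doc{i}.txt" for i in range(10)])
print(len(train), len(test))
```
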
+
+Model evaluation is triggered automatically after training completes successfully. The evaluation process starts by using the trained model to predict user-defined entities for documents in the test set and comparing them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User-defined entities are **included** in the evaluation, factoring in learned and list components; Text Analytics for health prebuilt entities are **not** factored into the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and entity-level evaluation metrics
+
+Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *true positives*, *false positives*, and *false negatives* can differ. For example, consider the following text.
+
+### Example
+
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+
+The model extracting entities from this text could have the following predictions:
+
+| Entity | Predicted as | Actual type |
+|--|--|--|
+| John Smith | Person | Person |
+| Frederick | Person | City |
+| Forrest | City | Person |
+| Fannie Thomas | Person | Person |
+| Colorado Springs | City | City |
+
+### Entity-level evaluation for the *person* entity
+
+The model would have the following entity-level evaluation, for the *person* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+
+* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
+* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
+* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+
+### Entity-level evaluation for the *city* entity
+
+The model would have the following entity-level evaluation, for the *city* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Model-level evaluation for the collective model
+
+The model would have the following evaluation for the model in its entirety:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
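
The three tables above can be reproduced with a few lines of arithmetic. A sketch that recomputes the entity-level and model-level metrics from the prediction table (values rounded to two decimals):

```python
# Recompute the worked example above from (entity, predicted, actual) rows.
predictions = [
    ("John Smith", "Person", "Person"),
    ("Frederick", "Person", "City"),
    ("Forrest", "City", "Person"),
    ("Fannie Thomas", "Person", "Person"),
    ("Colorado Springs", "City", "City"),
]

def metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return round(precision, 2), round(recall, 2), round(f1, 2)

def entity_counts(entity):
    tp = sum(1 for _, pred, actual in predictions if pred == actual == entity)
    fp = sum(1 for _, pred, actual in predictions if pred == entity != actual)
    fn = sum(1 for _, pred, actual in predictions if actual == entity != pred)
    return tp, fp, fn

print(metrics(*entity_counts("Person")))  # entity-level: person
print(metrics(*entity_counts("City")))    # entity-level: city
# Model-level: sum the counts over all entity types.
totals = [sum(x) for x in zip(entity_counts("Person"), entity_counts("City"))]
print(metrics(*totals))
```
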
+
+## Interpreting entity-level evaluation metrics
+
+So what does it actually mean to have high precision or high recall for a certain entity?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This entity is handled well by the model. |
+| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
+| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
+| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model covering all points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in the test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
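
As an illustration of the first check, here is a sketch that flags entity types with fewer than 15 labeled training instances (the threshold named in the guidance above; the function name is hypothetical):

```python
from collections import Counter

# Flag entity types that fall under the 15-instance guidance threshold.
def underrepresented(train_labels, minimum=15):
    counts = Counter(train_labels)
    return sorted(t for t, n in counts.items() if n < minimum)

train_labels = ["MedicationName"] * 40 + ["Dosage"] * 7
print(underrepresented(train_labels))  # ['Dosage']
```
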
+
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it's making.
+
+You can use the confusion matrix to identify entities that are too close to each other and often get mistaken for each other (ambiguity). In this case, consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+
+The highlighted diagonal in the image below shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
+
+You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each entity.
+* The sum of the values in an entity's row (excluding the diagonal) is the *false positive* count of that entity.
+* The sum of the values in an entity's column (excluding the diagonal) is the *false negative* count of that entity.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all entities.
+* The *false positive* of the model is the sum of *false positives* for all entities.
+* The *false negative* of the model is the sum of *false negatives* for all entities.
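
Following that reading of the matrix (assuming rows hold the predicted entity and columns the actual entity), the per-entity counts for the earlier *person*/*city* example can be derived like this (illustrative sketch):

```python
# Derive per-entity TP/FP/FN from a confusion matrix.
# matrix[i][j]: predicted entities[i], actually entities[j].
entities = ["Person", "City"]
matrix = [
    [2, 1],  # predicted Person: 2 correct, 1 was actually City (Frederick)
    [1, 1],  # predicted City: 1 was actually Person (Forrest), 1 correct
]

def counts(i):
    tp = matrix[i][i]
    fp = sum(matrix[i][j] for j in range(len(entities)) if j != i)  # row
    fn = sum(matrix[j][i] for j in range(len(entities)) if j != i)  # column
    return tp, fp, fn

print({e: counts(i) for i, e in enumerate(entities)})
```
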
+
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
+
+ Title: Send a custom Text Analytics for health request to your custom model
+description: Learn how to send a request for custom text analytics for health.
+ Last updated: 04/14/2023
+ms.devlang: REST API
+
+# Send queries to your custom Text Analytics for health model
+
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
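
As a rough sketch, the body for a prediction request can be assembled as follows. The task `kind` is a placeholder and the endpoint/api-version are not shown; take the real values from the Prediction API reference linked above:

```python
# Hedged sketch: assemble the JSON body for a prediction request.
# The "kind" value is a placeholder -- consult the Prediction API reference.
import json

def build_request(project, deployment, documents):
    """Build an analyze-text style job body (shape is illustrative)."""
    return {
        "analysisInput": {
            "documents": [
                {"id": str(i), "language": "en", "text": text}
                for i, text in enumerate(documents, start=1)
            ]
        },
        "tasks": [
            {
                "kind": "{TASK-KIND}",  # placeholder, see the API reference
                "parameters": {"projectName": project, "deploymentName": deployment},
            }
        ],
    }

body = build_request("myproject", "production",
                     ["Patient was prescribed 100 mg of ibuprofen."])
print(json.dumps(body, indent=2))
```
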
+
+## Test deployed model
+
+You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
+
+## Send a custom text analytics for health request to your model
+
+# [Language Studio](#tab/language-studio)
+
+# [REST API](#tab/rest-api)
+
+First you will need to get your resource key and endpoint:
+
+### Submit a custom Text Analytics for health task
+
+### Get task results
+
+## Next steps
+
+* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
+
+ Title: Using Azure resources in custom Text Analytics for health
+
+description: Learn about the steps for using Azure resources with custom text analytics for health.
+ Last updated: 04/14/2023
+# How to create custom Text Analytics for health project
+
+Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
+
+## Prerequisites
+
+Before you start using custom text analytics for health, you need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
+
+You also need an Azure storage account where you'll upload your `.txt` documents that will be used to train a model to extract entities.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you connect a pre-existing storage account, you should have an owner role assigned to it.
+
+## Create Language resource and connect storage account
+
+You can create a resource in the following ways:
+
+* The Azure portal
+* Language Studio
+* PowerShell
+
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
+
+## Create a custom Text Analytics for health project
+
+Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+
+### [Language Studio](#tab/language-studio)
+
+### [REST APIs](#tab/rest-api)
+
+## Import project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/language-studio)
+
+### [REST APIs](#tab/rest-api)
+
+## Get project details
+
+### [Language Studio](#tab/language-studio)
+
+### [REST APIs](#tab/rest-api)
+
+## Delete project
+
+### [Language Studio](#tab/language-studio)
+
+### [REST APIs](#tab/rest-api)
+
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
+
+ Title: Deploy a custom Text Analytics for health model
+
+description: Learn about deploying a model for custom Text Analytics for health.
+ Last updated: 04/14/2023
+# Deploy a custom text analytics for health model
+
+Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
+
+### Get deployment job status
+
+## Swap deployments
+
+After you're done testing a model assigned to one deployment and want to assign this model to another deployment, you can swap the two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
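
Conceptually, the swap just exchanges the models assigned to the two deployment names (the service performs this atomically on its side); a toy illustration:

```python
# Toy illustration of a deployment swap: exchange the assigned models.
deployments = {"production": "model-v3", "staging": "model-v4"}

def swap(deps, a, b):
    deps[a], deps[b] = deps[b], deps[a]
    return deps

print(swap(deployments, "production", "staging"))
```
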
+
+# [Language Studio](#tab/language-studio)
+
+# [REST APIs](#tab/rest-api)
+
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
+
+# [REST APIs](#tab/rest-api)
+
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
+
+# [REST APIs](#tab/rest-api)
+
+## Unassign deployment resources
+
+When you unassign or remove a deployment resource from a project, you also delete all the deployments that were deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
+
+# [REST APIs](#tab/rest-api)
+
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
+
+ Title: Preparing data and designing a schema for custom Text Analytics for health
+
+description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
+ Last updated: 04/14/2023
+# How to prepare data and define a schema for custom Text Analytics for health
+
+In order to create a custom TA4H model, you need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
+
+## Schema design
+
+Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
+
+* Review documents in your dataset to be familiar with their format and structure.
+
+* Identify the entities you want to extract from the data.
+
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+
+* Avoid entity type ambiguity.
+
+ **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you need to differentiate between entity types.
+
+ For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you need to add more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort and yields better results.
+
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
+
+ For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State", and "Zip", the model requires fewer labels per entity.
++
+## Add entities
+
+To add entities to your project:
+
+1. Move to **Entities** pivot from the top of the page.
+
+2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You will be prompted to type in a name to finish creating the entity.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list, or prebuilt. Text Analytics for health entities are populated with the prebuilt component by default and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data, but cannot be populated with the prebuilt component.
+
+5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
+
+
+### Add list component
+
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, in the *Enter value* text box, enter the normalized value that will be returned when any of the synonym values is extracted.
+
+2. For multilingual projects, from the *language* drop-down menu, select the language of the synonyms list, then start typing in your synonyms and press Enter after each one. It is recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
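Conceptually, a list component maps any matched synonym, in any language, back to its normalized value. The sketch below models that behavior; the structure and the drug-name example are assumptions for illustration, not the service's internal representation.

```python
# Illustrative model of a list component; the structure and example values
# are assumptions, not the service's internal representation.
list_component = {
    "normalizedValue": "ibuprofen",
    "synonyms": {
        "en": ["ibuprofen", "advil", "motrin"],
        "es": ["ibuprofeno"],
    },
}

def normalize(text, component):
    # Return the normalized value when the text matches any synonym
    # in any language; otherwise, no list match.
    for values in component["synonyms"].values():
        if text.lower() in values:
            return component["normalizedValue"]
    return None
```

With this component, "Advil" and "ibuprofeno" both resolve to the single normalized value "ibuprofen".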
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you would expect to see in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+
+## Next steps
+
+If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
+
+ Title: Back up and recover your custom Text Analytics for health models
+
+description: Learn how to save and recover your custom Text Analytics for health models.
++++++ Last updated : 04/14/2023++++
+# Back up and recover your custom Text Analytics for health models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires creating two Azure Language resources in different regions and synchronizing your custom models across them.
+
+If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
++
+## Get your resource keys endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
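As a sketch, the placeholders above can be collected into simple configuration objects, one per resource. The endpoint and key values below are placeholders to replace with your own; the `Ocp-Apim-Subscription-Key` header is how Language resource keys are passed on REST calls.

```python
# Placeholder values; replace with the keys, endpoints, and container
# names you noted above.
primary = {
    "endpoint": "https://<primary-resource>.cognitiveservices.azure.com",
    "key": "<PRIMARY-RESOURCE-KEY>",
    "container": "<PRIMARY-CONTAINER-NAME>",
}
secondary = {
    "endpoint": "https://<secondary-resource>.cognitiveservices.azure.com",
    "key": "<SECONDARY-RESOURCE-KEY>",
    "container": "<SECONDARY-CONTAINER-NAME>",
}

def auth_headers(resource):
    # Language resource keys are passed in this header on every REST call.
    return {"Ocp-Apim-Subscription-Key": resource["key"]}
```

Keeping both configurations side by side makes the later fail-over steps a matter of swapping which object you pass.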
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
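As a sketch of the call this section describes, the helper below builds an export request URL. The route and `api-version` are assumptions based on the custom text authoring APIs; confirm them against the authoring REST reference before use.

```python
def export_url(endpoint, project_name, api_version="2022-05-01"):
    # Assumed authoring route for exporting project assets; verify the
    # path and api-version against the authoring REST API reference.
    return (f"{endpoint}/language/authoring/analyze-text/projects/"
            f"{project_name}/:export?stringIndexType=Utf16CodeUnit"
            f"&api-version={api_version}")

url = export_url("https://<primary-resource>.cognitiveservices.azure.com",
                 "<PROJECT-NAME>")
# Submit with your HTTP client, for example:
# requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<PRIMARY-RESOURCE-KEY>"})
```

The same URL-building pattern applies to the import, train, and deploy requests later in this article, with the secondary resource's endpoint substituted in.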
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
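The retry-then-fail-over logic described above can be sketched as a small helper that takes the primary and secondary submit calls as functions. This is an illustrative pattern, not part of any service SDK; the exception type and attempt count are assumptions to adapt to your HTTP client.

```python
def submit_with_failover(submit_primary, submit_secondary, max_attempts=3):
    # Retry the primary region a few times to rule out transient issues,
    # then fail over to the secondary region.
    for _ in range(max_attempts):
        try:
            return submit_primary()
        except ConnectionError:  # assumed failure type; adapt to your client
            continue
    # Consistent failures suggest a regional outage: use the secondary resource.
    return submit_secondary()
```

Because the project and deployment names match across regions, only the endpoint and key differ between the two submit functions.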
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project, so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance, since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the response body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and again for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
+
+ [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
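The comparison can be sketched as follows. The helper assumes the `lastModifiedDateTime` values parse as ISO 8601 timestamps such as `2023-04-14T10:00:00Z`; adjust the parsing to the format the API actually returns.

```python
from datetime import datetime

def needs_resync(primary_last_modified, secondary_last_modified):
    # Projects are out of sync when the primary changed after the
    # secondary's last replication. Timestamp format is an assumption;
    # adjust to the API's actual lastModifiedDateTime format.
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (datetime.strptime(primary_last_modified, fmt)
            > datetime.strptime(secondary_last_modified, fmt))
```

When this returns true for your projects, rerun the export, import, train, and deploy steps.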
++
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
+
+ Title: How to label your data for custom Text Analytics for health
+
+description: Learn how to label your data for use with custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Label your data using the Language Studio
+
+Data labeling is a crucial step in the development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data is used in the next step when training your model, so that your model can learn from the labeled data which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
+
+As you label your data, keep in mind:
+
+* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
+
+    If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* The precision, consistency, and completeness of your labeled data are key factors in determining model performance.
+
+ * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the documents.
+ * **Label completely**: Label all the instances of the entity in all your documents.
+
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
+
+3. Change to a single document view from the left side in the top menu, or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** buttons at the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
+
+4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
+
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
+
+5. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The following screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each. The prebuilt entities are shown for reference, but you can't label these prebuilt entities as they are pretrained.
+
+7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+8. Under the **Distribution** pivot, you can view the distribution across the training and testing sets. You have two options for viewing:
+    * *Total instances*, where you can view the count of all labeled instances of a specific entity type.
+    * *Documents with at least one label*, where each document is counted if it contains at least one labeled instance of this entity.
+
+9. When you're labeling, your changes are synced periodically. If they haven't been saved yet, you will find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
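The two distribution views above can be reproduced from your own label counts with a small helper. The per-document label-count structure and the entity names below are assumptions for illustration.

```python
def distribution(labels_per_document, entity):
    # labels_per_document: one {entity_type: count} dict per document
    # (an assumed structure for illustration).
    total_instances = sum(doc.get(entity, 0) for doc in labels_per_document)
    documents_with_label = sum(
        1 for doc in labels_per_document if doc.get(entity, 0) > 0
    )
    return total_instances, documents_with_label

docs = [{"Dosage": 3}, {"Dosage": 1, "Frequency": 2}, {"Frequency": 4}]
print(distribution(docs, "Dosage"))  # (4, 2): 4 instances across 2 documents
```

The first number corresponds to *Total instances*, the second to *Documents with at least one label*.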
+
+## Remove labels
+
+To remove a label:
+
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
+
+## Delete entities
+
+You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
+
+ Title: How to train your custom Text Analytics for health model
+
+description: Learn about how to train your model for custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Train your custom Text Analytics for health model
+
+Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training times can be anywhere from a few minutes when dealing with a few documents, up to several hours depending on the dataset size and the complexity of your schema.
++
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
+The **testing set** is a blind set that isn't introduced to the model during training, only during evaluation.
+After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated. Model training and evaluation apply only to newly defined entities with learned components; Text Analytics for health entities are excluded from model training and evaluation because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing sets.
+
+Custom Text Analytics for health supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
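The automatic split can be approximated locally as a shuffled percentage split, as in this sketch; the seed, fraction, and file names are adjustable assumptions.

```python
import random

def split_documents(documents, train_fraction=0.8, seed=0):
    # Shuffle deterministically, then split by percentage into
    # training and testing sets (mirroring the recommended 80/20 split).
    docs = list(documents)
    random.Random(seed).shuffle(docs)
    cut = int(len(docs) * train_fraction)
    return docs[:cut], docs[cut:]

train, test = split_documents([f"doc{i}.txt" for i in range(10)])
```

Shuffling before splitting keeps both sets representative of the overall distribution of your labeled entities.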
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
+
+ Title: Evaluate a Custom Text Analytics for health model
+
+description: Learn how to evaluate and score your Custom Text Analytics for health model
++++++ Last updated : 04/14/2023+++++
+# View a custom text analytics for health model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you train a new model, as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+* A [successfully trained model](train-model.md)
++
+## Model details
+
+There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* [Deploy your model](deploy-model.md)
+* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
+
+ Title: Language and region support for custom Text Analytics for health
+
+description: Learn about the languages and regions supported by custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Language support for custom text analytics for health
+
+Use this article to learn about the languages currently supported by custom Text Analytics for health.
+
+## Multilingual option
+
+With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language, and lets you combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents, and query it in French, German, Italian, and others. You can enable the multilingual option as part of the project creation process or later through the project settings.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model, and test in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. The more labeled documents you add, the more likely the results are to improve. When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
+
+## Language support
+
+Custom Text Analytics for health supports `.txt` files in the following languages:
+
+| Language | Language code |
+| | |
+| English | `en` |
+| French | `fr` |
+| German | `de` |
+| Spanish | `es` |
+| Italian | `it` |
+| Portuguese (Portugal) | `pt-pt` |
+| Hebrew | `he` |
++
+## Next steps
+
+* [Custom Text Analytics for health overview](overview.md)
+* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
+
+ Title: Custom Text Analytics for health - Azure Cognitive Services
+
+description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
++++++ Last updated : 04/14/2023++++
+# What is custom Text Analytics for health?
+
+Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
+
+Custom Text Analytics for health enables users to build custom AI models to extract healthcare specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Similar to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, its main use is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
++
+## Project development lifecycle
+
+Using custom Text Analytics for health typically involves several different steps.
++
+* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
+
+* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently, and completely.
+    * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+    * **Label consistently**: The same entity should have the same label across all the files.
+    * **Label completely**: Label all the instances of the entity in all your files.
+
+* **Train the model**: Your model starts learning from your labeled data.
+
+* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
+
+* **Deploy the model**: Deploying a model makes it available for use via an API.
+
+* **Extract entities**: Use your custom models for entity extraction tasks.
+
+## Reference documentation and code samples
+
+As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
+
+|APIs| Reference documentation|
+|--|--|
+|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
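
The runtime reference above follows the Azure Language analyze-text job pattern. As a hedged sketch only, a job submission body might look like the following; the task `kind`, project name, and deployment name are illustrative assumptions, so verify them against the REST reference before use:

```shell
# Sketch only: assemble a request body for a custom Text Analytics for health
# runtime job. The task kind and all names below are illustrative assumptions.
body=$(cat <<'EOF'
{
  "displayName": "Discharge note extraction",
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "Patient was discharged from clinic a." }
    ]
  },
  "tasks": [
    {
      "kind": "CustomHealthcare",
      "taskName": "ta4h-task",
      "parameters": {
        "projectName": "my-ta4h-project",
        "deploymentName": "my-deployment"
      }
    }
  ]
}
EOF
)

# With a real resource, the job would be submitted along these lines
# (endpoint and key are placeholders):
# curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-10-01-preview" \
#      -H "Ocp-Apim-Subscription-Key: <your-key>" \
#      -H "Content-Type: application/json" \
#      -d "$body"
echo "request body prepared (${#body} characters)"
```

The submit call is asynchronous; results are typically polled from a job URL returned by the service rather than in the immediate response.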
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
+
+* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
+
+ Title: Quickstart - Custom Text Analytics for health (Custom TA4H)
+
+description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
++++++ Last updated : 04/14/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: custom Text Analytics for health
+
+Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
+++++++
+## Next steps
+
+* [Custom Text Analytics for health overview](./overview.md)
+
+After you've created an entity extraction model, you can:
+
+* [Use the runtime API to extract entities](how-to/call-api.md)
+
+When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn about data labeling, training, and consuming your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/label-data.md)
+* [Train a model](how-to/train-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
+
+ Title: Definitions used in custom Text Analytics for health
+
+description: Learn about definitions used in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Terms and definitions used in custom Text Analytics for health
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom Text Analytics for health.
+
+## Entity
+Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Prebuilt entity component
+
+Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
++
+## Learned entity component
+
+The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you don't label any data with the entity, it won't have a learned component. Learned components can't be added to entities with prebuilt components.
+
+## List entity component
+A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine learned entities.
+
+The entity is predicted whenever the input text matches a word in the list. For example, if you have a list entity called "clinics" and the list contains the words "clinic a, clinic b, clinic c", then the "clinics" entity is predicted for all instances of the input data where "clinic a", "clinic b", or "clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they're prebuilt or newly defined.
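
A list component's exact-match behavior can be illustrated with fixed-string search. This sketch is an analogy only, not the service itself:

```shell
# Illustration only: a list component matches fixed strings from a closed set,
# regardless of surrounding context, much like grep's fixed-string mode.
echo "Patient was transferred from clinic b to clinic c for follow-up." \
  | grep -oF -e "clinic a" -e "clinic b" -e "clinic c"
# Prints "clinic b" and "clinic c", each on its own line; any clinic name
# outside the closed list would not match.
```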
+
+## Model
+A model is an object that's trained to do a certain task. In this case, custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from, so they can later be used to understand context from the input text.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model memorizes specific examples from the training data and isn't able to generalize well to unseen data.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the actual entities are correctly predicted.
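
The F1 score defined earlier combines these two metrics. A small worked example with assumed counts (8 true positives, 2 false positives, 4 false negatives):

```shell
# Worked example with assumed counts (not real service output):
#   precision = TP / (TP + FP)
#   recall    = TP / (TP + FN)
#   F1        = 2 * precision * recall / (precision + recall)
awk 'BEGIN {
  tp = 8; fp = 2; fn = 4
  precision = tp / (tp + fp)                         # 0.800
  recall    = tp / (tp + fn)                         # 0.667
  f1 = 2 * precision * recall / (precision + recall)
  printf "precision=%.3f recall=%.3f f1=%.3f\n", precision, recall, f1
}'
# prints: precision=0.800 recall=0.667 f1=0.727
```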
++
+## Schema
+Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md), and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, adding a new entity for patient name, or extending the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
+
+## Training data
+Training data is the set of information that is needed to train a model.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
+
+ Title: Custom Text Analytics for health service limits
+
+description: Learn about the data and service limits when using Custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Custom Text Analytics for health service limits
+
+Use this article to learn about the data and service limits when using custom Text Analytics for health.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you can't unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account).
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Custom Text Analytics for health is only available in some Azure regions because it's a preview service. Some regions may be available for **both authoring and prediction**, while others support **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| East US | ✓ | ✓ |
+| UK South | ✓ | ✓ |
+| North Europe | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
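
If the CLUtils chunk command isn't an option, standard Unix tools can approximate the chunking. A hedged sketch; `split -b` works on bytes, so with multi-byte characters keep the chunk size comfortably under the 125,000-character limit:

```shell
# Sketch: split a large text file into pieces below the documented limit
# using standard tools. Byte-based, so leave headroom for multi-byte text.
head -c 250000 /dev/zero | tr '\0' 'a' > large-note.txt   # fake 250,000-char note
split -b 120000 large-note.txt note_chunk_
ls note_chunk_*
# -> note_chunk_aa, note_chunk_ab, note_chunk_ac (120,000 + 120,000 + 10,000 bytes)
```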
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+|--|--|--|
+|S|Training time| Unlimited, free |
+|S|Prediction Calls| 5,000 text records for free per language resource|
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for authoring.
+
+|Item|Lower Limit| Upper Limit |
+|--|--|--|
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
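
These naming rules can be checked client-side before creating anything. A minimal sketch; the pattern is inferred from the table above and is an approximation, not an official validator:

```shell
# Approximate check of a project name against the table above:
# letters, digits, and _ . - only, no spaces, at most 50 characters.
is_valid_project_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_.-]{1,50}$'
}

is_valid_project_name "ta4h-discharge_notes.v1" && echo "ok"
is_valid_project_name "bad name with spaces" || echo "rejected"
```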
++
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
The Language service also provides several new features as well, which can eithe
:::column-end::: :::row-end:::
+### Custom text analytics for health
+
+ :::column span="":::
+ :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+    [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
+ :::column-end:::
## Which Language service feature should I use?
This section will help you decide which Language service feature you should use
| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | | | Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓| | Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
+| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ | | Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | | | Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## April 2023
+* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare-specific entities from unstructured text.
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
Assigning yourself to the Cognitive Services User role will allow you to use you
1. Get your user information ```azurecli
- export user=$(az account show | jq -r .user.name)
+ export user=$(az account show -o json | jq -r .user.name)
``` 2. Assign yourself to the "Cognitive Services User" role. ```azurecli
- export resourceId=$(az group show -g $myResourceGroupName | jq -r .id)
+ export resourceId=$(az group show -g $myResourceGroupName -o json | jq -r .id)
az role assignment create --role "Cognitive Services User" --assignee $user --scope $resourceId ``` > [!NOTE]
- > Role assignment change will take ~5 mins to become effective. Therefore, I did this step ahead of time. Skip this if you have already done this previously.
+ > Role assignment change will take ~5 mins to become effective.
3. Acquire an Azure AD access token. Access tokens expire in one hour. You'll then need to acquire another one. ```azurecli
- export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com | jq -r .accessToken)
+ export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com -o json | jq -r .accessToken)
``` 4. Make an API call
cognitive-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md
Key Vault reduces the chances that secrets may be accidentally leaked, because y
* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Python 3.7 or later](https://www.python.org/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
Key Vault reduces the chances that secrets may be accidentally leaked, because y
* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Java Development Kit (JDK) version 8 or above](/azure/developer/java/fundamentals/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
Key Vault reduces the chances that secrets may be accidentally leaked, because y
* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free). * [Current Node.js v14 LTS or later](https://nodejs.org/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
* An [Azure Key Vault](../key-vault/general/quick-create-portal.md) * [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Title: Azure direct routing infrastructure requirements — Azure Communication Services
+ Title: Azure direct routing infrastructure requirements—Azure Communication Services
description: Familiarize yourself with the infrastructure requirements for Azure Communication Services direct routing configuration Previously updated : 06/30/2021 Last updated : 05/11/2023
An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.
>[!NOTE] > SBC FQDN in Azure Communication Services direct routing must be different from SBC FQDN in Teams Direct Routing.
->[!IMPORTANT]
->During Public Preview only: if you plan to use a wildcard certificate for the domain that is not registered in Teams, please raise a support ticket, and our team will add it as a trusted domain.
- Communication Services only trusts certificates signed by Certificate Authorities (CAs) that are part of the Microsoft Trusted Root Certificate Program. Ensure that your SBC certificate is signed by a CA that is part of the program, and that Extended Key Usage (EKU) extension of your certificate includes Server Authentication. Learn more:
Learn more:
>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 i.e. ECDHE-RSA-AES256-SHA384 >TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 i.e. ECDHE-RSA-AES128-SHA256
-SBC pairing works on the Communication Services resource level. It means you can pair many SBCs to a single Communication Services resource. Still, you cannot pair a single SBC to more than one Communication Services resource. Unique SBC FQDNs are required for pairing to different resources.
+SBC pairing works on the Communication Services resource level. It means you can pair many SBCs to a single Communication Services resource. Still, you can't pair a single SBC to more than one Communication Services resource. Unique SBC FQDNs are required for pairing to different resources.
## SIP Signaling: FQDNs The connection points for Communication Services direct routing are the following three FQDNs: -- **sip.pstnhub.microsoft.com** ΓÇö Global FQDN ΓÇö must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
+- **sip.pstnhub.microsoft.com** — Global FQDN — must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
- **sip2.pstnhub.microsoft.com** — Secondary FQDN — geographically maps to the second priority region. - **sip3.pstnhub.microsoft.com** — Tertiary FQDN — geographically maps to the third priority region.
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
Previously updated : 09/29/2022 Last updated : 05/11/2023
This article provides information about limitations and known issues related to
- Anonymous calling isn't supported. - will be fixed in GA release-- Different set of Media Processors (MP) is used with different IP addresses. Currently [any Azure IP address](./direct-routing-infrastructure.md#media-traffic-ip-and-port-ranges) can be used for media connection between Azure MP and Session Border Controller (SBC).
- - will be fixed in GA release
+- The maximum number of configured Session Border Controllers (SBCs) is 250 per Communication Services resource.
- When you change direct routing configuration (add SBC, change Voice Route, etc.), wait approximately five minutes for changes to take effect. - If you move SBC FQDN to another Communication resource, wait approximately an hour, or restart SBC to force configuration change. - Azure Communication Services SBC Fully Qualified Domain Name (FQDN) must be different from Teams Direct Routing SBC FQDN. - One SBC FQDN can be connected to a single resource only. Unique SBC FQDNs are required for pairing to different resources.-- Wildcard SBC certificates require extra workaround. Contact Azure support for details.
- - will be fixed in GA release
- Media bypass/optimization isn't supported.-- No indication of SBC connection status/details in Azure portal.
- - will be fixed in GA release
- Azure Communication Services direct routing isn't available in Government Clouds. - Multi-tenant trunks aren't supported. - Location-based routing isn't supported.
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Previously updated : 03/17/2023 Last updated : 05/05/2023 # Deploy Azure Communications Gateway
Once your resource has been provisioned, a message appears saying **Your deploym
:::image type="content" source="media/deploy/go-to-resource-group.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing a completed deployment screen.":::
-## 3. Provide additional information to your onboarding team
+## 3. Find the Object ID and Application ID for your Azure Communication Gateway resource
+
+Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway) and [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource.
+1. Select **Identity**.
+1. In **System assigned**, copy the **Object (principal) ID**.
+1. Search for the value of **Object (principal) ID** with the search bar. You should see an enterprise application with that value under the **Azure Active Directory** subheading. You might need to select **Continue searching in Azure Active Directory** to find it.
+1. Make a note of the **Object (principal) ID**.
+1. Select the enterprise application.
+1. Check that the **Object ID** matches the **Object (principal) ID** value that you copied.
+1. Make a note of the **Application ID**.
+
+## 4. Set up application roles for Azure Communications Gateway
+
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application when you [prepared to deploy Azure Communications Gateway](prepare-to-deploy.md#1-add-the-project-synergy-application-to-your-azure-tenancy).
+
+> [!IMPORTANT]
+> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+
+Do the following steps in the tenant that contains your Project Synergy application.
+
+1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
+ 1. Open PowerShell.
+ 1. Run the following command and check whether `AzureAD` appears in the output.
+ ```azurepowershell
+ Get-Module -ListAvailable
+ ```
+ 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. Close your current PowerShell window.
+ 1. Open PowerShell as an admin.
+ 1. Run the following command.
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+ 1. Close your PowerShell admin window.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
+1. Select **Azure Active Directory**.
+1. Select **Properties**.
+1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
+1. Open PowerShell.
+1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
+ ```azurepowershell
+ Connect-AzureAD -TenantId "<AADTenantID>"
+ ```
+1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+ ```azurepowershell
+ $commGwayObjectId = "<CommunicationsGatewayObjectID>"
+ ```
+1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `TrunkManagement.Write`, `partnerSettings.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`.
+ ```azurepowershell
+ # Get the Service Principal ID for Project Synergy (Operator Connect)
+ $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $projectSynergyEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'"
+ $projectSynergyObjectId = $projectSynergyEnterpriseApplication.ObjectId
+
+ # Required Operator Connect - Project Synergy Roles
+ $trunkManagementRead = "72129ccd-8886-42db-a63c-2647b61635c1"
+ $trunkManagementWrite = "e907ba07-8ad0-40be-8d72-c18a0b3c156b"
+ $partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2"
+ $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e"
+ $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
+ $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27"
+
+ $requiredRoles = $trunkManagementRead, $trunkManagementWrite, $partnerSettingsRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite
+
+ foreach ($role in $requiredRoles) {
+ # Assign the relevant Role to the managed identity for the Azure Communications Gateway resource
+ New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
+ }
+ ```
+
+## 5. Provide additional information to your onboarding team
> [!NOTE]
> This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip this step if you have already onboarded to TPM or OC.
Before your onboarding team can finish onboarding you to the Operator Connect an
If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
-## 4. Test your Operator Connect portal access
+## 6. Test your Operator Connect portal access
> [!IMPORTANT]
> Before testing your Operator Connect portal access, wait for your onboarding team to confirm that the onboarding process is complete.

Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
-## 5. Add the application ID for Azure Communications Gateway to Operator Connect
+## 7. Add the Application ID for Azure Communications Gateway to Operator Connect
-You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md#10-set-up-application-roles-for-azure-communications-gateway).
+You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway).
-To enable the Azure Communications Gateway application, add the application ID of the service principal representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment:
+To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
-1. Optionally, check the application ID of the service principal to confirm that you're adding the right application.
- 1. Search for `AzureCommunicationsGateway` with the search bar: it's under the **Azure Active Directory** subheading.
- 1. On the overview page, check that the value of **Application ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
-1. Add a new **Application Id**, pasting in the following value. This value is the application ID for Azure Communications Gateway.
- ```
- 8502a0ec-c76d-412f-836c-398018e2312b
- ```
+1. Add a new **Application Id**, using the Application ID that you found.
 ## 8. Register your deployment's domain name in Active Directory
communications-gateway Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md
Previously updated : 12/07/2022 Last updated : 04/26/2023
Azure Communications Gateway sits at the edge of your network. This position all
Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to the Microsoft Phone System, allowing you to support Operator Connect (for fixed line networks) and Teams Phone Mobile (for mobile networks). The following diagram shows where Azure Communications Gateway sits in your network. :::image type="complex" source="media/azure-communications-gateway-architecture.png" alt-text="Architecture diagram for Azure Communications Gateway connecting to fixed and mobile networks":::
- Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. The mobile network also contains an application server for anchoring calls in the Microsoft Phone System.
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
:::image-end:::

Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
You must provide the networking connection between Azure Communications Gateway
Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Operator Connect and Teams Phone Mobile. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers) and the Operator Connect or Teams Phone Mobile documentation provided by your Microsoft representative.

### Call control integration for Teams Phone Mobile

[Teams Phone Mobile](/microsoftteams/operator-connect-mobile-plan) allows you to offer Microsoft Teams call services for calls made from the native dialer on mobile handsets, for example presence and call history. These features require anchoring the calls in Microsoft's Intelligent Conversation and Communications Cloud (IC3), part of the Microsoft Phone System. The Microsoft Phone System relies on information in SIP signaling to determine whether a call is:
Your core mobile network must supply this information to Azure Communications Ga
Your core mobile network must also be able to anchor and divert calls into the Microsoft Phone System. You can choose from the following options.

-- Deploying Metaswitch Mobile Control Point (MCP). MCP is an IMS Application Server that queries the Teams Phone Mobile Consultation API to determine whether the call involves a Teams Phone Mobile Subscriber. MCP then adds X-MS-FMC headers and updates the signaling to divert the call into the Microsoft Phone System through Azure Communications Gateway. For more information, see the [Metaswitch description of Mobile Control Point](https://www.metaswitch.com/products/mobile-control-point).
+- Using Mobile Control Point (MCP) in Azure Communications Gateway. MCP is an IMS Application Server that queries the Teams Phone Mobile Consultation API to determine whether the call involves a Teams Phone Mobile Subscriber. MCP then adds X-MS-FMC headers and updates the signaling to divert the call into the Microsoft Phone System through Azure Communications Gateway. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
+- Deploying an on-premises version of Mobile Control Point (MCP) from Metaswitch. For more information, see the [Metaswitch description of Mobile Control Point](https://www.metaswitch.com/products/mobile-control-point). This version of MCP isn't included in Azure Communications Gateway.
- Using other routing capabilities in your core network to detect Teams Phone Mobile subscribers and route INVITEs to or from these subscribers into the Microsoft Phone System through Azure Communications Gateway.

> [!IMPORTANT]
communications-gateway Mobile Control Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/mobile-control-point.md
+
+ Title: Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile
+description: Azure Communications Gateway optionally contains Mobile Control Point for anchoring Teams Phone Mobile calls in the Microsoft Cloud
++++ Last updated : 04/17/2023+++
+# Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile
+
+Mobile Control Point (MCP) is an IMS Application Server integrated into Azure Communications Gateway. It simplifies interworking with Microsoft Phone System (MPS) by minimizing the network adaptation needed in your mobile network to route calls into Microsoft Teams.
+
+MCP queries MPS to determine whether the caller or callee is eligible for Teams Phone Mobile services.
+
+* If the caller or callee is eligible, MCP adds MPS to the call path, so that MPS can provide Teams Phone Mobile services.
+* If the user isn't eligible or the call doesn't reach MPS, MCP ensures that native mobile calls continue to reach their target, although without Microsoft Teams services or alerting in Microsoft Teams clients.
+
+For more information about the role MCP provides in a Teams Phone Mobile deployment (including call flows), see the Teams Phone Mobile documentation provided by your Microsoft representative.
+
+## SIP signaling
+
+MCP integrates with your IMS S-CSCF using an ISC interface. This interface is defined in 3GPP TS 23.218 and TS 23.228, with more detail provided in 3GPP TS 24.229. You can optionally deploy ISC gateway function at the edge of your IMS network to provide border control, similar to the border control provided by an IBCF.
+
+MCP acts as a SIP proxy. It queries MPS to determine if a call involves a Teams Phone Mobile subscriber and updates the signaling on the call to route the call to MPS as required. It doesn't process media.
+
+MCP always queries MPS unless the call meets one of the following conditions:
+
+* A mobile originating call has an X-MS-FMC header with any value.
+* A call from a Teams client has an X-MS-FMC header with the value `APP`.
+* A mobile terminating call has an X-MS-FMC header with the value `MT`.
+
+These X-MS-FMC headers are added by MPS, and allow MCP to avoid creating loops where it continually queries MPS.
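These skip conditions amount to a simple check on the X-MS-FMC header. As an illustration only (this is not MCP's implementation; the `call_type` labels are invented for the sketch), the logic can be written as:

```python
from typing import Optional

def should_query_mps(call_type: str, fmc_header: Optional[str]) -> bool:
    """Decide whether MCP should query Microsoft Phone System (MPS).

    call_type is 'mo' (mobile originating), 'mt' (mobile terminating) or
    'teams_client'; fmc_header is the X-MS-FMC value, or None if absent.
    """
    if call_type == "mo" and fmc_header is not None:
        return False  # MO call: any X-MS-FMC value means MPS already handled it
    if call_type == "teams_client" and fmc_header == "APP":
        return False  # call from a Teams client
    if call_type == "mt" and fmc_header == "MT":
        return False  # MT call already diverted via MPS
    return True       # otherwise, query MPS
```

The default (`True`) reflects the statement above that MCP always queries MPS unless one of the conditions holds.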
+
+MCP determines whether a call is mobile originating or mobile terminating by using (in order of preference) the `sescase` parameter on the P-Served-User header, `term` or `orig` parameters on the top Route header or `term` or `orig` parameters in the URI of the Route header. If none of these parameters are present, MCP treats the call as mobile terminating.
+
+MCP determines the served subscriber for a mobile originating call from the URI in the P-Served-User header or P-Asserted-Identity header.
+It determines the served subscriber for a mobile terminating call from the URI in the P-Served-User header or the Request-URI.
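The precedence rules above can be sketched as follows (illustrative Python only; real SIP header parsing is more involved, and the `orig`/`term` values are the standard session-case parameter values):

```python
from typing import Optional

def classify_session_case(p_served_user_sescase: Optional[str],
                          top_route_param: Optional[str],
                          route_uri_param: Optional[str]) -> str:
    """Classify a call as mobile originating ('orig') or terminating ('term').

    Checks, in order of preference: the sescase parameter on the
    P-Served-User header, then term/orig parameters on the top Route header,
    then term/orig parameters in the Route header URI. If none are present,
    the call is treated as mobile terminating.
    """
    for value in (p_served_user_sescase, top_route_param, route_uri_param):
        if value in ("orig", "term"):
            return value
    return "term"  # no parameter present: default to mobile terminating
```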
+
+If MPS responds with an error or can't provide a number to use to route the call, MCP can't update the signaling, so the call doesn't receive Teams Phone Mobile services. MCP passes any SIP errors back into your mobile network.
+
+MCP supports E.164 numbers and sip: and tel: URIs.
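As a rough illustration of the supported URI forms (a hedged sketch only; MCP's actual parsing follows the SIP and tel URI specifications and handles many more cases):

```python
import re
from typing import Optional

def extract_e164(uri: str) -> Optional[str]:
    """Pull the E.164 number out of a sip:, sips: or tel: URI.

    Rough sketch: strips the scheme and stops at the host part or any URI
    parameters. Returns None if the URI doesn't carry an E.164 number.
    """
    m = re.match(r"^(?:sips?:|tel:)(\+\d+)", uri)
    return m.group(1) if m else None
```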
+
+All traffic to MCP must use SIP over TLS.
+
+## Invoking MCP for Teams Phone Mobile subscribers
+
+Teams Phone Mobile subscribers require Initial Filter Criteria (iFC) configuration in the HSS to involve MCP at the appropriate points in the call: we recommend invoking it last in the originating iFC chain and first in the terminating iFC chain. Invoke MCP for all calls involving Teams Phone Mobile subscribers, except for CDIV calls.
+
+The iFCs must use a hostname for MCP. MCP provides two hostnames, each prioritizing one region (and allowing fallback to the other region). To find the hostnames:
+
+1. Go to the **Overview** page for your Azure Communications Gateway resource.
+1. In each **Service Location** section, find the **MCP hostname** field.
+
+For example, you could use the following iFC (replacing *`<mcp-hostname>`* with one of the hostnames).
+
+```xml
+<InitialFilterCriteria>
+ <Priority>0</Priority>
+ <TriggerPoint>
+ <ConditionTypeCNF>0</ConditionTypeCNF>
+ <SPT>
+ <ConditionNegated>0</ConditionNegated>
+ <Group>0</Group>
+ <Method>INVITE</Method>
+ </SPT>
+ <SPT>
+ <ConditionNegated>1</ConditionNegated>
+ <Group>0</Group>
+ <SessionCase>4</SessionCase>
+ </SPT>
+ </TriggerPoint>
+ <ApplicationServer>
+ <ServerName>sips:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
+ <DefaultHandling>0</DefaultHandling>
+ </ApplicationServer>
+ <ProfilePartIndicator>0</ProfilePartIndicator>
+</InitialFilterCriteria>
+```
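One way to sanity-check an iFC before loading it into your HSS is to substitute the hostname and confirm the result is well-formed XML. This sketch uses Python's standard library; `mcp.example.com` is a placeholder, not a real MCP hostname:

```python
import xml.etree.ElementTree as ET

# Condensed copy of the iFC above, with a {mcp_hostname} substitution slot.
IFC_TEMPLATE = """<InitialFilterCriteria>
  <Priority>0</Priority>
  <TriggerPoint>
    <ConditionTypeCNF>0</ConditionTypeCNF>
    <SPT><ConditionNegated>0</ConditionNegated><Group>0</Group><Method>INVITE</Method></SPT>
    <SPT><ConditionNegated>1</ConditionNegated><Group>0</Group><SessionCase>4</SessionCase></SPT>
  </TriggerPoint>
  <ApplicationServer>
    <ServerName>sips:{mcp_hostname};transport=tcp;service=mcp</ServerName>
    <DefaultHandling>0</DefaultHandling>
  </ApplicationServer>
  <ProfilePartIndicator>0</ProfilePartIndicator>
</InitialFilterCriteria>"""

# Substitute the placeholder hostname, then parse to catch malformed XML.
ifc = IFC_TEMPLATE.format(mcp_hostname="mcp.example.com")
root = ET.fromstring(ifc)  # raises ParseError if the document is malformed
server_name = root.find("./ApplicationServer/ServerName").text
```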
+
+## Next steps
+
+- Learn about [preparing to deploy Integrated Mobile Control Point in Azure Communications Gateway](prepare-to-deploy.md)
+- Learn how to [integrate Azure Communications Gateway and Integrated Mobile Control Point with your network](prepare-for-live-traffic.md)
++
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Previously updated : 12/14/2022 Last updated : 04/26/2023
Traffic from all enterprises shares a single SIP trunk, using a multi-tenant for
## Voice features
-Azure Communications Gateway supports the SIP and RTP requirements for Teams Certified SBCs. It can transform call flows to suit your network with minimal disruption to existing infrastructure. Its voice features include:
+Azure Communications Gateway supports the SIP and RTP requirements for Teams Certified SBCs. It can transform call flows to suit your network with minimal disruption to existing infrastructure.
+
+Azure Communications Gateway's voice features include:
- **Optional direct peering to Emergency Routing Service Providers (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling with Azure Communications Gateway](emergency-calling.md). - **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and Microsoft Teams. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example:
Azure Communications Gateway supports the SIP and RTP requirements for Teams Cer
- Payload type interworking - Media transcoding - Ringback injection
+- **Call control integration for Teams Phone Mobile** - Azure Communications Gateway includes an optional IMS Application Server called Mobile Control Point (MCP). MCP ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
## API features
communications-gateway Prepare For Live Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md
Previously updated : 12/14/2022 Last updated : 05/11/2023 # Prepare for live traffic with Azure Communications Gateway
In some parts of this article, the steps you must take depend on whether your de
## 1. Connect Azure Communications Gateway to your networks 1. Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
-1. Configure your network devices to send and receive traffic from Azure Communications Gateway. You might need to configure SBCs, softswitches and access control lists (ACLs).
+1. Configure your network devices to send and receive SIP traffic from Azure Communications Gateway. You might need to configure SBCs, softswitches and access control lists (ACLs). To find the hostnames to use for SIP traffic:
+ 1. Go to the **Overview** page for your Azure Communications Gateway resource.
+ 1. In each **Service Location** section, find the **Hostname** field.
+1. If your Azure Communications Gateway includes integrated MCP, configure the connection to MCP:
+ 1. Go to the **Overview** page for your Azure Communications Gateway resource.
+ 1. In each **Service Location** section, find the **MCP hostname** field.
+ 1. Configure your test numbers with an iFC of the following form, replacing *`<mcp-hostname>`* with the MCP hostname for the preferred region for that subscriber.
+ ```xml
+ <InitialFilterCriteria>
+ <Priority>0</Priority>
+ <TriggerPoint>
+ <ConditionTypeCNF>0</ConditionTypeCNF>
+ <SPT>
+ <ConditionNegated>0</ConditionNegated>
+ <Group>0</Group>
+ <Method>INVITE</Method>
+ </SPT>
+ <SPT>
+ <ConditionNegated>1</ConditionNegated>
+ <Group>0</Group>
+ <SessionCase>4</SessionCase>
+ </SPT>
+ </TriggerPoint>
+ <ApplicationServer>
+ <ServerName>sips:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
+ <DefaultHandling>0</DefaultHandling>
+ </ApplicationServer>
+ <ProfilePartIndicator>0</ProfilePartIndicator>
+ </InitialFilterCriteria>
+ ```
1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Azure Internet Peering for Communications Services (also known as MAPS for Voice).
1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection.
    - The interval must be 150 ms (or 300 ms if you can't use 150 ms).
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
description: Learn how to complete the prerequisite tasks required to deploy Azu
- Previously updated : 03/13/2023
+ Last updated : 05/05/2023

# Prepare to deploy Azure Communications Gateway

This article guides you through each of the tasks you need to complete before you can start to deploy Azure Communications Gateway. To be deployed successfully, Azure Communications Gateway has dependencies on the state of your Operator Connect or Teams Phone Mobile environments.

The following sections describe the information you need to collect and the decisions you need to make prior to deploying Azure Communications Gateway.

## Prerequisites
To add the Project Synergy application:
1. Select **Properties**.
1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4.
+1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
   ```azurepowershell
   Connect-AzureAD -TenantId "<AADTenantID>"
   New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect"
   ```
The user who sets up Azure Communications Gateway needs to have the Admin user r
Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
+For Teams Phone Mobile, you must decide how your network should determine whether a call involves a Teams Phone Mobile subscriber and therefore route the call to Microsoft Phone System. You can:
+
+- Use Azure Communications Gateway's integrated Mobile Control Point (MCP).
+- Connect to an on-premises version of Mobile Control Point (MCP) from Metaswitch.
+- Use other routing capabilities in your core network.
+
+For more information on these options, see [Call control integration for Teams Phone Mobile](interoperability.md#call-control-integration-for-teams-phone-mobile) and [Mobile Control Point in Azure Communications Gateway](mobile-control-point.md).
+ To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md). :::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operators network.":::
To configure MAPS, follow the instructions in [Azure Internet peering for Commun
|The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**| |The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**| |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**|
- |Whether an on-premises Mobile Control Point is in use.|**Instance details: Enable on-premises MCP functionality**|
+ | How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you don't plan to offer Teams Phone Mobile or you'll use another method to route calls). |**Instance details: MCP**|
## 5. Collect Service Regions configuration values
Access to Azure Communications Gateway is restricted. When you've completed the
Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
-## 9. Register the Microsoft Voice Services resource provider
-
-If the **Microsoft.VoiceServices** resource provider isn't already registered in your subscription, register this provider.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your Azure subscription.
-1. Select **Resource providers** under the **Settings** tab.
-1. Search for the **Microsoft.VoiceServices** resource provider.
-1. Check if the resource provider is already marked as registered. If it isn't, choose the resource provider and select **Register**.
-
-## 10. Set up application roles for Azure Communications Gateway
-
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to an AzureCommunicationsGateway service principal under the Project Synergy Enterprise Application. You created the Project Synergy application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy). We created the Azure Communications Gateway service principal for you when you followed [9. Register the Microsoft Voice Services resource provider](#9-register-the-microsoft-voice-services-resource-provider).
-
-> [!IMPORTANT]
-> Granting permissions has two parts: configuring the service principal with the appropriate roles (this step) and adding the ID of the service principal to the Operator Connect or Teams Phone Mobile environment. You'll add the service principal to the Operator Connect or Teams Phone Mobile environment later, as part of [deploying Azure Communications Gateway](deploy.md).
-
-Do the following steps in the tenant that contains your Project Synergy application.
-
-1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
- 1. Open PowerShell.
- 1. Run the following command and check whether `AzureAD` appears in the output.
- ```azurepowershell
- Get-Module -ListAvailable
- ```
- 1. If `AzureAD` doesn't appear in the output, install the module:
- 1. Close your current PowerShell window.
- 1. Open PowerShell as an admin.
- 1. Run the following command.
- ```azurepowershell
- Install-Module AzureAD
- ```
- 1. Close your PowerShell admin window.
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
-1. Select **Azure Active Directory**.
-1. Select **Properties**.
-1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
-1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4.
- ```azurepowershell
- Connect-AzureAD -TenantId "<AADTenantID>"
- ```
-1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`, `TrunkManagement.Write`, `PartnerSettings.Read`.
- ```azurepowershell
- # Get the Service Principal ID for Azure Communications Gateway
- $commGwayApplicationId = "8502a0ec-c76d-412f-836c-398018e2312b"
- $commGwayEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$commGwayApplicationId'"
- $commGwayObjectId = $commGwayEnterpriseApplication.ObjectId
-
- # Get the Service Principal ID for Project Synergy (Operator Connect)
- $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
- $projectSynergyEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'"
- $projectSynergyObjectId = $projectSynergyEnterpriseApplication.ObjectId
-
- # Required Operator Connect - Project Synergy Roles
- $trunkManagementRead = "72129ccd-8886-42db-a63c-2647b61635c1"
- $trunkManagementWrite = "e907ba07-8ad0-40be-8d72-c18a0b3c156b"
-
- $partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2"
-
- $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e"
- $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
-
- $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
- $dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27"
-
- $requiredRoles = $trunkManagementRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite, $trunkManagementWrite, $partnerSettingsRead
-
- foreach ($role in $requiredRoles) {
- # Assign the relevant Role to the Azure Communications Gateway Service Principal
- New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
- }
- ```
- ## Next steps - [Create an Azure Communications Gateway resource](deploy.md)
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
- subject-reliability - references_regions Previously updated : 01/12/2023 Last updated : 05/11/2023 # What is reliability in Azure Communications Gateway?
Azure Communications Gateway ensures your service is reliable by using Azure red
Each Azure Communications Gateway deployment consists of three separate regions: a Management Region and two Service Regions. This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). :::image type="complex" source="media/reliability/azure-communications-gateway-management-and-service-regions.png" alt-text="Diagram of two service regions, a management region and two operator sites.":::
- Diagram showing two operator sites and the Azure regions for Azure Communications Gateway. Azure Communications Gateway has two service regions and one management region. The service regions connect to the management region and to the operator sites. The management region can be co-located with a service region.
+ Diagram showing two operator sites and the Azure regions for Azure Communications Gateway. Azure Communications Gateway has two service regions and one management region. The service regions connect to the management region and to the operator sites. The management region can be colocated with a service region.
:::image-end::: ## Service regions
-Service regions contain the voice and API infrastructure used for handling traffic between Microsoft Teams Phone System and your network. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode. This geo-redundancy is mandated by the Operator Connect and Teams Phone Mobile programs. Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
+Service regions contain the voice and API infrastructure used for handling traffic between Microsoft Phone System and your network. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode. This geo-redundancy is mandated by the Operator Connect and Teams Phone Mobile programs. Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
> [!TIP]
> You must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography.
We expect your network to have two geographically redundant sites. Each site sho
Diagram of two operator sites (operator site A and operator site B) and two service regions (service region A and service region B). Operator site A has a primary route to service region A and a secondary route to service region B. Operator site B has a primary route to service region B and a secondary route to service region A. :::image-end:::
-Each Azure Communications Gateway service region provides an SRV record containing all SIP peers within the region.
+Each Azure Communications Gateway service region provides an SRV record. This record contains all the SIP peers within the region at top priority and the SIP peers in the other region at a lower priority. If your Azure Communications Gateway includes Mobile Control Point (MCP), each service region also provides an SRV record for MCP (prioritizing the regions in the same way).
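Selection among SRV targets follows the standard DNS SRV rules (RFC 2782): prefer the lowest priority value, then pick within that priority group in proportion to weight. A minimal sketch of that selection (the record values shown are hypothetical, not your deployment's real targets):

```python
import random
from typing import List, Tuple

# Each SRV target is (priority, weight, hostname); lower priority is preferred.
SrvTarget = Tuple[int, int, str]

def pick_srv_target(targets: List[SrvTarget]) -> str:
    """Pick one target: lowest-priority group first, then weighted random."""
    best = min(t[0] for t in targets)
    group = [t for t in targets if t[0] == best]
    total = sum(t[1] for t in group)
    if total == 0:
        return random.choice(group)[2]  # all weights zero: uniform choice
    pick = random.uniform(0, total)
    for _, weight, host in group:
        pick -= weight
        if pick <= 0:
            return host
    return group[-1][2]

# Hypothetical record for one service region: local SIP peers at priority 1,
# the other region's peers at priority 2 as fallback.
targets = [(1, 50, "peer1.region1.example.com"),
           (1, 50, "peer2.region1.example.com"),
           (2, 100, "peer1.region2.example.com")]
```

With this record, new calls always go to a priority-1 (local-region) peer; the priority-2 targets are only used once the local peers are marked unavailable.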
Each site in your network must:

> [!div class="checklist"]
-> - Send traffic to its local Azure Communications Gateway service region by default.
-> - Locate Azure Communications Gateway peers within a region using DNS-SRV, as outlined in RFC 3263.
+> - Send traffic to its local Azure Communications Gateway service region by default, using the region's SRV record.
+> - Locate Azure Communications Gateway peers within a region using DNS SRV, as outlined in RFC 3263.
> - Make a DNS SRV lookup on the domain name for the service region, for example pstn-region1.xyz.commsgw.azure.example.com.
> - If the SRV lookup returns multiple targets, use the weight and priority of each target to select a single target.
-> - Use SIP OPTIONS (or a combination of OPTIONS and SIP traffic) to monitor the availability of the Azure Communications Gateway peers.
> - Send new calls to available Azure Communications Gateway peers.
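As an illustration, the priority-and-weight target selection that the checklist describes (following RFC 2782, which RFC 3263 builds on) can be sketched as follows. This is a hedged sketch, not the required implementation; the record values and hostnames are hypothetical:

```python
import random

def select_target(srv_records):
    """Pick one SRV target: lowest priority value wins; among targets
    sharing that priority, choose weighted-randomly by weight (RFC 2782)."""
    if not srv_records:
        return None
    best = min(r["priority"] for r in srv_records)
    candidates = [r for r in srv_records if r["priority"] == best]
    total = sum(r["weight"] for r in candidates)
    if total == 0:
        return random.choice(candidates)["target"]
    pick = random.uniform(0, total)
    running = 0
    for r in candidates:
        running += r["weight"]
        if pick <= running:
            return r["target"]
    return candidates[-1]["target"]

# Hypothetical records behind a service region's SRV name:
# same-region peers at top priority, other-region peers lower.
records = [
    {"target": "peer1.region1.example.com", "priority": 10, "weight": 50},
    {"target": "peer2.region1.example.com", "priority": 10, "weight": 50},
    {"target": "peer1.region2.example.com", "priority": 20, "weight": 100},
]
```

With these records, selection always lands on one of the two region-1 peers; the region-2 peer is only used once both region-1 peers are marked unavailable.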
+When your network routes calls to Microsoft Phone System (through Azure Communications Gateway's SIP peers), it must:
+
+> [!div class="checklist"]
+> - Use SIP OPTIONS (or a combination of OPTIONS and SIP traffic) to monitor the availability of the Azure Communications Gateway SIP peers.
> - Retry INVITEs that received 408, 503, or 504 responses, or that received no response, by rerouting them to other available peers in the local site. Hunt to the second service region only if all peers in the local service region have failed.
+> - Never retry calls that receive error responses other than 408, 503 and 504.
-Your network must not retry calls that receive error responses other than 408, 503 and 504.
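The retry rules in the checklist above can be sketched as a small routing loop. This is an illustrative sketch under invented peer names and responses, not a definitive implementation:

```python
RETRIABLE = {408, 503, 504}

def should_retry(status_code):
    """Retry only on 408/503/504, or when no response arrived at all."""
    return status_code is None or status_code in RETRIABLE

def route_invite(local_peers, remote_peers, send):
    """Try peers in the local service region first; hunt to the other
    region only after every local peer fails with a retriable error."""
    for peer in local_peers + remote_peers:
        status = send(peer)
        if not should_retry(status):
            return peer, status     # success or non-retriable error: stop
    return None, None               # every peer failed with a retriable error

# Simulated responses for a hypothetical deployment.
responses = {"local1": 503, "local2": None, "remote1": 200}
peer, status = route_invite(["local1", "local2"], ["remote1"],
                            lambda p: responses[p])
# falls through both local peers, then lands on the other region's peer
```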
+If your Azure Communications Gateway deployment includes integrated Mobile Control Point (MCP), your network must do as follows for MCP:
-The details of this routing behavior will be specific to your network. You must agree them with your onboarding team during your integration project.
+> [!div class="checklist"]
+> - Detect when MCP in a region is unavailable, mark the targets for that region's SRV record as unavailable, and retry periodically to determine when the region is available again. MCP does not respond to SIP OPTIONS.
+> - Handle 5xx responses from MCP according to your organization's policy. For example, you could retry the request, or you could allow the call to continue without passing through Azure Communications Gateway and into Microsoft Phone System.
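Because MCP doesn't answer SIP OPTIONS, availability has to be inferred from signalling failures and re-probed periodically. A minimal sketch of that bookkeeping, assuming an arbitrary 30-second cool-off before re-probing, might look like:

```python
import time

class McpRegionTracker:
    """Track MCP availability per region: record failures, mark the
    region unavailable, and allow a probe again after a cool-off.
    Illustrative sketch only; the cool-off value is an assumption."""

    def __init__(self, retry_after_seconds=30, clock=time.monotonic):
        self.retry_after = retry_after_seconds
        self.clock = clock
        self.failed_at = {}          # region -> time of last failure

    def record_failure(self, region):
        self.failed_at[region] = self.clock()

    def record_success(self, region):
        self.failed_at.pop(region, None)

    def is_available(self, region):
        failed = self.failed_at.get(region)
        if failed is None:
            return True
        # permit a periodic probe once the cool-off has elapsed
        return self.clock() - failed >= self.retry_after
```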
+The details of this routing behavior are specific to your network. You must agree them with your onboarding team during your integration project.
## Management regions
Azure availability zones have a minimum of three physically separate groups of d
### Zone down experience for service regions
-During a zone-wide outage, calls handled by the affected zone are terminated, with a brief loss of capacity within the region until the service's self-healing rebalances underlying resources to healthy zones. This self-healing isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state will compensate for a lost zone, using capacity from other zones. Traffic carrying resources are deployed in a zone-redundant manner but at the lowest scale traffic might be handled by a single resource. In this case, the failover mechanisms described in this article rebalance all traffic to the other service region while the resources that carry traffic are redeployed in a healthy zone.
+During a zone-wide outage, calls handled by the affected zone are terminated, with a brief loss of capacity within the region until the service's self-healing rebalances underlying resources to healthy zones. This self-healing isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state compensates for a lost zone, using capacity from other zones. Traffic carrying resources are deployed in a zone-redundant manner but at the lowest scale traffic might be handled by a single resource. In this case, the failover mechanisms described in this article rebalance all traffic to the other service region while the resources that carry traffic are redeployed in a healthy zone.
### Zone down experience for the management region
This section describes the behavior of Azure Communications Gateway during a reg
### Disaster recovery: cross-region failover for service regions
-During a region-wide outage, the failover mechanisms described in this article (OPTIONS polling and SIP retry on failure) will rebalance all traffic to the other service region, maintaining availability. Microsoft will start restoring regional redundancy. Restoring regional redundancy during extended downtime might require using other Azure regions. If we need to migrate a failed region to another region, we'll consult you before starting any migrations.
+During a region-wide outage, the failover mechanisms described in this article (OPTIONS polling and SIP retry on failure) will rebalance all traffic to the other service region, maintaining availability. We'll start restoring regional redundancy. Restoring regional redundancy during extended downtime might require using other Azure regions. If we need to migrate a failed region to another region, we'll consult you before starting any migrations.
+
+The SIP peers in Azure Communications Gateway provide OPTIONS polling to allow your network to determine peer availability. For MCP, your network must be able to detect when MCP is unavailable, and retry periodically to determine when MCP is available again. MCP doesn't respond to SIP OPTIONS.
### Disaster recovery: cross-region failover for management regions

Voice traffic and the API Bridge are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the API Bridge Number Management Portal might need to sign in again.
-Monitoring services might be temporarily unavailable until service has been restored. If the management region experiences extended downtime, Microsoft will migrate the impacted resources to another available region.
+Monitoring services might be temporarily unavailable until service has been restored. If the management region experiences extended downtime, we'll migrate the impacted resources to another available region.
## Choosing management and service regions
Choose a management region from the following list:
- Southeast Asia
- Australia East
-Management regions can be co-located with service regions. We recommend choosing the management region nearest to your service regions.
+Management regions can be colocated with service regions. We recommend choosing the management region nearest to your service regions.
## Service-level agreements
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 03/16/2023 Last updated : 05/05/2023

# What's new in Azure Communications Gateway?

This article covers new features and improvements for Azure Communications Gateway.
-## March 2023: Simpler authentication for Operator Connect APIs
+## May 2023
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. Azure Communications Gateway therefore needs to authenticate with your Operator Connect or Teams Phone Mobile environment.
+### Integrated Mobile Control Point for Teams Phone Mobile integration
-From March 2023, Azure Communications Gateway automatically provides a service principal for this authentication. You must set up specific permissions for this service principal and then add the service principal to your Operator Connect or Teams Phone Mobile environment. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+From May 2023, you can deploy Mobile Control Point (MCP) as part of Azure Communications Gateway. MCP is an IMS Application Server that simplifies interworking with Microsoft Phone System for mobile calls. It ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
+
+You can add MCP when you deploy Azure Communications Gateway or by requesting changes to an existing deployment. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) and [Deploy Azure Communications Gateway](deploy.md), or [Get support or request changes to your Azure Communications Gateway](request-changes.md).
+
+### Authentication with managed identities for Operator Connect APIs
+
+Azure Communications Gateway contains services that need to access the Operator Connect APIs on your behalf. Azure Communications Gateway therefore needs to authenticate with your Operator Connect or Teams Phone Mobile environment.
+
+From May 2023, Azure Communications Gateway automatically provides a managed identity for this authentication. You must set up specific permissions for this managed identity and then add the Application ID of this managed identity to your Operator Connect or Teams Phone Mobile environment. For more information, see [Deploy Azure Communications Gateway](deploy.md).
This new authentication model replaces an earlier model that required you to create an App registration and manage secrets for it. With the new model, you no longer need to create, manage or rotate secrets.
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
This tutorial covers deployment of a confidential VM with a custom configuration
## Prerequisites - An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/). -- If you want to deploy from the Azure CLI, [install PowerShell](/powershell/azure/install-az-ps) and [install the Azure CLI](/cli/azure/install-azure-cli).
+- If you want to deploy from the Azure CLI, [install PowerShell](/powershell/azure/install-azure-powershell) and [install the Azure CLI](/cli/azure/install-azure-cli).
## Deploy confidential VM template with Azure CLI
confidential-computing Quick Create Confidential Vm Azure Cli Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md
Create a confidential [disk encryption set](../virtual-machines/linux/disks-enab
1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. ```azurecli
- Connect-AzureAD -Tenant "your tenant ID"
- New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ Connect-Graph -TenantId "your tenant ID" -Scopes "Application.ReadWrite.All"
+ New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Microsoft Azure confidential ledger is a new and highly secure service for manag
- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Python 3.6+](/azure/developer/python/configure-local-development-environment)-- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## Set up
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
This article describes how to configure your container app to use managed identi
|--|-| | Azure account | An Azure account with an active subscription. If you don't have one, you can [can create one for free](https://azure.microsoft.com/free/). | | Azure CLI | If using Azure CLI, [install the Azure CLI](/cli/azure/install-azure-cli) on your local machine. |
-| Azure PowerShell | If using PowerShell, [install the Azure PowerShell](/powershell/azure/install-az-ps) on your local machine. Ensure that the latest version of the Az.App module is installed by running the command `Install-Module -Name Az.App`. |
+| Azure PowerShell | If using PowerShell, [install the Azure PowerShell](/powershell/azure/install-azure-powershell) on your local machine. Ensure that the latest version of the Az.App module is installed by running the command `Install-Module -Name Az.App`. |
|Azure Container Registry | A private Azure Container Registry containing an image you want to pull. [Quickstart: Create a private container registry using the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) or [Quickstart: Create a private container registry using Azure PowerShell](../container-registry/container-registry-get-started-powershell.md)| ## Setup
container-instances Container Instances Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-powershell.md
Azure Container Registry is a private registry service for building, storing, an
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This quickstart requires Azure PowerShell module. Run `Get-Module -ListAvailable Az` to determine your installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+This quickstart requires Azure PowerShell module. Run `Get-Module -ListAvailable Az` to determine your installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
You must also have Docker installed locally. Docker provides packages for [macOS][docker-mac], [Windows][docker-windows], and [Linux][docker-linux] systems.
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
In addition to providing incremental data feed from analytical store to diverse
- Changes can be synchronized "from the beginning" or "from a given timestamp" or "from now"
- There's no limitation around the fixed data retention period for which changes are available
-> [!IMPORTANT]
-> Please note that if the "Start from beginning" option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. Similarly, when the "Start from timestamp" option is selected, the initial load processes the data from the given timestamp, and incremental or changed data is captured in subsequent runs. The `Capture intermediate updates`, `Capture Deletes` and `Capture Transactional store TTLs`, which are found under the [source options](get-started-change-data-capture.md) tab, determine if intermediate updates and deletes are captured in sinks.
- ## Features Change data capture in Azure Cosmos DB analytical store supports the following key features.
-### Capturing deletes and intermediate updates
+### Capturing changes from the beginning
+
+When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This process is limited by the `analytical TTL` property: documents that TTL has removed from the analytical store aren't included in the change feed. For example, imagine a container with `analytical TTL` set to 31536000 seconds, which is equivalent to one year. If you create a CDC process for this container, only documents newer than one year are included in the initial load.
+
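To make the TTL arithmetic concrete, here's a small sketch computing the oldest document timestamp that can appear in the initial load; the dates are invented for illustration and the TTL value matches the one-year example above:

```python
from datetime import datetime, timedelta, timezone

ANALYTICAL_TTL_SECONDS = 31_536_000          # 365 days * 24 h * 3600 s

def initial_load_cutoff(now, analytical_ttl_seconds):
    """Oldest timestamp still present in the analytical store: anything
    older was TTL-removed and won't appear in the initial load."""
    return now - timedelta(seconds=analytical_ttl_seconds)

# Hypothetical run date.
now = datetime(2023, 5, 12, tzinfo=timezone.utc)
cutoff = initial_load_cutoff(now, ANALYTICAL_TTL_SECONDS)
# documents written before the cutoff (365 days earlier) are excluded
```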
+### Capturing changes from a given timestamp
+
+When the `Start from timestamp` option is selected, the initial load processes the data from the given timestamp, and incremental or changed data is captured in subsequent runs. This process is also limited by the `analytical TTL` property.
+
+### Capturing changes from now
+
+When the `Start from now` option is selected, past operations of the container aren't captured; only changes made from that point onward are.
+### Capturing deletes, intermediate updates, and TTLs
+
+The change data capture feature for the analytical store captures deletes, intermediate updates, and TTL operations. The captured deletes and updates can be applied on Sinks that support delete and update operations. The {_rid} value uniquely identifies the records and so by specifying {_rid} as key column on the Sink side, the update and delete operations would be reflected on the Sink.
-The change data capture feature for the analytical store captures deleted records and the intermediate updates. The captured deletes and updates can be applied on Sinks that support delete and update operations. The {_rid} value uniquely identifies the records and so by specifying {_rid} as key column on the Sink side, the update and delete operations would be reflected on the Sink.
+Note that TTL operations are considered deletes. Check the [source settings](get-started-change-data-capture.md) section for more details, including the support for intermediate updates and deletes in sinks.
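A minimal sketch of applying such a change feed to a sink keyed on `{_rid}`, with a plain dict standing in for the sink; the event shape here is invented for illustration:

```python
def apply_cdc_batch(sink, events):
    """Apply CDC events to a dict-based sink keyed by `_rid`.
    Creates and intermediate updates upsert the row; deletes
    (including TTL removals, which surface as deletes) remove it."""
    for event in events:
        rid = event["_rid"]
        if event["operation"] == "delete":
            sink.pop(rid, None)
        else:                        # create or intermediate update
            sink[rid] = event["doc"]
    return sink

sink = {}
events = [
    {"_rid": "r1", "operation": "create", "doc": {"name": "a"}},
    {"_rid": "r1", "operation": "update", "doc": {"name": "b"}},
    {"_rid": "r2", "operation": "create", "doc": {"name": "c"}},
    {"_rid": "r2", "operation": "delete"},
]
apply_cdc_batch(sink, events)
# r1 ends up with its latest value; r2 is removed by the delete
```

Because `_rid` uniquely identifies each record, declaring it as the key column on the sink side lets updates and deletes land on the right row.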
### Filter the change feed for a specific type of operation
FROM c
WHERE Category = 'Urban' ```
-> [!NOTE]
-> If you would like to enable source-query based change data capture on Azure Data Factory data flows during preview, please email [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com) and share your **subscription Id** and **region**. This is not necessary to enable source-query based change data capture on an Azure Synapse data flow.
### Multiple CDC processes
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB for Apache Cassandra [!INCLUDE[Cassandra](../includes/appliesto-cassandra.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-azure-powershell) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB for Gremlin [!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-azure-powershell) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Multiple-region accounts experience different behaviors depending on the followi
The following table summarizes the high-availability capabilities of various account configurations.
-|KPI|Single-region writes without availability zones|Single-region writes with availability zones|Multiple-region, single-region writes without availability zones|Multiple-region, single-region writes with availability zones|Multiple-region, single-region writes with or without availability zones|
+|KPI|Single-region writes without availability zones|Single-region writes with availability zones|Multiple-region, single-region writes without availability zones|Multiple-region, single-region writes with availability zones|Multiple-region, multiple-region writes with or without availability zones|
|--|--|--|--|--|--|
|Write availability SLA | 99.99% | 99.995% | 99.99% | 99.995% | 99.999% |
|Read availability SLA | 99.99% | 99.995% | 99.999% | 99.999% | 99.999% |
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
To ensure that you have access to Azure Cosmos DB metrics from the portal, you n
Use the following steps to configure a service endpoint to an Azure Cosmos DB account by using Azure PowerShell:
-1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
1. Enable the service endpoint for an existing subnet of a virtual network.
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
To get started using partition merge, navigate to the **Features** page in your
Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it takes 15-20 minutes to take effect.

> [!CAUTION]
-> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 or Java SDK >= 4.42.0 or Azure Cosmos DB Spark connector >= 4.18.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, older Java SDK, any JavaScript SDK, any Python SDK, any Go SDK) or unsupported connectors (Azure Data Factory, Azure Search, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
+> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 or Java SDK >= 4.42.0 or Azure Cosmos DB Spark connector >= 4.18.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, older Java SDK, any JavaScript SDK, any Python SDK, any Go SDK) or unsupported connectors (Azure Data Factory, Azure Search, Azure Functions extension <= 3.x, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
:::image type="content" source="media/merge/merge-feature-blade.png" alt-text="Screenshot of Features pane and Partition merge feature.":::
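The version gating described in the caution above can be sketched as a simple check. The minimums mirror those stated in the caution; the lookup table and function are illustrative only, not part of any Azure SDK:

```python
def parse_version(v):
    """Turn '3.27.0' into the comparable tuple (3, 27, 0)."""
    return tuple(int(part) for part in v.split("."))

# Minimum supported versions for merge-enabled accounts (from the
# caution above); anything not listed here is blocked.
MIN_SUPPORTED = {
    ".NET": (3, 27, 0),
    "Java": (4, 42, 0),
    "Spark connector": (4, 18, 0),
}

def is_allowed(sdk, version):
    minimum = MIN_SUPPORTED.get(sdk)
    if minimum is None:
        return False              # other SDKs and connectors fail
    return parse_version(version) >= minimum
```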
Containers that meet both of these conditions are likely to benefit from merging
- **Condition 1**: The current RU/s per physical partition is <3000 RU/s - **Condition 2**: The current average storage in GB per physical partition is <20 GB
-Condition 1 often occurs when you've previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
+Condition 1 often occurs when you have previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions. #### Condition 1
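The two conditions above amount to a quick back-of-the-envelope check. This sketch is illustrative only; the actual per-partition figures come from your account's metrics:

```python
def likely_benefits_from_merge(current_rus, storage_gb, physical_partitions):
    """Condition 1: <3000 RU/s per physical partition.
    Condition 2: <20 GB average storage per physical partition."""
    rus_per_partition = current_rus / physical_partitions
    gb_per_partition = storage_gb / physical_partitions
    return rus_per_partition < 3000 and gb_per_partition < 20

# e.g. 10,000 RU/s and 50 GB over 10 physical partitions gives
# 1,000 RU/s and 5 GB per partition, so merging likely helps
```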
To enroll in the preview, your Azure Cosmos DB account must meet all the followi
- Azure Data Factory
- Azure Stream Analytics
- Logic Apps
- - Azure Functions < 4.0.0
+ - Azure Functions extension <= 3.x (Azure Functions extension 4.0 and higher is supported)
- Azure Search - Azure Cosmos DB Spark connector < 4.18.0 - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0
If you enroll in the preview, the following connectors fail.
- Azure Data Factory ¹
- Azure Stream Analytics ¹
- Logic Apps ¹
-- Azure Functions < 4.0.0
+- Azure Functions extension <= 3.x (Azure Functions extension 4.0 and higher is supported) ┬╣
- Azure Search ¹
- Azure Cosmos DB Spark connector < 4.18.0
- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Use the following steps to migrate your account from periodic backup to continuo
## <a id="powershell"></a>Migrate using PowerShell
-1. Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps) or any version higher than 6.2.0.
+1. Install the [latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) or any version higher than 6.2.0.
2. To use ``Continuous7Days`` mode for provisioning or migrating, you'll have to use a preview of the ``Az.CosmosDB`` module. Use ``Install-Module -Name Az.CosmosDB -AllowPrerelease``
3. Next, run the following steps:
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB for MongoDB [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-azure-powershell) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
Get started with MongoDB to create databases, collections, and docs within your
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - [.NET 6.0](https://dotnet.microsoft.com/download)-- [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
### Prerequisite check
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
Certificate-based authentication enables your client application to be authentic
## Prerequisites
-* Install the [latest version](/powershell/azure/install-az-ps) of Azure PowerShell.
+* Install the [latest version](/powershell/azure/install-azure-powershell) of Azure PowerShell.
* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB for NoSQL [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-azure-powershell) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](../cassandr)
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
For PowerShell and CLI commands, the tier value is optional, if it isn't already
1. Install the latest version of Azure PowerShell
- * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-az-ps).
+ * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-azure-powershell).
* For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the module by running ``Install-Module -Name Az.CosmosDB -AllowPrerelease``. 1. Next connect to your Azure account and select the required subscription with the following commands:
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Use the following steps to get the restore details from Azure portal:
## <a id="restore-account-powershell"></a>Restore an account using Azure PowerShell
-Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps) or version higher than 9.6.0. Next connect to your Azure account and select the required subscription with the following commands:
+Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) or a version higher than 9.6.0. Next, connect to your Azure account and select the required subscription with the following commands:
1. Sign into Azure using the following command:
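The sign-in and subscription-selection commands referenced above are, in outline (a sketch; the subscription ID is a placeholder):

```powershell
# Sign in to Azure interactively
Connect-AzAccount

# Select the subscription that contains the account to restore
Set-AzContext -Subscription "<subscription-id>"
```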
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
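This version-check and sign-in pattern, repeated across the sample articles below, can be sketched as (a minimal sketch; the `CurrentUser` scope is an assumption, not stated in the articles):

```powershell
# List installed Az versions; the samples require Az 5.4.0 or later
Get-Module -ListAvailable Az

# Install or update the Az module if the requirement isn't met
Install-Module -Name Az -Scope CurrentUser -Force

# Sign in to Azure
Connect-AzAccount
```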
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/lock.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Failover Priority Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Firewall Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/firewall-create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires the Az PowerShell module 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
This PowerShell script updates the Azure regions that an Azure Cosmos DB account
- You need an existing Azure Cosmos DB account in an Azure resource group. -- The script requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to list your installed versions. If you need to install PowerShell, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+- The script requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to list your installed versions. If you need to install PowerShell, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
- Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
This PowerShell script lists or gets specific Azure Cosmos DB accounts, API for
## Prerequisites -- This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+- This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
- Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/lock.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/lock.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/autoscale.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-index-none.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-large-partition-key.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/list-get.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/lock.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/throughput.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
[!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB for Table [!INCLUDE[Table](../includes/appliesto-table.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-azure-powershell) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Azure Cosmos DB from our GitHub repository, [Azure Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-java.md
Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershe
Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-azure-powershell).
It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
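As a hedged illustration of the account-creation step described above (the resource group, account name, and location are placeholders; `-ApiKind "Table"` is an assumption for a Table API account):

```powershell
# Create an Azure Cosmos DB for Table account; the name must be 3-44
# characters of lowercase letters, numbers, or hyphens, and globally unique
New-AzCosmosDBAccount `
    -ResourceGroupName "my-resource-group" `
    -Name "my-cosmos-table-account" `
    -Location "West US 2" `
    -ApiKind "Table"
```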
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-nodejs.md
Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershe
Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-azure-powershell).
It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-python.md
Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershe
Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-azure-powershell).
It typically takes several minutes for the Azure Cosmos DB account creation process to complete.
cost-management-billing Get Small Usage Datasets On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-small-usage-datasets-on-demand.md
description: The article explains how you can use the Cost Details API to get raw, unaggregated cost data that corresponds to your Azure bill. Previously updated : 01/30/2023 Last updated : 05/10/2023
The [Cost Details](/rest/api/cost-management/generate-cost-details-report) repor
## Permissions
-To use the Cost Details API, you need read only permissions for supported features and scopes. For more information, see:
+To use the Cost Details API, you need read-only permissions for supported features and scopes.
+
+>[!NOTE]
+> The [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) doesn't support management groups for either EA or MCA customers.
+
+For more information, see:
- [Azure RBAC scopes - role permissions for feature behavior](../costs/understand-work-scopes.md#feature-behavior-for-each-role) - [Enterprise Agreement scopes - role permissions for feature behavior](../costs/understand-work-scopes.md#feature-behavior-for-each-role-1)
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs. Previously updated : 02/10/2023 Last updated : 05/10/2023
Cost Management Contributor is the recommended least-privilege role. The role al
- **Viewing cost-saving recommendations** - Cost Management Readers and Cost Management Contributors have access to *view* cost recommendations by default. However, access to act on the cost recommendations requires access to individual resources. Consider granting a [service-specific role](../../role-based-access-control/built-in-roles.md#all) if you want to act on a cost-based recommendation. > [!NOTE]
-> Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement subscriptions.
+> Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement subscriptions. The [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) also doesn't support management groups for either EA or MCA customers.
Management groups are only supported if they contain up to 3,000 Enterprise Agreement (EA), Pay-as-you-go (PAYG), or Microsoft internal subscriptions. Management groups with more than 3,000 subscriptions or subscriptions with other offer types, like Microsoft Customer Agreement or Azure Active Directory subscriptions, can't view costs.
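The management-group limits above amount to a simple eligibility check. A Python sketch (the offer-type labels here are illustrative placeholders, not official API values):

```python
# Documented limits: a management group can view costs only when every
# subscription is EA, pay-as-you-go, or Microsoft internal, and the group
# contains at most 3,000 subscriptions.
SUPPORTED_OFFERS = {"EnterpriseAgreement", "PayAsYouGo", "MicrosoftInternal"}
MAX_SUBSCRIPTIONS = 3000

def management_group_can_view_costs(offer_types, subscription_count):
    """Mirror the documented limits as a boolean eligibility check."""
    return (subscription_count <= MAX_SUBSCRIPTIONS
            and set(offer_types) <= SUPPORTED_OFFERS)
```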
cost-management-billing Mca Role Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-role-migration.md
+
+ Title: Copy billing roles from one MCA to another MCA across tenants with a script
+
+description: Describes how to Copy billing roles from one MCA to another MCA across tenants using a PowerShell script.
+tags: billing
+Last updated : 05/04/2023
+# Copy billing roles from one MCA to another MCA across tenants with a script
+
+Subscription migration is automated using the Azure portal; however, role migrations aren't. The information in this article helps billing account owners automate role assignments when they consolidate Microsoft Customer Agreement (MCA) enterprise accounts. You can copy billing roles from one MCA enterprise account to another across tenants with a script. The following example scenario describes the overall process.
+
+Contoso Ltd acquires Fabrikam, Inc. Both Contoso and Fabrikam have an MCA in their respective tenants. Contoso wants to bring billing management of Fabrikam subscriptions under its own Contoso MCA. Contoso also wants a separate invoice generated for Fabrikam subscriptions and to enable the assignment of users in the Fabrikam tenant to billing roles.
+
+The Contoso MCA billing account owner uses the following process:
+
+1. Associate the Fabrikam tenant with the Contoso MCA billing account.
+1. Create a new billing profile for Fabrikam subscriptions.
+1. Assign a billing profile owner role to a user in Fabrikam tenant.
+
+Keep in mind that many other users hold billing roles in the source Fabrikam MCA at the billing account, billing profile, and invoice section levels.
+
+After the Fabrikam billing account owner on the source MCA has been granted a Billing Account Owner role on the Contoso (target) MCA, they can use the following sections to automate the billing role migration from the source (Fabrikam) MCA to the target (Contoso) MCA. The script works at the **billing profile** scope.
+
+## Prerequisites
+
+- You must have the **Billing account owner** role on the target MCA and the **Billing account owner** or **Billing account contributor** role on the source MCA.
+- A storage account prepared for the script. For more information, see [Get started with AzCopy](../../storage/common/storage-use-azcopy-v10.md).
+
+## Prepare the target environment
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that has the necessary permissions to the target tenant MCA billing account.
+1. Associate the source tenant with the target MCA billing account. For more information, see [Add an associated billing tenant](manage-billing-across-tenants.md#add-an-associated-billing-tenant).
+1. Add the billing account owner from the associated tenant. For more information, see [Assign roles to users from the associated billing tenant](manage-billing-across-tenants.md#assign-roles-to-users-from-the-associated-billing-tenant).
+1. In the target tenant, add a billing profile and invoice sections as needed.
+
+## Prepare and run the script
+
+Use the following steps to prepare and then run the script.
+
+1. Copy the script example from the [Role migration script example](#role-migration-script-example) section.
+1. Paste the script into a new file and save it locally as a .ps1 file.
+1. Update the script with source to target mappings for:
+ - `Tenant`
+ - `Billing account`
+ - `Billing profile`
+ - `Invoice sections`
+1. Sign in to the Azure portal (source tenant) and open Cloud Shell. If you're prompted to select between Bash and PowerShell, select **PowerShell**.
+ :::image type="content" source="./media/mca-role-migration/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell symbol." lightbox="./media/mca-role-migration/cloud-shell.png" :::
+1. If you used Bash previously, select **PowerShell** in the Cloud Shell toolbar.
+ :::image type="content" source="./media/mca-role-migration/bash-powershell.png" alt-text="Screenshot showing the PowerShell selection." lightbox="./media/mca-role-migration/bash-powershell.png" :::
+1. Upload the PS1 file to your Azure Storage account.
+1. Execute the PS1 file.
+1. Authenticate to Azure Cloud Shell.
+1. Verify that the roles are in the target MCA after the script runs.
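The source-to-target mappings in step 3 matter because the script only copies roles for invoice sections present in its hash table; anything missing is skipped silently. Before running it, a quick sanity check along these lines can help (Python, hypothetical helper that is not part of the script itself):

```python
def unmapped_invoice_sections(source_sections, section_mapping):
    """Return source invoice section names with no target in the mapping.

    The migration script's ContainsKey check skips unmapped sections
    without any warning, so anything returned here would not be migrated.
    """
    return [name for name in source_sections if name not in section_mapping]
```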
+
+## Role migration script example
+
+Use the following example script to automate billing role migration. Roles are copied from the source MCA billing profile to the target MCA billing profile in a different tenant.
+
+```powershell
+## Define source target mapping for
+## 1. Tenant
+## 2. Billing Account
+## 3. Billing Profile
+## 4. Invoice Sections
+## (source) MCA-E details
+$tenantId = ""
+$billingAccount=""
+$billingProfile = ""
+## (destination) MCA-E details
+$targetBillingProfile = ""
+$targetTenantId = ""
+$targerbillingAccout=""
+## Invoice section mapping in hash table
+$hash = @{
+"" = ""; #invoice section 1
+"" = ""; #invoice section 2
+}
+## Connect to the Azure account using device authentication with the tenant ID
+Connect-AzAccount -UseDeviceAuthentication -TenantId $tenantId
+Set-AzContext -TenantId $tenantId
+## Acquire an access token for the current user
+$var = Get-AzAccessToken
+$auth = 'Bearer ' + $var.Token
+#### Get Billing Account Role Assignments from source MCA-E
+#define parameters for REST API call
+$params = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/"+ $billingAccount +"/billingRoleAssignments?api-version=2019-10-01-preview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'GET'
+ ContentType = 'application/json'
+}
+#### Call API with parameters defined above
+$ret = Invoke-RestMethod @params
+#### Initialize array lists
+$ArrayListBARoles = [System.Collections.Generic.List[string]]::new();
+$ArrayListBPRoles = [System.Collections.Generic.List[string]]::new();
+$ArrayListISRoles = [System.Collections.Generic.List[string]]::new();
+#### Add each billing account role and principal ID to the array list
+#### Push down the billing account role assignments to billing profile role assignments (replacing 5 series with 4 series)
+foreach($j in $ret.value){
+ $BANameArrayArray= $j.name -replace "500000", "500000" #-split '_'
+ foreach($i in $BANameArrayArray){
+ $ArrayListBARoles.Add($i)
+ }
+ }
+#### Get Billing Role assignments for billing profile
+$paramsBPRoleAssignments = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/"+$billingAccount +"/billingProfiles/" +$billingProfile +"/billingRoleAssignments?api-version=2019-10-01-preview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'GET'
+ ContentType = 'application/json'
+ }
+$retBPRoles = Invoke-RestMethod @paramsBPRoleAssignments
+#### Add each billing profile role to the array list
+foreach($k in $retBPRoles.value){
+ $BPNameArrayArray= $k.name #-split '_'
+ foreach($l in $BPNameArrayArray){
+ $ArrayListBPRoles.Add($l)
+ }
+}
+#### Get Invoice sections for billing profile
+$invoiceSections = Get-AzInvoiceSection -BillingAccountName $billingAccount -BillingProfileName $billingProfile
+for ($ii=0; $ii -lt $ArrayListBARoles.count; $ii=$ii+1){
+ $paramsBARoleCreation = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/"+$targerbillingAccout+"/createBillingRoleAssignment?api-version=2020-12-15-privatepreview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'POST'
+ ContentType = 'application/json'
+ }
+ $BodyBARoleCreation = @{
+ principalTenantId = $tenantId
+ roleDefinitionId = "/providers/Microsoft.Billing/billingAccounts/" +$targerbillingAccout +"/" +($ArrayListBARoles[$ii] -SPLIT '_')[0]
+ principalId=($ArrayListBARoles[$ii] -SPLIT '_')[1]
+ }
+ $retBARoles = Invoke-RestMethod @paramsBARoleCreation -body @($BodyBARoleCreation | ConvertTo-Json)
+}
+#BILLING PROFILE
+for ($ii=0; $ii -lt $ArrayListBPRoles.count; $ii=$ii+1){
+ $paramsBPRoleCreation = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/" +$targerbillingAccout +"/billingProfiles/"+ $targetBillingProfile +"/createBillingRoleAssignment?api-version=2020-12-15-privatepreview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'POST'
+ ContentType = 'application/json'
+ }
+ $BodyBPRoleCreation = @{
+ principalTenantId = $tenantId
+ roleDefinitionId = "/providers/Microsoft.Billing/billingAccounts/" +$targerbillingAccout +"/billingProfiles/"+ $targetBillingProfile +"/" +($ArrayListBPRoles[$ii] -SPLIT '_')[0]
+ principalId=($ArrayListBPRoles[$ii] -SPLIT '_')[1]
+ }
+ $retBPRoles = Invoke-RestMethod @paramsBPRoleCreation -body @($BodyBPRoleCreation | ConvertTo-Json)
+}
+#INVOICE SECTIONS
+$targetinvoiceSection=""
+#Get Roles for each invoice section
+foreach ($m in $invoiceSections){
+ if ($hash.ContainsKey($m.Name)){
+ $targetinvoiceSection=$hash[$m.Name]
+ Write-Output "targetinvoiceSection: $targetinvoiceSection"
+
+ $paramsISRoleAssignments = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/" +$billingAccount +"/billingProfiles/" + $billingProfile +"/invoiceSections/" +$m.Name+ "/billingRoleAssignments?api-version=2019-10-01-preview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'GET'
+ ContentType = 'application/json'
+ }
+ $retISRoles = Invoke-RestMethod @paramsISRoleAssignments
+ $ISNameArrayArray=$null
+ $ArrayListISRoles = [System.Collections.Generic.List[string]]::new();
+ foreach($n in $retISRoles.value){
+ $ISNameArrayArray= $n.name #-split '_'
+ foreach($o in $ISNameArrayArray){
+ $ArrayListISRoles.Add($o)
+ }
+ }
+ for ($ii=0; $ii -lt $ArrayListISRoles.count; $ii=$ii+1){
+ $paramsISRoleCreation = @{
+ Uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/" +$targerbillingAccout+ "/billingProfiles/"+ $targetBillingProfile +"/invoiceSections/"+ $targetinvoiceSection +"/createBillingRoleAssignment?api-version=2020-12-15-privatepreview"
+ Headers = @{ 'Authorization' = $auth }
+ Method = 'POST'
+ ContentType = 'application/json'
+ }
+ $BodyISRoleCreation = @{
+ principalTenantId = $tenantId
+ roleDefinitionId = "/providers/Microsoft.Billing/billingAccounts/" +$targerbillingAccout +"/billingProfiles/"+ $targetBillingProfile +"/invoiceSections/"+ $targetinvoiceSection+ "/" +($ArrayListISRoles[$ii] -SPLIT '_')[0]
+ #userEmailAddress = ($graph.UserPrincipalName -Replace '_', '@' -split '#EXT#@' )[0]
+ principalId=($ArrayListISRoles[$ii] -SPLIT '_')[1]
+ }
+ $resISRolesCreation= Invoke-RestMethod @paramsISRoleCreation -body @($BodyISRoleCreation | ConvertTo-Json)
+ }
+ }
+}
+```
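The script relies on each billing role assignment `name` having the form `<roleDefinitionId>_<principalId>` — an assumption inferred from the `-SPLIT '_'` usage above, not a documented API contract. The parsing and path-building logic can be sketched in Python:

```python
def parse_role_assignment_name(name):
    """Split '<roleDefinitionId>_<principalId>' on the first underscore."""
    role_definition_id, _, principal_id = name.partition("_")
    return role_definition_id, principal_id

def role_definition_path(billing_account, billing_profile, role_definition_id):
    """Build the billing-profile-scoped role definition path used in the POST body."""
    return (f"/providers/Microsoft.Billing/billingAccounts/{billing_account}"
            f"/billingProfiles/{billing_profile}/{role_definition_id}")
```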
+
+## Next steps
+
+- If necessary, give access to billing accounts, billing profiles, and invoice sections using the information at [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
Previously updated : 01/28/2023 Last updated : 05/08/2023 # Copy data from or to MongoDB Atlas using Azure Data Factory or Synapse Analytics
This MongoDB Atlas connector is supported for the following capabilities:
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
-Specifically, this MongoDB Atlas connector supports **versions up to 4.2**.
- ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]
Use the following steps to create a linked service to MongoDB Atlas in the Azure
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
-2. Search for MongoDB and select the MongoDB Atlas connector.
+2. Search for MongoDB Atlas and select the MongoDB Atlas connector.
:::image type="content" source="media/connector-mongodb-atlas/mongodb-atlas-connector.png" alt-text="Select the MongoDB Atlas connector.":::
The following properties are supported for MongoDB Atlas linked service:
| type |The type property must be set to: **MongoDbAtlas** |Yes | | connectionString |Specify the MongoDB Atlas connection string, e.g. `mongodb+srv://<username>:<password>@<clustername>.<randomString>.<hostName>/<dbname>?<otherProperties>`. <br/><br /> You can also put a connection string in Azure Key Vault. For more details, see [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |Yes | | database | Name of the database that you want to access. | Yes |
+| mongoDbAtlasDriverVersion | Specify the driver version as **2.19.0**, which supports MongoDB version 3.6 and higher. For more information, see this [article](https://www.mongodb.com/docs/drivers/csharp/current/compatibility/). |No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No | **Example:**
The following properties are supported for MongoDB Atlas linked service:
"type": "MongoDbAtlas", "typeProperties": { "connectionString": "mongodb+srv://<username>:<password>@<clustername>.<randomString>.<hostName>/<dbname>?<otherProperties>",
- "database": "myDatabase"
+ "database": "myDatabase",
+ "mongoDbAtlasDriverVersion": "<driver version>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
The Azure Key Vault task might fail with an Access Denied error if the correct p
## Updating active triggers
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
>[!WARNING] >If you do not use latest versions of PowerShell and Data Factory module, you may run into deserialization errors while running the commands.
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
The following sample demonstrates how to use a pre- and post-deployment script w
## Install Azure PowerShell
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
>[!WARNING] >Make sure to use **PowerShell Core** in ADO task to run the script
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime.md
These articles shows how to provision an Azure-SSIS IR by using the [Azure porta
- You want to connect to on-premises data stores from SSIS packages running on your Azure-SSIS IR without configuring a self-hosted IR. -- **Azure PowerShell (optional)**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps), if you want to run a PowerShell script to provision your Azure-SSIS IR.
+- **Azure PowerShell (optional)**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell), if you want to run a PowerShell script to provision your Azure-SSIS IR.
### Regional support
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
To create a shared self-hosted IR using Azure PowerShell, you can take following
- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-az-ps). You use PowerShell to run a script to create a self-hosted integration runtime that can be shared with other data factories.
+- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). You use PowerShell to run a script to create a self-hosted integration runtime that can be shared with other data factories.
> [!NOTE] > For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory).
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
To customize your Azure-SSIS IR, you need the following items:
## Instructions
-You can provision or reconfigure your Azure-SSIS IR with custom setups on ADF UI. If you want to do the same using PowerShell, download and install [Azure PowerShell](/powershell/azure/install-az-ps).
+You can provision or reconfigure your Azure-SSIS IR with custom setups on ADF UI. If you want to do the same using PowerShell, download and install [Azure PowerShell](/powershell/azure/install-azure-powershell).
### Standard custom setup
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
Some of these features require you to install additional components to customize
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-1. Download and install [Azure PowerShell](/powershell/azure/install-az-ps).
+1. Download and install [Azure PowerShell](/powershell/azure/install-azure-powershell).
2. When you provision or reconfigure the Azure-SSIS IR with PowerShell, run `Set-AzDataFactoryV2IntegrationRuntime` with **Enterprise** as the value for the **Edition** parameter before you start the Azure-SSIS IR. Here is a sample script:
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
This section shows you how to use Azure PowerShell to create, start, and monitor
- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-az-ps).
+- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
### Sample Code
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
This section shows you how to use Azure PowerShell to create, start, and monitor
- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-az-ps).
+- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
- **Azure Data Factory**. Follow the instructions in [Create an Azure Data Factory using PowerShell](./quickstart-create-data-factory-powershell.md) to create a data factory and a pipeline.
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
In this section, you trigger a pipeline run and then monitor it.
In this section, you use Azure PowerShell to create a Data Factory pipeline with a stored procedure activity that invokes an SSIS package.
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Create a data factory You can either use the same data factory that has the Azure-SSIS IR or create a separate data factory. The following procedure provides steps to create a data factory. You create a pipeline with a stored procedure activity in this data factory. The stored procedure activity executes a stored procedure in the SSISDB database to run your SSIS package.
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
This quickstart describes how to use PowerShell to create an Azure Data Factory.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
>[!WARNING] >If you do not use latest versions of PowerShell and Data Factory module, you may run into deserialization errors while running the commands.
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
* **Azure subscription**. If you don't have a subscription, you can create a [free trial](https://azure.microsoft.com/pricing/free-trial/) account. * **Azure Storage account**. You use the blob storage as **source** and **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one. * Create a **blob container** in Blob Storage, create an input **folder** in the container, and upload some files to the folder. You can use tools such as [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to connect to Azure Blob storage, create a blob container, upload input file, and verify the output file.
-* Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps). This quickstart uses PowerShell to invoke REST API calls.
+* Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). This quickstart uses PowerShell to invoke REST API calls.
* **Create an application in Azure Active Directory** following [this instruction](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values that you use in later steps: **application ID**, **clientSecrets**, and **tenant ID**. Assign application to "**Contributor**" role at either subscription or resource group level. >[!NOTE] > For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri).
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
* **Azure Storage account**. The Azure Storage account is used as staging blob storage in the bulk copy operation. * **Azure SQL Database**. This database contains the source data. * **Azure Synapse Analytics**. This data warehouse holds the data copied over from the SQL Database.
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
In this tutorial, you will:
- Confirm that your database server does not have an SSISDB instance already. The provisioning of an Azure-SSIS IR does not support using an existing SSISDB instance. -- **Azure PowerShell**. To run a PowerShell script to set up your Azure-SSIS IR, follow the instructions in [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+- **Azure PowerShell**. To run a PowerShell script to set up your Azure-SSIS IR, follow the instructions in [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!NOTE] > For a list of Azure regions in which Azure Data Factory and Azure-SSIS IR are currently available, see [Azure Data Factory and Azure-SSIS IR availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all).
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md
In this section, you create a blob container named **adftutorial** in your Azure
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Install the latest version of Azure PowerShell if you don't already have it on your machine. For detailed instructions, see [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Install the latest version of Azure PowerShell if you don't already have it on your machine. For detailed instructions, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
#### Log in to PowerShell
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites
-* Azure PowerShell. Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* Azure PowerShell. Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
* **Azure SQL Database**. You use the database as the **source** data store. If you don't have a database in Azure SQL Database, see the [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) article for steps to create one. * **Azure Storage account**. You use the blob storage as the **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one. Create a container named **adftutorial**.
If you don't have an Azure subscription, create a [free](https://azure.microsoft
``` ### Azure PowerShell
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Create a data factory
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
* **Azure SQL Database**. You use the database as the source data store. If you don't have a database in Azure SQL Database, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) for steps to create one. * **Azure Storage**. You use the blob storage as the sink data store. If you don't have a storage account, see [Create a storage account](../storage/common/storage-account-create.md) for steps to create one. Create a container named adftutorial.
-* **Azure PowerShell**. Follow the instructions in [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* **Azure PowerShell**. Follow the instructions in [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Create a data source table in your SQL database 1. Open SQL Server Management Studio. In **Server Explorer**, right-click the database, and choose **New Query**.
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
- **HDInsight cluster.** Create a HDInsight cluster and join it to the virtual network you created in the previous step by following this article: [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md). Here is a sample configuration of HDInsight in a virtual network. :::image type="content" source="media/tutorial-transform-data-using-hive-in-vnet-portal/hdinsight-virtual-network-settings.png" alt-text="HDInsight in a virtual network":::
-- **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+- **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
- **A virtual machine**. Create an Azure virtual machine and join it to the same virtual network that contains your HDInsight cluster. For details, see [How to create virtual machines](../virtual-network/quick-create-portal.md#create-virtual-machines).

### Upload Hive script to your Blob Storage account
data-factory Tutorial Transform Data Hive Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
- **HDInsight cluster.** Create a HDInsight cluster and join it to the virtual network you created in the previous step by following this article: [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md). Here is a sample configuration of HDInsight in a virtual network. :::image type="content" source="media/tutorial-transform-data-using-hive-in-vnet/hdinsight-in-vnet-configuration.png" alt-text="HDInsight in a virtual network":::
-- **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+- **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Upload Hive script to your Blob Storage account
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
> [!NOTE]
> HDInsight supports only general-purpose storage accounts with the standard tier. Make sure that the account is not a premium or blob-only storage account.
-* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Upload the Python script to your Blob storage account
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]

* **Azure Storage account**. You create a Python script and an input file, and upload them to the Azure storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
-* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Upload Python script to your Blob Storage account
data-factory Data Factory Copy Activity Tutorial Using Dotnet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
A pipeline can have more than one activity. And, you can chain two activities (r
* Go through [Tutorial Overview and Pre-requisites](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md) to get an overview of the tutorial and complete the **prerequisite** steps.
* Visual Studio 2012 or 2013 or 2015
* Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/)
-* Azure PowerShell. Follow instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps) article to install Azure PowerShell on your computer. You use Azure PowerShell to create an Azure Active Directory application.
+* Azure PowerShell. Follow instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell) article to install Azure PowerShell on your computer. You use Azure PowerShell to create an Azure Active Directory application.
### Create an application in Azure Active Directory

Create an Azure Active Directory application, create a service principal for the application, and assign it to the **Data Factory Contributor** role.
data-factory Data Factory Copy Activity Tutorial Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-powershell.md
A pipeline can have more than one activity. And, you can chain two activities (r
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]

- Complete prerequisites listed in the [tutorial prerequisites](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md) article.
-- Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+- Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Steps

Here are the steps you perform as part of this tutorial:
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/how-to-invoke-ssis-package-stored-procedure-activity.md
In this section you use Azure PowerShell to create a Data Factory pipeline with
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
### Create a data factory

The following procedure provides steps to create a data factory. You create a pipeline with a stored procedure activity in this data factory. The stored procedure activity executes a stored procedure in the SSISDB database to run your SSIS package.
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell
-- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 9.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 9.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
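The version check and upgrade mentioned above can be sketched as follows. This is a minimal illustration, not part of the original article; the `Install-Module` scope and repository options may differ in your environment:

```powershell
# Check which Az module versions are installed (9.0.0 or later is required)
Get-Module -ListAvailable -Name Az | Select-Object Name, Version

# If the module is missing or outdated, install the latest version from the PowerShell Gallery
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

# Create a connection with Azure (required when running PowerShell locally)
Connect-AzAccount
```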
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
defender-for-cloud Defender For Storage Classic Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-enable.md
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using PowerShell:
-1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
1. Use these commands to register your subscription to the Microsoft Defender for Cloud Resource Provider:
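The registration and enablement steps described above might look like the following sketch. The cmdlets are from the Az.Resources and Az.Security modules; the `StorageAccounts` pricing name assumes the per-transaction plan discussed in this article:

```powershell
# Register the Microsoft Defender for Cloud resource provider on the subscription
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Security'

# Enable Microsoft Defender for Storage (per-transaction pricing) subscription-wide
Set-AzSecurityPricing -Name 'StorageAccounts' -PricingTier 'Standard'
```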
If you want to disable Defender for Storage on the account:
To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using PowerShell:
-1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
1. Enable Microsoft Defender for Storage for the desired storage account with the [`Enable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection) cmdlet:
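A minimal sketch of the per-account call, with the subscription, resource group, and account name segments left as placeholders you must replace:

```powershell
# Enable Defender for Storage on a single storage account (placeholder resource ID)
Enable-AzSecurityAdvancedThreatProtection `
    -ResourceId '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>/'

# Verify the current setting for the same account
Get-AzSecurityAdvancedThreatProtection `
    -ResourceId '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>/'
```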
To exclude an Azure Storage account from Microsoft Defender for Storage (classic
#### Use PowerShell to exclude an Azure Storage account
-1. If you don't have the Azure Az PowerShell module installed, install it using [the instructions from the Azure PowerShell documentation](/powershell/azure/install-az-ps).
+1. If you don't have the Azure Az PowerShell module installed, install it using [the instructions from the Azure PowerShell documentation](/powershell/azure/install-azure-powershell).
1. Using an authenticated account, connect to Azure with the ``Connect-AzAccount`` cmdlet, as explained in [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
You can add subscriptions to the management group that you created.
### Assign Azure roles to users with PowerShell:
-1. Install [Azure PowerShell](/powershell/azure/install-az-ps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell).
2. Run the following commands: ```azurepowershell
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
Title: Plan a Defender for Servers deployment to protect on-premises and multicloud servers description: Design a solution to protect on-premises and multicloud servers with Microsoft Defender for Servers. Previously updated : 11/06/2022 Last updated : 05/11/2023
The following diagram shows an overview of the Defender for Servers deployment p
When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers. You can enable Microsoft Defender for Servers at the Log Analytics workspace level, but only servers reporting to that workspace will be protected and billed and those servers won't receive some benefits, such as Microsoft Defender for Endpoint, vulnerability assessment, and just-in-time VM access.
-## Defender for Servers pricing FAQ
-
-- [My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?](#my-subscription-has-microsoft-defender-for-servers-enabled-which-machines-do-i-pay-for)
-- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
-
-### My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?
-
-When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all machines in that subscription (including machines that are part of PaaS services and reside in this subscription) are billed according to their power state as shown in the following table:
-
-| State | Description | Instance usage billed |
-|--|--|--|
-| Starting | VM is starting up. | Not billed |
-| Running | Normal working state for a VM | Billed |
-| Stopping | This state is transitional. When completed, it will show as Stopped. | Billed |
-| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed |
-| Deallocating | This state is transitional. When completed, the VM will show as Deallocated. | Not billed |
-| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |
--
-### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
-
-If you already have a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
-
-To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace.
-
-The discount will be effective starting from the approval date, and won't take place retroactively.
-
## Next steps

After kicking off the planning process, review the [second article in this planning series](plan-defender-for-servers-data-workspace.md) to understand how your data is stored, and Log Analytics workspace requirements.
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
To create a rule:
### Azure PowerShell

> [!NOTE]
-> This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps). To learn how to migrate to the Az PowerShell module, see [Migrate Azure PowerShell from AzureRM to Az](/powershell/azure/migrate-from-azurerm-to-az).
+> This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). To learn how to migrate to the Az PowerShell module, see [Migrate Azure PowerShell from AzureRM to Az](/powershell/azure/migrate-from-azurerm-to-az).
> [!IMPORTANT]
> The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module. For these cmdlets, see [AzureRM.Sql](/powershell/module/AzureRM.Sql/). The arguments for the commands in the Az module and in the AzureRm modules are substantially identical.
deployment-environments Tutorial Deploy Environments In Cicd Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md
In this section, you make some changes to the repository and test the CI/CD pipe
1. Merge the PR.
- Your changes are published into the production environment, and delete the branch and pull request environments.
+ Your changes are published into the production environment, and the branch and pull request environments are deleted.
## Clean up resources
devtest-labs Devtest Lab Grant User Permissions To Specific Lab Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-grant-user-permissions-to-specific-lab-policies.md
For example, in order to grant users read/write permission to the **Allowed VM S
To learn more about custom roles in Azure RBAC, see [Azure custom roles](../role-based-access-control/custom-roles.md).

## Creating a lab custom role using PowerShell
-In order to get started, you'll need to [install Azure PowerShell](/powershell/azure/install-az-ps).
+In order to get started, youΓÇÖll need to [install Azure PowerShell](/powershell/azure/install-azure-powershell).
Once you've set up the Azure PowerShell cmdlets, you can perform the following tasks:
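One such task, creating a lab custom role, might be sketched as below. This is an illustration only: the role name, description, and the specific `Microsoft.DevTestLab` action string are hypothetical placeholders, not values taken from this article:

```powershell
# Sketch: clone an existing built-in role and save it as a lab custom role.
$role = Get-AzRoleDefinition -Name 'DevTest Labs User'
$role.Id = $null
$role.Name = 'Lab Policy Editor'                # hypothetical role name
$role.Description = 'Can read and write lab policies.'
$role.Actions.Add('Microsoft.DevTestLab/labs/policySets/policies/write')  # illustrative action
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<subscription-id>')            # placeholder scope
New-AzRoleDefinition -Role $role
```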
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
This article shows you how to create an Azure DevTest Labs virtual machine (VM)
You need the following prerequisites to work through this article:

- Access to a lab in DevTest Labs. [Create a lab](devtest-lab-create-lab.md), or use an existing lab.
-- Azure PowerShell. [Install Azure PowerShell](/powershell/azure/install-az-ps), or [use Azure Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) in the Azure portal.
+- Azure PowerShell. [Install Azure PowerShell](/powershell/azure/install-azure-powershell), or [use Azure Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) in the Azure portal.
## PowerShell VM creation script
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get Azure recommendations for your SQL Server migration
-description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to get SKU recommendation when you migrate SQL Server databases to the Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.
+description: Discover how to utilize the Azure SQL Migration extension in Azure Data Studio for obtaining Azure recommendations while migrating SQL Server databases to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.
Previously updated : 02/22/2022 Last updated : 05/09/2022
-# Get Azure recommendations to migrate your SQL Server database
+# Get Azure recommendations to migrate your SQL Server database (Preview)
-Learn how to use the unified experience in the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to assess your database requirements, get right-sized SKU recommendations for Azure resources, and migrate your SQL Server databases to Azure.
+The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
-Before you migrate your SQL Server databases to Azure, it's important to assess the databases to identify any potential migration issues. You can remediate anticipated issues, and then confidently migrate your databases to Azure.
+Learn how to use this unified experience, collecting performance data from your source SQL Server instance to get right-sized Azure recommendations for your Azure SQL targets.
-It's equally important to identify the right-sized Azure resource to migrate to so that your database workload performance requirements are met with minimal cost.
+## Overview
+
+Before migrating to Azure SQL, you can use the SQL Migration extension in Azure Data Studio to generate right-sized recommendations (Preview) for Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines targets. The tool collects performance data from your source SQL instance (running on-premises or in another cloud) and recommends a compute and storage configuration that meets your workload's needs.
+
+The diagram presents the workflow for Azure recommendations in the Azure SQL Migration extension for Azure Data Studio:
+
-The Azure SQL Migration extension for Azure Data Studio provides both the assessment and SKU recommendations when you're trying to choose the best option to migrate your SQL Server databases to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database. The extension has an intuitive interface to help you efficiently run the assessment and generate recommendations.
> [!NOTE]
> The assessment and Azure recommendation features in the Azure SQL Migration extension for Azure Data Studio support source SQL Server instances running on Windows or Linux.

## Prerequisites
-To get an Azure recommendation for your SQL Server database migration, you must meet the following prerequisites:
+To get started with Azure recommendations (Preview) for your SQL Server database migration, you must meet the following prerequisites:
- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
-- Ensure that the logins that you use to connect the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permissions.
+- Ensure that the login you use to connect to the source SQL Server instance has the [minimum permissions](#minimum-permissions).
-## Performance data collection and SKU recommendation
+## Supported sources and targets
-The Azure SQL Migration extension first collects performance data from your SQL Server instance. Then, it analyzes the data to generate a recommended SKU for Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database. The SKU recommendation is designed to meet your database performance requirements at the lowest cost in the Azure service.
+Azure recommendations can be generated for the following SQL Server versions:
-The following diagram shows the workflow for data collection and SKU recommendations:
+- SQL Server 2008 and later versions on Windows or Linux are supported.
+- SQL Server running on other clouds may be supported, but the accuracy of results may vary.
+Azure recommendations can be generated for the following Azure SQL targets:
+
+- Azure SQL Database
+ - Hardware families: Standard series (Gen5)
+ - Service tiers: General Purpose, Business Critical, Hyperscale
+- Azure SQL Managed Instance
+ - Hardware families: Standard series (Gen5), Premium series, Premium series memory-optimized
+ - Service tiers: General Purpose, Business Critical
+- SQL Server on Azure Virtual Machine
+ - VM families: General purpose, memory-optimized
+ - Storage families: Premium SSD
++
+## Performance data collection
+
+Before recommendations can be generated, performance data needs to be collected from your source SQL Server instance. During this data collection step, multiple [dynamic management views](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views) (DMVs) from your SQL Server instance are queried to capture the performance characteristics of your workload. The tool captures metrics including CPU, memory, storage, and IO usage every 30 seconds, and saves the performance counters locally to your machine as a set of CSV files.
+
+### Instance level
+This performance data is collected once per SQL Server instance:
+
+| Performance dimension | Description | Dynamic management view (DMV) |
+| -- | -- | -- |
+| SqlInstanceCpuPercent | The amount of CPU that the SQL Server process was using, as a percentage | `sys.dm_os_ring_buffers` |
+| PhysicalMemoryInUse | Overall memory footprint of the SQL Server process | `sys.dm_os_process_memory` |
+| MemoryUtilizationPercentage | SQL Server's memory utilization | `sys.dm_os_process_memory` |
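As an illustration, the instance-level memory counters above come from `sys.dm_os_process_memory`, which you can sample directly. This sketch is not part of the extension; it assumes the `SqlServer` PowerShell module is installed and that `MYSERVER` is replaced with your instance name:

```powershell
# Sample the SQL Server process memory counters listed above
Invoke-Sqlcmd -ServerInstance 'MYSERVER' -Query @"
SELECT physical_memory_in_use_kb / 1024.0 AS physical_memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
"@
```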
-The following list describes each step in the workflow:
+### Database level
-(1) **Performance data collection**: To start the performance data collection process in the migration wizard, select **Get Azure recommendation** and choose the option to collect performance data. Enter the folder path where the collected data will be saved, and then select **Start**.
+| Performance dimension | Description | Dynamic management view (DMV) |
+| -- | -- | -- |
+| DatabaseCpuPercent | The total percentage of CPU used by a database | `sys.dm_exec_query_stats` |
+| CachedSizeInMb | Total size in megabytes of cache used by a database | `sys.dm_os_buffer_descriptors` |
+
+### File level
+
+| Performance dimension | Description | Dynamic management view (DMV) |
+| -- | -- | -- |
+| ReadIOInMb | The total number of megabytes read from this file | `sys.dm_io_virtual_file_stats` |
+| WriteIOInMb | The total number of megabytes written to this file | `sys.dm_io_virtual_file_stats` |
+| NumOfReads | The total number of reads issued on this file | `sys.dm_io_virtual_file_stats` |
+| NumOfWrites | The total number of writes issued on this file | `sys.dm_io_virtual_file_stats` |
+| ReadLatency | The IO read latency on this file | `sys.dm_io_virtual_file_stats` |
+| WriteLatency | The IO write latency on this file | `sys.dm_io_virtual_file_stats` |
+
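The file-level counters above are derived from `sys.dm_io_virtual_file_stats`. A hedged sketch of sampling them directly (again assuming the `SqlServer` PowerShell module, with `MYSERVER` as a placeholder instance name):

```powershell
# Per-file IO statistics corresponding to the file-level counters above
Invoke-Sqlcmd -ServerInstance 'MYSERVER' -Query @"
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read    / 1048576.0 AS read_io_mb,
       vfs.num_of_bytes_written / 1048576.0 AS write_io_mb
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id;
"@
```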
+A minimum of 10 minutes of data collection is required before a recommendation can be generated, but to accurately assess your workload, it's recommended that you run the data collection for a duration sufficiently long to capture both peak and off-peak usage.
+
+To initiate the data collection process, connect to your source SQL instance in Azure Data Studio and launch the SQL Migration wizard. In step 2, select **Get Azure recommendation**, choose **Collect performance data now**, and then select a folder on your machine where the collected data will be saved.
:::image type="content" source="media/ads-sku-recommend/collect-performance-data.png" alt-text="Screenshot that shows the wizard pane to collect performance data for SKU recommendations.":::
-
-When you start the data collection process in the migration wizard, the Azure SQL Migration extension for Azure Data Studio collects data from your SQL Server instance. The data collection includes hardware configuration and aggregated SQL Server-specific performance data from system Dynamic Management Views like CPU utilization, memory utilization, storage size, input/output (I/O), throughput, and I/O latency.
+ > [!IMPORTANT] >
-> - The data collection process runs for 10 minutes to generate the first recommendation. It's important to start the data collection process when your active database workload reflects usage that's similar to your production scenarios.
-> - After the first recommendation is generated, you can continue to run the data collection process to refine recommendations. This option is especially useful if your usage patterns vary over time.
+> The data collection process runs for 10 minutes to generate the first recommendation. It's important to start the data collection process when your active database workload reflects usage that's similar to your production scenarios.
+>
+> After the first recommendation is generated, you can continue to run the data collection process to refine recommendations. This option is especially useful if your usage patterns vary over time.
+
+The data collection process begins once you select **Start**. Every 10 minutes, the collected data points are aggregated, and the max, mean, and variance of each counter are written to disk as a set of three CSV files.
+
+You typically see a set of CSV files with the following suffixes in the selected folder:
+
+* `SQLServerInstance`_CommonDbLevel_Counters.csv: Contains static configuration data about the database file layout and metadata.
+* `SQLServerInstance`_CommonInstanceLevel_Counters.csv: Contains static data about the hardware configuration of the server instance.
+* `SQLServerInstance`_PerformanceAggregated_Counters.csv: Contains aggregated performance data that's updated frequently.
+
+During this time, leave Azure Data Studio open, though you can continue with other operations. At any time, you can stop the data collection process by returning to this page and selecting **Stop data collection**.
++
+## Generating right-sized recommendations
+
+If you have already collected performance data from a previous session, or by using a different tool (such as Data Migration Assistant), you can import any existing performance data by selecting the option **I already have the performance data**. Proceed to select the folder where your performance data (three .csv files) is saved, and then select **Start** to initiate the recommendation process.
++
+> [!NOTE]
+> Step one of the SQL Migration wizard asks you to select a set of databases to assess, and these are the only databases that are taken into consideration during the recommendation process.
+>
+> However, the performance data collection process collects performance counters for **all databases** from the source SQL Server instance, not just the ones that were selected.
+>
+> This means that previously collected performance data can be used to repeatedly regenerate recommendations for a different subset of databases by specifying a different list on step one.
+
+## Recommendation parameters
+There are multiple configurable settings that could affect your recommendations.
+
-(2) **Save generated data files locally**: The performance data is periodically aggregated and written to the local folder that you selected in the migration wizard. You typically see a set of CSV files with the following suffixes in the folder:
+Select the **Edit parameters** option to adjust these parameters according to your needs.
-- **_CommonDbLevel_Counters.csv** : Contains static configuration data about the database file layout and metadata.
-- **_CommonInstanceLevel_Counters.csv** : Contains static data about the hardware configuration of the server instance.
-- **_PerformanceAggregated_Counters.csv** : Contains aggregated performance data that's updated frequently.
-(3) **Analyze and recommend SKU**: The SKU recommendation process analyzes the captured common and performance data to recommend the minimum configuration with the least cost that will meet your database's performance requirements. You can also view details about the reason behind the recommendation and the source properties that were analyzed. *For SQL Server on Azure Virtual Machines, the process also includes a recommendation for storage configuration for data files, log files, and tempdb.*
-You can use optional parameters as inputs about the production workload to refine recommendations:
+- **Scale factor**:
+ This option allows you to provide a buffer to apply to each performance dimension. This option accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you determine that a four-vCore CPU requirement has a scale factor of 150%, the true CPU requirement is six vCores.
+
+ The default scale factor volume is 100%.
+
+- **Percentage utilization**:
+ The percentile of data points to use when the performance data is aggregated.
+
+ The default value is the 95th percentile.
+
+- **Enable preview features**:
+ This option allows for configurations to be recommended that may not be generally available to all users in all regions yet.
+
+ This option is turned off by default.
+
+- **Enable elastic recommendation**:
+ This option uses an alternate recommendation model that utilizes personalized price-performance profiling against existing on-cloud customers.
+
+ This option is turned off by default.
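To illustrate how the scale factor and percentage utilization parameters interact, here's a minimal Python sketch. This is hypothetical, for illustration only; the actual recommendation model inside the migration extension is more sophisticated. It aggregates sampled vCore demand at the chosen percentile, then inflates the result by the scale factor.

```python
import math

def recommend_vcores(cpu_samples, percentile=95, scale_factor=1.0):
    """Aggregate sampled vCore demand at a percentile, then apply a scale factor.

    Illustrative only: the real SKU recommendation logic is internal to the
    Azure SQL Migration extension.
    """
    ordered = sorted(cpu_samples)
    # Nearest-rank percentile: the smallest value covering `percentile`% of samples.
    rank = max(1, math.ceil(percentile / 100 * len(ordered)))
    baseline = ordered[rank - 1]
    # A 150% scale factor turns a 4-vCore requirement into 6 vCores.
    return math.ceil(baseline * scale_factor)

# Example: a 4-vCore requirement with a 150% scale factor needs 6 vCores.
print(recommend_vcores([2, 3, 4, 4, 4], percentile=95, scale_factor=1.5))  # → 6
```

Note how the defaults (95th percentile, 100% scale factor) size for sustained demand rather than the absolute peak, which is why the scale factor exists as a buffer for seasonal or future growth.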
-- **Scale factor**: Scale (*comfort*) factor is used to inflate or deflate a SKU recommendation based on your understanding of the production workload. For example, if you determine that a four-vCore CPU requirement has a scale factor of 150%, the true CPU requirement is six vCores. The default scale factor volume is 100%.
-- **Percentage utilization**: The percentile of data points to be used as performance data is aggregated. The default value is the 95th percentile.
-- **Enable preview features**: Enabling this option includes the latest hardware generations that have improved performance and scalability. Currently, these SKUs are in preview, and they might not be available yet in all regions. This option is enabled by default.
> [!IMPORTANT]
> The data collection process terminates if you close Azure Data Studio. The data that was collected up to that point is saved in your folder.
You can use optional parameters as inputs about the production workload to refine recommendations:
> - Reopen Azure Data Studio and import the data files that are saved in your local folder. Then, generate a recommendation from the collected data.
> - Reopen Azure Data Studio and start data collection again by using the migration wizard.
-### Import existing performance data
-
-You can import any existing performance data that you collected earlier by using the Azure SQL Migration extension or by using the [console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db).
-
-In the migration wizard, enter the folder path where the performance data files are saved. Then, select **Start** to view the recommendation and related details.
-
+## Minimum permissions
+The SQL Server login used for performance data collection requires specific permissions to query the necessary system views. You can create a least-privileged user for assessment and performance data collection by using the following script:
+
+```sql
+-- Create a login to run the assessment
+USE master;
+GO
+
+CREATE LOGIN [assessment] WITH PASSWORD = '<STRONG PASSWORD>';
+
+-- Create user in every database other than TempDB and model and provide minimal read-only permissions
+EXECUTE sp_MSforeachdb '
+ USE [?];
+  IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN TRY
+ CREATE USER [assessment] FOR LOGIN [assessment]
+ END TRY
+ BEGIN CATCH
+ PRINT ERROR_MESSAGE()
+ END CATCH'
+
+EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN TRY
+ GRANT SELECT ON sys.sql_expression_dependencies TO [assessment]
+ END TRY
+ BEGIN CATCH
+ PRINT ERROR_MESSAGE()
+ END CATCH'
+
+EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN TRY
+ GRANT VIEW DATABASE STATE TO [assessment]
+ END TRY
+ BEGIN CATCH
+ PRINT ERROR_MESSAGE()
+ END CATCH'
+
+-- Provide server level read-only permissions
+GRANT SELECT ON sys.sql_expression_dependencies TO [assessment];
+GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [assessment];
+GRANT VIEW DATABASE STATE TO [assessment];
+GRANT VIEW SERVER STATE TO [assessment];
+GRANT VIEW ANY DEFINITION TO [assessment];
+
+-- Provide msdb specific permissions
+USE msdb;
+GO
+
+GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[syscategories] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [assessment];
+GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [assessment];
+
+-- USE master;
+-- GO
+-- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [assessment] END TRY BEGIN CATCH SELECT ERROR_MESSAGE() END CATCH';
+-- DROP LOGIN [assessment];
+```
+
+## Unsupported scenarios and limitations
+- Azure recommendations don't include price estimates, because prices may vary depending on region, currency, and discounts such as the [Azure Hybrid Benefit](/azure/azure-sql/azure-hybrid-benefit). To get price estimates, use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator), or create a [SQL assessment](/azure/migrate/concepts-azure-sql-assessment-calculation) in Azure Migrate.
+- Recommendations for Azure SQL Database with the [DTU-based purchasing model](/azure/azure-sql/database/migrate-dtu-to-vcore) aren't supported.
+- Currently, Azure recommendations for the Azure SQL Database serverless compute tier and elastic pools aren't supported.
+- Currently, Azure recommendations for SQL Server on Azure Virtual Machines using Premium SSD v2 aren't supported.
+
+## Troubleshooting
+- No recommendations generated
+  - If no recommendations were generated, no configuration could be identified that fully satisfies the performance requirements of your source instance. To see the reasons why a particular size, service tier, or hardware family was disqualified:
+  - Access the logs from Azure Data Studio by going to Help > Show All Commands > Open Extension Logs Folder
+  - Navigate to Microsoft.mssql > SqlAssessmentLogs > open SkuRecommendationEvent.log
+  - The log contains a trace of every potential configuration that was assessed and the reason why it was or wasn't considered an eligible configuration:
+ :::image type="content" source="media/ads-sku-recommend/recommendation-log.png" border="false" alt-text="Screenshot that shows SKU recommendations log.":::
+ - Try regenerating the recommendation with [elastic recommendation](#recommendation-parameters) enabled. This option uses an alternate recommendation model, which utilizes personalized price-performance profiling against existing on-cloud customers.
## Next steps
-- Learn how to [migrate databases by using the Azure SQL Migration extension in Azure Data Studio](migration-using-azure-data-studio.md).
+- Learn how to [migrate databases by using the Azure SQL Migration extension in Azure Data Studio](migration-using-azure-data-studio.md).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
Azure DNS Private Resolver is a new service that enables you to query Azure DNS
If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-This article assumes you've [installed the Az Azure PowerShell module](/powershell/azure/install-az-ps).
+This article assumes you've [installed the Az Azure PowerShell module](/powershell/azure/install-azure-powershell).
## Install the Az.DnsResolver PowerShell module
event-grid Custom Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-powershell.md
When you're finished, you see that the event data has been sent to the web app.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
-This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Create a resource group
event-hubs Event Hubs Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-powershell.md
An Azure account with an active subscription. [Create an account for free](https
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you're using PowerShell locally, you must run the latest version of PowerShell to complete this quickstart. If you need to install or upgrade, see [Install and Configure Azure PowerShell](/powershell/azure/install-az-ps).
+If you're using PowerShell locally, you must run the latest version of PowerShell to complete this quickstart. If you need to install or upgrade, see [Install and Configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Create a resource group

Run the following command to create a resource group. A resource group is a logical collection of Azure resources. All resources are deployed and managed in a resource group.
expressroute Reset Circuit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/reset-circuit.md
When an operation on an ExpressRoute circuit doesn't complete successfully, the
## Reset a circuit
-1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
2. Open your PowerShell console with elevated privileges, and connect to your account. Use the following example to help you connect:
firewall Deploy Ps Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps-policy.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-This procedure requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
+This procedure requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
## Set up the network
firewall Deploy Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-This procedure requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
+This procedure requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
## Set up the network
firewall Firewall Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sftp.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This article requires the latest Azure PowerShell modules. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+This article requires the latest Azure PowerShell modules. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Deploy the network infrastructure
Create the network infrastructure. This includes a virtual network, subnets and
```azurepowershell
# Create a new resource group
-New-AzResourceGroup -Name "$rg" -Location $location
+New-AzResourceGroup -Name $rg -Location $location
# Create new subnets for the firewall
$FWsub = New-AzVirtualNetworkSubnetConfig -Name AzureFirewallSubnet -AddressPrefix 10.0.1.0/26
$testVnet = New-AzVirtualNetwork -Name test-fw-vn -ResourceGroupName $rg -Locati
# Create a public IP address for the firewall
$pip = New-AzPublicIpAddress `
  -ResourceGroupName $rg `
- -Location eastus `
+ -Location $location `
  -AllocationMethod Static `
  -Sku Standard `
  -Name fw-pip
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Usage example:
> [!IMPORTANT] > The script doesn't migrate Threat Intelligence and SNAT private ranges settings. You'll need to note those settings before proceeding and migrate them manually. Otherwise, you might encounter inconsistent traffic filtering with your new upgraded firewall.
-This script requires the latest Azure PowerShell. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+This script requires the latest Azure PowerShell. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
```azurepowershell
<#
firewall Sample Create Firewall Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/scripts/sample-create-firewall-test.md
You can use `PowerShellGet` if you need to upgrade, which is built into Windows
>Other Windows versions require you to install `PowerShellGet` before you can use it.
>You can run `Get-Module -Name PowerShellGet -ListAvailable | Select-Object -Property Name,Version,Path` to determine if it is installed on your system. If the output is blank, you need to install the latest [Windows Management framework](https://www.microsoft.com/download/details.aspx?id=54616).
-For more information, see [Install Azure PowerShell](/powershell/azure/install-Az-ps)
+For more information, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell)
Any existing Azure PowerShell installation done with the Web Platform installer will conflict with the PowerShellGet installation and needs to be removed.
firewall Tutorial Hybrid Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-ps.md
If you want to use Azure portal instead to complete this tutorial, see [Tutorial
## Prerequisites
-This article requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
+This article requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Login-AzAccount` to create a connection with Azure.
There are three key requirements for this scenario to work correctly:
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Register the service principal for Azure Front Door as an app in your Azure Acti
##### Azure PowerShell
-1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine.
+1. If needed, install [Azure PowerShell](/powershell/azure/install-azure-powershell) in PowerShell on your local machine.
2. In PowerShell, run the following command:
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
# [Azure PowerShell](#tab/powershell)
-1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine.
+1. If needed, install [Azure PowerShell](/powershell/azure/install-azure-powershell) in PowerShell on your local machine.
1. In PowerShell, run the following command:
governance Manage Assignments Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/how-to/manage-assignments-ps.md
with the [Azure PowerShell Docker image](/powershell/azure/azureps-in-docker).
The Azure Blueprints module requires the following software:

- Azure PowerShell 1.5.0 or higher. If it isn't yet installed, follow
- [these instructions](/powershell/azure/install-az-ps).
+ [these instructions](/powershell/azure/install-azure-powershell).
- PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow [these instructions](/powershell/gallery/powershellget/install-powershellget).
governance How To Create Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-policy-definition.md
steps in [How to create custom machine configuration package artifacts][04]. The
package in your development environment by following the steps in [How to test machine configuration package artifacts][05].
+> [!NOTE]
+> The example code in this article references the `$contentUri` variable. If you're using the same
+> PowerShell session as the earlier tutorials for creating and testing your package artifacts, that
+> variable may already have the URI to your package.
+>
+> If you don't have the `$contentUri` variable set to the URI for your package in your PowerShell
+> session, you need to set it. This example uses a storage account's [connection string][06] and
+> the `New-AzStorageContext` cmdlet to create a storage context. Then it gets the storage blob for
+> the published package and uses that object's properties to get the content URI.
+>
+> ```azurepowershell-interactive
+> $connectionString = '<storage-account-connection-string>'
+> $context = New-AzStorageContext -ConnectionString $connectionString
+> $getParams = @{
+> Context = $context
+> Container = '<container-name>'
+> Blob = '<published-package-file-name>'
+> }
+> $blob = Get-AzStorageBlob @getParams
+> $contentUri = $blob.ICloudBlob.Uri.AbsoluteUri
+> ```
+
## Policy requirements for machine configuration

The policy definition **metadata** section must include two properties for the machine
Create a policy definition that audits using a custom configuration package, in
```powershell
$PolicyConfig = @{
    PolicyId = '_My GUID_'
- ContentUri = $contenturi
+ ContentUri = $contentUri
DisplayName = 'My audit policy' Description = 'My audit policy' Path = './policies/auditIfNotExists.json'
specified path:
```powershell
$PolicyConfig2 = @{
    PolicyId = '_My GUID_'
- ContentUri = $contenturi
+ ContentUri = $contentUri
DisplayName = 'My audit policy' Description = 'My audit policy' Path = './policies/deployIfNotExists.json'
$PolicyParameterInfo = @(
# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet
$PolicyParam = @{
    PolicyId = 'My GUID'
- ContentUri = $contenturi
+ ContentUri = $contentUri
DisplayName = 'Audit Windows Service.' Description = "Audit if a Windows Service isn't enabled on Windows machine." Path = '.\policies\auditIfNotExists.json'
Finally, you can publish the policy definitions using the `New-AzPolicyDefinitio
below commands publish your machine configuration policy to the policy center. To run the `New-AzPolicyDefinition` command, you need access to create policy definitions in Azure.
-The specific authorization requirements are documented in the [Azure Policy Overview][06] page. The
+The specific authorization requirements are documented in the [Azure Policy Overview][07] page. The
recommended built-in role is `Resource Policy Contributor`.

```azurepowershell-interactive
New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\deployIfNo
```

With the policy definition created in Azure, the last step is to assign the definition. See how to
-assign the definition with [Portal][07], [Azure CLI][08], and [Azure PowerShell][09].
+assign the definition with [Portal][08], [Azure CLI][09], and [Azure PowerShell][10].
## Policy lifecycle
updated.
## Next steps
-- [Assign your custom policy definition][07] using Azure portal.
-- Learn how to view [compliance details for machine configuration][10] policy assignments.
+- [Assign your custom policy definition][08] using Azure portal.
+- Learn how to view [compliance details for machine configuration][11] policy assignments.
<!-- Reference link definitions -->
[01]: ./overview.md
updated.
[03]: ./how-to-set-up-authoring-environment.md
[04]: ./how-to-create-package.md
[05]: ./how-to-test-package.md
-[06]: ../policy/overview.md
-[07]: ../policy/assign-policy-portal.md
-[08]: ../policy/assign-policy-azurecli.md
-[09]: ../policy/assign-policy-powershell.md
-[10]: ../policy/how-to/determine-non-compliance.md#compliance-details
+[06]: ../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account
+[07]: ../policy/overview.md
+[08]: ../policy/assign-policy-portal.md
+[09]: ../policy/assign-policy-azurecli.md
+[10]: ../policy/assign-policy-powershell.md
+[11]: ../policy/how-to/determine-non-compliance.md#compliance-details
governance How To Publish Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-publish-package.md
requirements for the storage account, but it's a good idea to host the file in a
machines. If you prefer not to make the package public, you can include a [SAS token][02] in the URL or implement a [service endpoint][03] for machines in a private network.
+To publish your configuration package to Azure blob storage, you can follow these steps, which use
+the **Az.Storage** module.
+
If you don't have a storage account, use the following example to create one.

```azurepowershell-interactive
$newAccountParams = @{
Name = '<storage-account-name>' SkuName = 'Standard_LRS' }
-New-AzStorageAccount @newAccountParams |
- New-AzStorageContainer -Name guestconfiguration -Permission Blob
+$container = New-AzStorageAccount @newAccountParams |
+ New-AzStorageContainer -Name machine-configuration -Permission Blob
```
-To publish your configuration package to Azure blob storage, you can follow these steps, which use
-the **Az.Storage** module.
+Next, get the context of the storage account you want to store the package in. If you created
+the storage account in the earlier example, you can get the context from the storage container
+object saved in the `$container` variable:
-First, obtain the context of the storage account you want to store the package in. This example
-creates a context by specifying a connection string and saves the context in the variable
-`$Context`.
+```azurepowershell-interactive
+$context = $container.Context
+```
+
+If you're using an existing storage container, you can use the container's [connection string][04]
+with the `New-AzStorageContext` cmdlet:
```azurepowershell-interactive
$connectionString = @(
    'DefaultEndPointsProtocol=https'
- 'AccountName=ContosoGeneral'
- 'AccountKey=<storage-key-for-ContosoGeneral>' # ends with '=='
+ 'AccountName=<storage-account-name>'
+ 'AccountKey=<storage-key-for-the-account>' # ends with '=='
) -join ';'
-$Context = New-AzStorageContext -ConnectionString $connectionString
+$context = New-AzStorageContext -ConnectionString $connectionString
```

Next, add the configuration package to the storage account. This example uploads the zip file
-`./MyConfig.zip` to the blob `machineConfiguration`.
+`./MyConfig.zip` to the blob container `machine-configuration`.
```azurepowershell-interactive
$setParams = @{
- Container = 'machineConfiguration'
+ Container = 'machine-configuration'
File = './MyConfig.zip'
- Context = $Context
+ Context = $context
}
-Set-AzStorageBlobContent @setParams
+$blob = Set-AzStorageBlobContent @setParams
+$contentUri = $blob.ICloudBlob.Uri.AbsoluteUri
```
-Optionally, you can add a SAS token in the URL to ensure the content package is accessed securely.
-The below example generates a blob SAS token with read access and returns the full blob URI with
-the shared access signature token. In this example, the token has a time limit of three years.
+> [!NOTE]
+> If you're running these examples in Cloud Shell but created your zip file locally, you can
+> [upload the file to Cloud Shell][05].
+
+While this next step is optional, you should add a shared access signature (SAS) token in the URL
+to ensure secure access to the package. The below example generates a blob SAS token with read
+access and returns the full blob URI with the shared access signature token. In this example, the
+token has a time limit of three years.
```azurepowershell-interactive
-$StartTime = Get-Date
-$EndTime = $startTime.AddYears(3)
+$startTime = Get-Date
+$endTime = $startTime.AddYears(3)
$tokenParams = @{
- StartTime = $StartTime
- EndTime = $EndTime
- Container = 'machineConfiguration'
+ StartTime = $startTime
+ ExpiryTime = $endTime
+ Container = 'machine-configuration'
    Blob = 'MyConfig.zip'
    Permission = 'r'
- Context = $Context
+ Context = $context
    FullUri = $true
}
-$contenturi = New-AzStorageBlobSASToken @tokenParams
+$contentUri = New-AzStorageBlobSASToken @tokenParams
```
+> [!IMPORTANT]
+> After you create the SAS token, note the returned URI. You can't retrieve the token after you
+> create it. You can only create new tokens. For more information about SAS tokens, see
+> [Grant limited access to Azure Storage resources using shared access signatures (SAS)][06].
+
## Next steps
-- [Test the package artifact][04] from your development environment.
-- Use the **GuestConfiguration** module to [create an Azure Policy definition][05] for at-scale
+- [Test the package artifact][07] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][08] for at-scale
management of your environment.
-- [Assign your custom policy definition][06] using Azure portal.
+- [Assign your custom policy definition][09] using Azure portal.
<!-- Reference link definitions --> [01]: ./overview.md [02]: ../../storage/common/storage-sas-overview.md [03]: ../../storage/common/storage-network-security.md#grant-access-from-a-virtual-network
-[04]: ./how-to-test-package.md
-[05]: ./how-to-create-policy-definition.md
-[06]: ../policy/assign-policy-portal.md
+[04]: ../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account
+[05]: /azure/cloud-shell/using-the-shell-window#upload-and-download-files
+[06]: ../../storage/common/storage-sas-overview.md
+[07]: ./how-to-test-package.md
+[08]: ./how-to-create-policy-definition.md
+[09]: ../policy/assign-policy-portal.md
governance How To Test Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-test-package.md
After running the command `Start-GuestConfigurationPackageRemediation`, you can
## Next steps
-- [Publish the package artifact][05] so it's accessible to your machines.
- Use the **GuestConfiguration** module to [create an Azure Policy definition][06] for at-scale management of your environment.
- [Assign your custom policy definition][07] using Azure portal.
governance Create Management Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-powershell.md
directory. You receive a notification when the process is complete. For more inf
account before you begin.

- Before you start, make sure that the latest version of Azure PowerShell is installed. See
- [Install Azure PowerShell module](/powershell/azure/install-az-ps) for detailed information.
+ [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information.
- Any Azure AD user in the tenant can create a management group without the management group write permission assigned to that user if
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
This guide explains how to use the Az module to create a policy assignment.
account before you begin.

- Before you start, make sure that the latest version of Azure PowerShell is installed. See
- [Install Azure PowerShell module](/powershell/azure/install-az-ps) for detailed information.
+ [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information.
- Register the Azure Policy Insights resource provider using Azure PowerShell. Registering the resource provider makes sure that your subscription works with it. To register a resource
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
az policy state list --filter "ResourceType eq 'Microsoft.Network/virtualNetwork
The Azure PowerShell module for Azure Policy is available on the PowerShell Gallery as [Az.PolicyInsights](https://www.powershellgallery.com/packages/Az.PolicyInsights). Using PowerShellGet, you can install the module using `Install-Module -Name Az.PolicyInsights` (make sure
-you have the latest [Azure PowerShell](/powershell/azure/install-az-ps) installed):
+you have the latest [Azure PowerShell](/powershell/azure/install-azure-powershell) installed):
```azurepowershell-interactive
# Install from PowerShell Gallery via PowerShellGet
governance Programmatically Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/programmatically-create.md
Before you begin, make sure that the following prerequisites are met:
tool that sends HTTP requests to Azure Resource Manager-based APIs.

1. Update your Azure PowerShell module to the latest version. See
- [Install Azure PowerShell module](/powershell/azure/install-az-ps) for detailed information. For
+ [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information. For
more information about the latest version, see [Azure PowerShell](https://github.com/Azure/azure-powershell/releases).
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
with the [PowerShell Docker image](https://hub.docker.com/_/microsoft-powershell
The Azure Resource Graph module requires the following software:

- Azure PowerShell 1.0.0 or higher. If it isn't yet installed, follow
- [these instructions](/powershell/azure/install-az-ps).
+ [these instructions](/powershell/azure/install-azure-powershell).
- PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow [these instructions](/powershell/gallery/powershellget/install-powershellget).
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
added. This module can be used with locally installed PowerShell, with
The Azure Resource Graph module requires the following software: - Azure PowerShell 8.x or higher. If it isn't yet installed, follow
- [these instructions](/powershell/azure/install-az-ps).
+ [these instructions](/powershell/azure/install-azure-powershell).
- PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow [these instructions](/powershell/gallery/powershellget/install-powershellget).
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
In addition to creating, managing, and deleting resources by using the Azure por
#### Azure PowerShell
-Azure PowerShell is a set of modules that provide cmdlets for managing Azure. You can use the cmdlets to create, manage, and remove Azure services. The cmdlets can help you can achieve consistent, repeatable, and hands-off deployments. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+Azure PowerShell is a set of modules that provide cmdlets for managing Azure. You can use the cmdlets to create, manage, and remove Azure services. The cmdlets can help you can achieve consistent, repeatable, and hands-off deployments. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
#### Azure CLI
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
description: Learn how to plan Azure HDInsight security with Enterprise Security
Previously updated : 04/14/2022 Last updated : 05/11/2023 # Use Enterprise Security Package in HDInsight
hdinsight Apache Domain Joined Run Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-hive.md
Title: Apache Hive policies in Apache Ranger - Azure HDInsight
description: Learn how to configure Apache Ranger policies for Hive in an Azure HDInsight service with Enterprise Security Package. Previously updated : 04/08/2022 Last updated : 04/11/2023 # Configure Apache Hive policies in HDInsight with Enterprise Security Package
hdinsight Hdinsight Use Oozie Domain Joined Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-use-oozie-domain-joined-clusters.md
description: Secure Apache Oozie workflows using the Azure HDInsight Enterprise
Previously updated : 04/08/2022 Last updated : 05/11/2023 # Run Apache Oozie in Azure HDInsight clusters with Enterprise Security Package
hdinsight Hbase Troubleshoot Phoenix Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-connectivity.md
Title: Apache Phoenix connectivity issues in Azure HDInsight
description: Connectivity issues between Apache HBase and Apache Phoenix in Azure HDInsight Previously updated : 04/08/2022 Last updated : 05/11/2023 # Scenario: Apache Phoenix connectivity issues in Azure HDInsight
hdinsight Hdinsight 5X Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md
Title: Open-source components and versions - Azure HDInsight 5.x
description: Learn about the open-source components and versions in Azure HDInsight 5.x Previously updated : 03/16/2023 Last updated : 05/11/2023 # HDInsight 5.x component versions
The Open-source component versions associated with HDInsight 5.1 listed in the f
| Component | HDInsight 5.1 | HDInsight 5.0 |
|---|---|---|
-| Apache Spark | 3.3 * | 3.1.3 |
-| Apache Hive | 3.1.2 * | 3.1.2 |
-| Apache Kafka | 3.2.0 ** | 2.4.1 |
-| Apache Hadoop with YARN | 3.3.4 * | 3.1.1 |
-| Apache Tez | 0.9.1 * | 0.9.1 |
-| Apache Pig | 0.17.0 * | 0.16.1 |
-| Apache Ranger | 2.1.0 * | 1.1.0 |
-| Apache HBase | 2.4.11 ** | - |
-| Apache Sqoop | 1.5.0 * | 1.5.0 |
-| Apache Oozie | 5.2.1 * | 4.3.1 |
-| Apache Zookeeper | 3.6.3 * | 3.4.6 |
-| Apache Livy | 0.7.1 * | 0.5 |
+| Apache Spark | 3.3.1 ** | 3.1.3 |
+| Apache Hive | 3.1.2 ** | 3.1.2 |
+| Apache Kafka | 3.2.0 ** | 2.4.1 |
+| Apache Hadoop with YARN | 3.3.4, ZK 3.6.3 | 3.1.1 |
+| Apache Tez | 0.9.1 ** | 0.9.1 |
+| Apache Ranger | 2.3.0 * | 1.1.0 |
+| Apache HBase | 2.4.11 ** | - |
+| Apache Oozie | 5.2.1 * | 4.3.1 |
+| Apache Zookeeper | 3.6.3 | 3.4.6 |
+| Apache Livy | 0.5 ** | 0.5 |
| Apache Ambari | 2.7.0 ** | 2.7.0 |
-| Apache Zeppelin | 0.10.0 * | 0.8.0 |
-| Apache Phoenix | 5.1.2 ** | - |
+| Apache Zeppelin | 0.10.1 ** | 0.8.0 |
+| Apache Phoenix | 5.1.2 ** | - |
\* Under development/Planned
hdinsight Hdinsight Hadoop Add Hive Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-hive-libraries.md
description: Learn how to add Apache Hive libraries (jar files) to an HDInsight
Previously updated : 04/08/2022 Last updated : 05/11/2023 # Add custom Apache Hive libraries when creating your HDInsight cluster
hdinsight Hdinsight Hadoop Create Linux Clusters Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-adf.md
If you don't have an Azure subscription, [create a free account](https://azure.m
## Prerequisites
-* The PowerShell [Az Module](/powershell/azure/install-az-ps) installed.
+* The PowerShell [Az Module](/powershell/azure/install-azure-powershell) installed.
* An Azure Active Directory service principal. Once you've created the service principal, be sure to retrieve the **application ID** and **authentication key** using the instructions in the linked article. You need these values later in this tutorial. Also, make sure the service principal is a member of the *Contributor* role of the subscription or the resource group in which the cluster is created. For instructions to retrieve the required values and assign the right roles, see [Create an Azure Active Directory service principal](../active-directory/develop/howto-create-service-principal-portal.md).
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-[Azure PowerShell](/powershell/azure/install-Az-ps) Az module.
+[Azure PowerShell](/powershell/azure/install-azure-powershell) Az module.
## Create cluster
hdinsight Hdinsight Hadoop Windows Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-windows-tools.md
Examples of tasks you can do with PowerShell:
* [Run Apache Hive queries using PowerShell](hadoop/apache-hadoop-use-hive-powershell.md). * [Manage clusters with PowerShell](hdinsight-administer-use-powershell.md).
-Follow steps to [install and configure Azure PowerShell](/powershell/azure/install-az-ps) to get the latest version.
+Follow steps to [install and configure Azure PowerShell](/powershell/azure/install-azure-powershell) to get the latest version.
## Utilities you can run in a browser
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 02/28/2023 Last updated : 05/11/2023 # Archived release notes
Last updated 02/28/2023
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases).
+## Release date: February 28, 2023
+
+This release applies to HDInsight 4.0, 5.0, and 5.1. This release will be available to all regions over several days. This release is applicable for image number **2302250400**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+For workload specific versions, see
+
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+
+> [!IMPORTANT]
+> Microsoft has issued [CVE-2023-23408](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-23408), which is fixed in the current release. Customers are advised to upgrade their clusters to the latest image.
+
+![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png)
+
+**HDInsight 5.1**
+
+We have started rolling out a new version of HDInsight 5.1. All new open-source releases are added as incremental releases on HDInsight 5.1.
+
+For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-versioning.md)
+
+![Icon showing update with text.](media/hdinsight-release-notes/new-icon-for-updated.png)
+
+**Kafka 3.2.0 Upgrade (Preview)**
+
+* Kafka 3.2.0 includes several significant new features/improvements.
+ * Upgraded Zookeeper to 3.6.3
+ * Kafka Streams support
+ * Stronger delivery guarantees for the Kafka producer enabled by default.
+ * log4j 1.x replaced with reload4j.
+ * Send a hint to the partition leader to recover the partition.
+ * `JoinGroupRequest` and `LeaveGroupRequest` have a reason attached.
+ * Added broker count metrics.
+ * Mirror Maker2 improvements.
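The "stronger delivery guarantees" item above refers to Kafka's idempotent producer becoming the default. As an illustrative sketch only (property names from upstream Apache Kafka, not from the HDInsight release notes), the new defaults are equivalent to setting the following producer properties explicitly:

```properties
# Kafka 3.2 producer defaults (previously opt-in), shown explicitly for clarity.
# Idempotence prevents duplicate records on retry; acks=all waits for all in-sync replicas.
enable.idempotence=true
acks=all
```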
+
+**HBase 2.4.11 Upgrade (Preview)**
+* This version has new features, such as the addition of new caching mechanism types for block cache, the ability to alter the `hbase:meta` table, and the ability to view the `hbase:meta` table from the HBase Web UI.
+
+**Phoenix 5.1.2 Upgrade (Preview)**
+ * Phoenix version upgraded to 5.1.2 in this release. This upgrade includes the Phoenix Query Server. The Phoenix Query Server proxies the standard Phoenix JDBC driver and provides a backwards-compatible wire protocol to invoke that JDBC driver.
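As an illustration of the wire protocol mentioned above, a thin-client JDBC URL for the Phoenix Query Server typically takes the following shape (host is a placeholder; 8765 is the query server's default port):

```
jdbc:phoenix:thin:url=http://<queryserver-host>:8765;serialization=PROTOBUF
```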
+
+**Ambari CVEs**
+ * Multiple Ambari CVEs are fixed.
+
+> [!NOTE]
+> ESP isn't supported for Kafka and HBase in this release.
+>
+
+![Icon showing end of support with text.](media/hdinsight-release-notes/new-icon-for-end-of-support.png)
+
+End of support for Azure HDInsight clusters on Spark 2.4 is February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight).
+
+## Coming soon
+
+* Autoscale
+ * Autoscale with improved latency and several improvements
+* Cluster name change limitation
+ * The max length of cluster name will be changed from 59 to 45 characters in Public, Mooncake, and Fairfax.
+* Cluster permissions for secure storage
+ * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
+* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
+ * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves the cluster security posture. Customers need to plan for the updates.
+* Open-source upgrades
+ * Apache Spark 3.3.0 and Hadoop 3.3.4 are under development on HDInsight 5.1 and will include several significant new features, performance and other improvements.
+
+ > [!NOTE]
 > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
## Release date: December 12, 2022

This release applies to HDInsight 4.0 and 5.0. HDInsight release is made available to all regions over several days.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 02/28/2023 Last updated : 05/11/2023 # Azure HDInsight release notes
This article provides information about the **most recent** Azure HDInsight rele
## Summary Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure.
-Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-repo.md) for up-to-date information on HDInsight and all HDInsight versions. 
+Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-repo.md) for up-to-date information on HDInsight and all HDInsight versions.
To subscribe, select the "watch" button in the banner and watch for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
-## Release date: February 28, 2023
+## Release date: May 08, 2023
-This release applies to HDInsight 4.0. and 5.0, 5.1. HDInsight release will be available to all regions over several days. This release is applicable for image number **2302250400**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+This release applies to HDInsight 4.x and 5.x. HDInsight release will be available to all regions over several days. This release is applicable for image number **2304202354**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
For workload specific versions, see
* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
-> [!IMPORTANT]
-> Microsoft has issued [CVE-2023-23408](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-23408), which is fixed on the current release and customers are advised to upgrade their clusters to latest image. 
-
-![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png)
+![Icon showing update with text.](media/hdinsight-release-notes/new-icon-for-updated.png)
-**HDInsight 5.1**
+1. Azure HDInsight 5.1 updated with
-We have started rolling out a new version of HDInsight 5.1. All new open-source releases added as incremental releases on HDInsight 5.1.
+ 1. Apache HBase 2.4.11
+ 1. Apache Phoenix 5.1.2
+ 1. Apache Hive 3.1.2
+ 1. Apache Spark 3.3.1
+ 1. Apache Tez 0.9.1
+ 1. Apache Zeppelin 0.10.1
+ 1. Apache Livy 0.5
+ 1. Apache Kafka 3.2.0
-For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-versioning.md)
+ > [!NOTE]
+ > * All components are integrated with Hadoop 3.3.4 & ZK 3.6.3
+ > * All above upgraded components are now available in non-ESP clusters for public preview.
-![Icon showing update with text.](media/hdinsight-release-notes/new-icon-for-updated.png)
+![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png)
-**Kafka 3.2.0 Upgrade (Preview)**
+1. **Enhanced Autoscale for HDInsight**
-* Kafka 3.2.0 includes several significant new features/improvements.
- * Upgraded Zookeeper to 3.6.3
- * Kafka Streams support
- * Stronger delivery guarantees for the Kafka producer enabled by default.
- * log4j 1.x replaced with reload4j.
- * Send a hint to the partition leader to recover the partition.
- * `JoinGroupRequest` and `LeaveGroupRequest` have a reason attached.
- * Added Broker count metrics8.
- * Mirror Maker2 improvements.
+ Azure HDInsight has enhanced its Autoscale capabilities with improved latency and several other improvements. Learn more about [the new features and capabilities](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271) and how to get started.
+
+1. **Azure HDInsight ESP for Apache Kafka 2.4.1 is now Generally Available**.
-**HBase 2.4.11 Upgrade (Preview)**
-* This version has new features such as the addition of new caching mechanism types for block cache, the ability to alter `hbase:meta table` and view the `hbase:meta` table from the HBase WEB UI.
+ Azure HDInsight ESP for Apache Kafka 2.4.1 has been in public preview since April 2022. After notable improvements in CVE fixes and stability, Azure HDInsight ESP Kafka 2.4.1 is now generally available and ready for production workloads. Learn the details about [how to configure](./domain-joined/apache-domain-joined-run-kafka.md) and [migrate](./kafk).
-**Phoenix 5.1.2 Upgrade (Preview)**
- * Phoenix version upgraded to 5.1.2 in this release. This upgrade includes the Phoenix Query Server. The Phoenix Query Server proxies the standard Phoenix JDBC driver and provides a backwards-compatible wire protocol to invoke that JDBC driver.
-
-**Ambari CVEs**
- * Multiple Ambari CVEs are fixed.
+1. **Quota Management for HDInsight**
-> [!NOTE]
-> ESP isn't supported for Kafka and HBase in this release.
->
+ HDInsight currently allocates quota to customer subscriptions at a regional level. The cores allocated to customers are generic and not classified at a VM family level (for example, Dv2, Ev3, Eav4).
+
+ HDInsight introduced an improved view, which provides a detailed classification of quotas at the VM family level. This feature allows customers to view current and remaining quotas for a region at the VM family level. With the enhanced view, customers have richer visibility for planning quotas and a better user experience.
+
+ This feature is currently available on HDInsight 4.x and 5.x for the East US EUAP region. Other regions will follow later.
-![Icon showing end of support with text.](media/hdinsight-release-notes/new-icon-for-end-of-support.png)
+ For more information, see [Cluster capacity planning in Azure HDInsight](./hdinsight-capacity-planning.md#view-quota-management-for-hdinsight).
+
+![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png)
-End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight)
+* Poland Central
## Coming soon
-* Autoscale
- * Autoscale with improved latency and several improvements
-* Cluster name change limitation
- * The max length of cluster name will be changed to 45 from 59 in Public, Mooncake and Fairfax.
+* The max length of cluster name will be changed from 59 to 45 characters, to improve the security posture of clusters.
* Cluster permissions for secure storage * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
-* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
- * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change to improve cluster security posture. Customers need to plan for the updates.
-* Open-source upgrades
- * Apache Spark 3.3.0 and Hadoop 3.3.4 are under development on HDInsight 5.1 and will include several significant new features, performance and other improvements.
+* In-line quota update.
+ * Request quota increases directly from the My Quota page through a direct API call, which is faster. If the API call fails, customers need to create a new support request for the quota increase.
+* HDInsight Cluster Creation with Custom VNets.
+ * To improve the overall security posture of HDInsight clusters, clusters that use custom VNets need to ensure that the user has permission for the `Microsoft.Network/virtualNetworks/subnets/join/action` operation to perform create operations. Customers need to plan accordingly, because this is a mandatory check to avoid cluster creation failures.
+* Basic and Standard A-series VMs Retirement.
+ * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024.
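The custom-VNet requirement above comes down to a single RBAC action. A hedged sketch of an Azure custom role definition that grants it (the role name, description, and scope are placeholders for illustration, not part of the announcement):

```json
{
  "Name": "HDInsight Subnet Joiner (example)",
  "IsCustom": true,
  "Description": "Allows joining resources to subnets for HDInsight cluster creation.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```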
+
+If you have any more questions or concerns, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+Ask us more about HDInsight on [Azure HDInsight - Microsoft Q&A](https://learn.microsoft.com/answers/tags/168/azure-hdinsight).
+
+You're welcome to add proposals, ideas, and other topics and vote for them on [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight), and follow us for more updates on [Twitter](https://twitter.com/AzureHDInsight).
> [!NOTE]
> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 01/16/2023 Last updated : 05/11/2023 # Create an HDInsight cluster that uses Availability Zones
HDInsight clusters can currently be created using availability zones in the foll
- Japan East - Korea Central - North Europe - Southeast Asia - South Central US - UK South
hdinsight Interactive Query Troubleshoot Inaccessible Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-inaccessible-hive-view.md
Title: Apache Hive connections to Apache Zookeeper - Azure HDInsight
description: Apache Hive View inaccessible due to Apache Zookeeper issues in Azure HDInsight Previously updated : 04/07/2022 Last updated : 05/11/2023 # Scenario: Apache Hive fails to establish a connection to Apache Zookeeper in Azure HDInsight
hdinsight Interactive Query Troubleshoot Zookeeperhiveclientexception Hiveserver Configs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-zookeeperhiveclientexception-hiveserver-configs.md
Title: Apache Hive Zeppelin Interpreter error - Azure HDInsight
description: The Apache Zeppelin Hive JDBC Interpreter is pointing to the wrong URL in Azure HDInsight Previously updated : 04/08/2022 Last updated : 05/11/2023 # Scenario: Apache Hive Zeppelin Interpreter gives a Zookeeper error in Azure HDInsight
hdinsight Apache Kafka Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-scalability.md
description: Learn how to configure managed disks for Apache Kafka cluster on Az
Previously updated : 04/08/2022 Last updated : 05/11/2023 # Configure storage and scalability for Apache Kafka on HDInsight
hdinsight Apache Spark Jupyter Spark Sql Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell.md
If you're using multiple clusters together, you'll want to create a virtual netw
## Prerequisite - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- The [PowerShell Az module](/powershell/azure/install-az-ps).
+- The [PowerShell Az module](/powershell/azure/install-azure-powershell).
## Create an Apache Spark cluster in HDInsight
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
An Azure account with an active subscription. [Create one for free](https://azur
# [PowerShell](#tab/PowerShell) * An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-azure-powershell).
# [CLI](#tab/CLI)
healthcare-apis Fhir Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-bicep.md
In this article, you'll learn how to deploy FHIR service within the Azure Health
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). * If you want to run the code locally:
- * [Azure PowerShell](/powershell/azure/install-az-ps).
+ * [Azure PowerShell](/powershell/azure/install-azure-powershell).
# [CLI](#tab/CLI)
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
An [ARM template](../../azure-resource-manager/templates/overview.md) is a JSON
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). * If you want to run the code locally:
- * [Azure PowerShell](/powershell/azure/install-az-ps).
+ * [Azure PowerShell](/powershell/azure/install-azure-powershell).
# [CLI](#tab/CLI)
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Create a new `POST` request:
- **client_secret**: `{{clientsecret}}` - **resource**: `{{fhirurl}}`
+ Note: In scenarios where the FHIR service audience parameter isn't mapped to the FHIR service endpoint URL, the resource parameter value should be mapped to the **Audience** value under the FHIR service **Authentication** blade.
+
3. Select the **Test** tab and enter in the text section: `pm.environment.set("bearerToken", pm.response.json().access_token);` To make the value available to the collection, use the pm.collectionVariables.set method. For more information on the set method and its scope level, see [Using variables in scripts](https://learning.postman.com/docs/sending-requests/variables/#defining-variables-in-scripts). 4. Select **Save** to save the settings. 5. Select **Send**. You should see a response with the Azure AD access token, which is saved to the variable `bearerToken` automatically. You can then use it in all FHIR service API requests.
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md
In this article, you'll learn how to access Azure Health Data Services with cURL
### PowerShell * An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
-* If you want to run the code locally, install [PowerShell](/powershell/module/powershellget/) and [Azure Az PowerShell](/powershell/azure/install-az-ps).
+* If you want to run the code locally, install [PowerShell](/powershell/module/powershellget/) and [Azure Az PowerShell](/powershell/azure/install-azure-powershell).
* Optionally, you can run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Make a link to the REST Client doc](using-rest-client.md). * Download and install [cURL](https://curl.se/download.html).
healthcare-apis Deploy Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md
To begin your deployment and complete the quickstart, you must have the followin
* The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
-* [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
+* [Azure PowerShell](/powershell/azure/install-azure-powershell) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
* For Azure PowerShell, install the [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart. When you have these prerequisites, you're ready to deploy the Bicep file.
healthcare-apis Deploy Json Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md
To begin your deployment and complete the quickstart, you must have the followin
* The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
-* [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
+* [Azure PowerShell](/powershell/azure/install-azure-powershell) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
When you have these prerequisites, you're ready to deploy the ARM template.
hpc-cache Hpc Cache Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-create.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps). If you choose to use Cloud Shell, see
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell). If you choose to use Cloud Shell, see
[Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more information.
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
You need the following prerequisites to complete the steps in this guide:
- A GitHub account [GitHub](https://github.com/). - An Azure DevOps organization. To learn more, see [Create an Azure DevOps organization](/azure/devops/organizations/accounts/create-organization). - PowerShell 7 for Windows, Mac or Linux. [Get PowerShell](/powershell/scripting/install/installing-powershell).-- Azure Az PowerShell module installed in your PowerShell 7 environment. To learn more, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+- Azure Az PowerShell module installed in your PowerShell 7 environment. To learn more, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
- Visual Studio Code or other tool to edit PowerShell and JSON files. [Get Visual Studio Code](https://code.visualstudio.com/Download). - Git client. Download the latest version from [Git - Downloads (git-scm.com)](https://git-scm.com/downloads).
iot-develop Quickstart Devkit Stm B U585i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md
ms.devlang: c Previously updated : 03/28/2023 Last updated : 05/10/2023
You complete the following tasks:
* Use the Azure Cloud Shell, an interactive shell that runs CLI commands in your browser. This option is recommended because you don't need to install anything. If you're using Cloud Shell for the first time, sign in to the [Azure portal](https://portal.azure.com). Follow the steps in [Cloud Shell quickstart](../cloud-shell/quickstart.md) to **Start Cloud Shell** and **Select the Bash environment**. * Optionally, run Azure CLI on your local machine. If Azure CLI is already installed, run `az upgrade` to upgrade the CLI and extensions to the current version. To install Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases): Cross-platform utility to monitor and manage Azure IoT
* Hardware * The [B-U585I-IOT02A](https://www.st.com/en/evaluation-tools/b-u585i-iot02a.html) (STM DevKit)
To install the tools:
cmake --version ```
-## Create the cloud components
-
-### Create an IoT hub
-
-You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
-
-To create an IoT hub:
-
-1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
- - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
- - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
-
-1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
-
- ```azurecli-interactive
- az extension add --upgrade --name azure-iot
- ```
-
-1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
-
- > [!NOTE]
- > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
-
- ```azurecli
- az group create --name MyResourceGroup --location centralus
- ```
-
-1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
-
- *YourIotHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
-
- The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
-
- ```azurecli
- az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
- ```
-
-1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
-
- `{Your IoT hub name}.azure-devices.net`
-
-### Configure IoT Explorer
-
-In the rest of this quickstart, you use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
-
-To add a connection to your IoT hub:
-
-1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
-
- ```azurecli
- az iot hub connection-string show --hub-name {YourIoTHubName}
- ```
-
-1. Copy the connection string without the surrounding quotation characters.
-1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
-1. Select **+ Add connection**.
-1. Paste the connection string into the **Connection string** box.
-1. Select **Save**.
-
- :::image type="content" source="media/quickstart-devkit-stm-b-u585i-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
-
-If the connection succeeds, IoT Explorer switches to the **Devices** view.
-
-To add the public model repository:
-
-1. In IoT Explorer, select **Home** to return to the home view.
-1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
-1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
-
- :::image type="content" source="media/quickstart-devkit-stm-b-u585i-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
-
-1. Select **Save**.
-
-### Register a device
-
-In this section, you create a new device instance and register it with the IoT hub you created. You use the connection information for the newly registered device to securely connect your physical device in a later section.
-
-To register a device:
-
-1. From the home view in IoT Explorer, select **IoT hubs**.
-1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
-1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
-1. Select **Create**.
-
- :::image type="content" source="media/quickstart-devkit-stm-b-u585i-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
-
-1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
-
-Before continuing to the next section, save each of the following values retrieved from earlier steps, to a safe location. You use these values in the next section to configure your device.
-
-* `hostName`
-* `deviceId`
-* `primaryKey`
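For reference, the removed CLI walkthrough above reduces to four commands (copied from the removed steps; `MyResourceGroup`, *centralus*, and `{YourIoTHubName}` are the original placeholders):

```azurecli
az extension add --upgrade --name azure-iot
az group create --name MyResourceGroup --location centralus
az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
az iot hub connection-string show --hub-name {YourIoTHubName}
```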
## Prepare the device
iot-hub How To Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-powershell.md
The procedures that are described in the article use the following resources:
### Azure PowerShell
-This article uses Azure PowerShell to work with IoT Hub and other Azure services. To use Azure PowerShell locally, install the [Azure PowerShell module](/powershell/azure/install-az-ps) on your computer. Alternatively, to use Azure PowerShell in a web browser, enable [Azure Cloud Shell](../cloud-shell/overview.md).
+This article uses Azure PowerShell to work with IoT Hub and other Azure services. To use Azure PowerShell locally, install the [Azure PowerShell module](/powershell/azure/install-azure-powershell) on your computer. Alternatively, to use Azure PowerShell in a web browser, enable [Azure Cloud Shell](../cloud-shell/overview.md).
### IoT hub
iot-hub Iot Hub Configure File Upload Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload-powershell.md
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.m
* If you prefer, [install](/powershell/scripting/install/installing-powershell) PowerShell locally.
- * [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps). (The module is installed by default in the Azure Cloud Shell PowerShell environment.)
+ * [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell). (The module is installed by default in the Azure Cloud Shell PowerShell environment.)
* Sign in to PowerShell by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command. To finish the authentication process, follow the steps displayed in your terminal. For additional sign-in options, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to
## Prerequisites
-* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](../cloud-shell/overview.md)
+* [Azure PowerShell module](/powershell/azure/install-azure-powershell) or [Azure Cloud Shell](../cloud-shell/overview.md)
* [Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) or [cURL](https://curl.se/)
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md
This article shows you how to use an Azure Resource Manager template to create a
## Prerequisites
-[Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](../cloud-shell/overview.md)
+[Azure PowerShell module](/powershell/azure/install-azure-powershell) or [Azure Cloud Shell](../cloud-shell/overview.md)
Azure Cloud Shell is useful if you don't want to install the PowerShell module locally, because Cloud Shell runs in a browser.
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
GlobalSignCA is now in the certificate authority list.
You can use Azure PowerShell to create and manage Azure resources by using commands or scripts. Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through the Azure portal in a browser.
-If you choose to install and use PowerShell locally, you need Azure AZ PowerShell module 1.0.0 or later to complete the procedures here. Type `$PSVersionTable.PSVersion` to determine the version. If you need to upgrade, see [Install Azure AZ PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure:
+If you choose to install and use PowerShell locally, you need the Azure Az PowerShell module 1.0.0 or later to complete the procedures here. Type `$PSVersionTable.PSVersion` to determine the version. If you need to upgrade, see [Install Azure Az PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure:
```azurepowershell-interactive Connect-AzAccount
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive Connect-AzAccount
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-python.md
Get started with the Azure Key Vault certificate client library for Python. Foll
- [Python 3.7+](/azure/developer/python/configure-local-development-environment) - [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) in a Linux terminal window.
## Set up your local environment
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md
Key vaults define security boundaries for stored secrets. Grouping secrets into
Encryption keys and secrets like certificates, connection strings, and passwords are sensitive and business critical. You need to secure access to your key vaults by allowing only authorized applications and users. [Azure Key Vault security features](security-features.md) provides an overview of the Key Vault access model. It explains authentication and authorization. It also describes how to secure access to your key vaults. Suggestions for controlling access to your vault are as follows:-- Lock down access to your subscription, resource group, and key vaults (role-based access control (RBAC)).
+- Lock down access to your subscription, resource group, and key vaults using role-based access control (RBAC).
- Create access policies for every vault. - Use the principle of least privilege access to grant access. - Turn on firewall and [virtual network service endpoints](overview-vnet-service-endpoints.md).
key-vault How To Azure Key Vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/how-to-azure-key-vault-network-security.md
Here's how to configure Key Vault firewalls and virtual networks by using the Az
Here's how to configure Key Vault firewalls and virtual networks by using PowerShell:
-1. Install the latest [Azure PowerShell](/powershell/azure/install-az-ps), and [sign in](/powershell/azure/authenticate-azureps).
+1. Install the latest [Azure PowerShell](/powershell/azure/install-azure-powershell), and [sign in](/powershell/azure/authenticate-azureps).
2. List available virtual network rules. If you have not set any rules for this key vault, the list will be empty. ```powershell
key-vault Key Vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/key-vault-recovery.md
For more information about Key Vault, see
## Prerequisites * An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [Azure PowerShell](/powershell/azure/install-az-ps).
+* [Azure PowerShell](/powershell/azure/install-azure-powershell).
* [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md) [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md) * The user will need the following permissions (at subscription level) to perform operations on soft-deleted vaults:
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-In this quickstart, you create a key vault with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+In this quickstart, you create a key vault with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive Connect-AzAccount
key-vault Rbac Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-access-policy.md
+
+ Title: Azure role-based access control (Azure RBAC) vs. access policies
+description: A comparison of Azure role-based access control (Azure RBAC) and access policies
+++++ Last updated : 05/08/2023+++
+# Azure role-based access control (Azure RBAC) vs. access policies
+
+Azure Key Vault offers two authorization systems: **Azure role-based access control (Azure RBAC)**, which operates on the management plane, and the **access policy model**, which operates on both the management plane and the data plane.
+
+Azure RBAC is built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) and provides fine-grained access management of Azure resources. With Azure RBAC you control access to resources by creating role assignments, which consist of three elements: a security principal, a role definition (predefined set of permissions), and a scope (group of resources or individual resource). For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+The access policy model, on the other hand, is an existing authorization system built in Key Vault to provide access to keys, secrets, and certificates. You can control access by assigning individual permissions to security principals (user, group, service principal, managed identity) at Key Vault scope.
+
+## Data plane access control recommendation
+
+Azure RBAC is the recommended authorization system for the Azure Key Vault data plane.
+
+Azure RBAC offers several advantages over access policies:
+- A unified access control model for Azure resources -- it uses the same API across Azure services.
+- Centralized access management for administrators - manage all Azure resources in one view
+- Integration with [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md) for time-based access control
+- Deny assignments - ability to exclude security principals at a particular scope. For information, see [Understand Azure Deny Assignments](../../role-based-access-control/deny-assignments.md)
+- More stringent permissions -- users and service principals require Owner or User Access Administrator roles.
+
+However, it has three disadvantages when compared to access policies:
+- Latency -- it can take several minutes for role assignments to be applied. Vault access policies are assigned instantly.
+- Limited number of role assignments -- Azure RBAC allows only 2000 role assignments across all services per subscription, versus 1024 access policies per Key Vault.
+- Less flexibility -- previously, automation and users could create a Key Vault and grant other service principals or managed identities access via access policies. However, this is no longer possible without the necessary permissions, which may not be suitable for some automated deployments (e.g., Azure Pipelines, Bicep, Terraform).
+
+To transition your Key Vault data plane access control from access policies to RBAC, see [Migrate from vault access policy to an Azure role-based access control permission model](rbac-migration.md).
+
+## Learn more
+
+- [Azure RBAC Overview](../../role-based-access-control/overview.md)
+- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Migrating from an access policy to RBAC](../../role-based-access-control/tutorial-custom-role-cli.md)
+- [Azure Key Vault best practices](best-practices.md)
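The three elements of a role assignment described above (security principal, role definition, scope) map directly onto an Azure CLI call; the names below are hypothetical, and `Key Vault Secrets User` is one of the built-in Key Vault data plane roles:

```azurecli
# Hypothetical principal, vault, and subscription values for illustration
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Key Vault Secrets User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.KeyVault/vaults/myvault"
```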
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> [!NOTE] > Azure App Service certificate configuration through Azure Portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for the 'Microsoft Azure App Service' global identity. - Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
-Azure RBAC allows users to manage Key, Secrets, and Certificates permissions. It provides one place to manage all permissions across all key vaults.
+Azure RBAC allows users to manage Key, Secrets, and Certificates permissions. It provides one place to manage all permissions across all key vaults.
The Azure RBAC model allows users to set permissions on different scope levels: management group, subscription, resource group, or individual resources. Azure RBAC for key vault also allows users to have separate permissions on individual keys, secrets, and certificates.
Our recommendation is to use a vault per application per environment
Individual keys, secrets, and certificates permissions should be used only for specific scenarios: -- Sharing individual secrets between multiple applications, for example, one application needs to access data from the other application
+- Sharing individual secrets between multiple applications, for example, one application needs to access data from the other application
More about Azure Key Vault management guidelines, see:
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-migration.md
# Migrate from vault access policy to an Azure role-based access control permission model
-The vault access policy model is an existing authorization system built in Key Vault to provide access to keys, secrets, and certificates. You can control access by assigning individual permissions to security principals (user, group, service principal, managed identity) at Key Vault scope.
+Azure Key Vault offers two authorization systems: Azure role-based access control (Azure RBAC), and an access policy model. Azure RBAC is the default and recommended authorization system for Azure Key Vault. For a comparison of the two methods of authorization, see [Azure role-based access control (Azure RBAC) vs. access policies](rbac-access-policy.md).
-Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. With Azure RBAC you control access to resources by creating role assignments, which consist of three elements: a security principal, a role definition (predefined set of permissions), and a scope (group of resources or individual resource). For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-Before migrating to Azure RBAC, it's important to understand its benefits and limitations.
-
-Azure RBAC key benefits over vault access policies:
-- Provides a unified access control model for Azure resources by using the same API across Azure services-- Centralized access management for administrators - manage all Azure resources in one view-- Integrated with [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md) for time-based access control-- Deny assignments - ability to exclude security principals at a particular scope. For information, see [Understand Azure Deny Assignments](../../role-based-access-control/deny-assignments.md)-
-Azure RBAC disadvantages:
-- Latency for role assignments - it can take several minutes for role assignments to be applied. Vault access policies are assigned instantly.-- Limited number of role assignments - Azure RBAC allows only 2000 roles assignments across all services per subscription versus 1024 access policies per Key Vault
+This article provides the information necessary to migrate a key vault from access policy authorization to an Azure RBAC model.
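One common way to perform the switch is to flip the vault's permission model property; the following is a minimal sketch, assuming the `Update-AzKeyVault` cmdlet's `-EnableRbacAuthorization` parameter and hypothetical resource names:

```azurepowershell
# Hypothetical vault and resource group names; moves the vault to the Azure RBAC permission model
Update-AzKeyVault -ResourceGroupName myRG -VaultName myvault -EnableRbacAuthorization $true
```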
## Access policies to Azure roles mapping
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
A security principal is an object that represents a user, group, service, or app
For more information about authentication to Key Vault, see [Authenticate to Azure Key Vault](authentication.md).
-## Conditional access
+## Conditional access
Key Vault provides support for Azure Active Directory Conditional Access policies. By using Conditional Access policies, you can apply the right access controls to Key Vault when needed to keep your organization secure and stay out of your user's way when not needed.
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
If you don't have an Azure subscription, create a [free account](https://azure.m
For Windows, Mac, and Linux: * [Git](https://git-scm.com/downloads) * The [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core/3.1).
- * [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+ * [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## Create resources and assign permissions
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive Connect-AzAccount
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-python.md
Get started with the Azure Key Vault client library for Python. Follow these ste
- [Python 3.7+](/azure/developer/python/configure-local-development-environment) - [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) in a Linux terminal window.
## Set up your local environment
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
In this quickstart, you will create and activate an Azure Key Vault Managed HSM (Hardware Security Module) with PowerShell. Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. For more information on Managed HSM, you may review the [Overview](overview.md).
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell-interactive Connect-AzAccount
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/recovery.md
For more information, see [Managed HSM overview](overview.md).
## Prerequisites * An Azure subscription. [Create one for free](https://azure.microsoft.com/free/dotnet).
-* The [PowerShell module](/powershell/azure/install-az-ps).
+* The [PowerShell module](/powershell/azure/install-azure-powershell).
* Azure CLI 2.25.0 or later. Run `az --version` to determine which version you have. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). * A managed HSM. You can create one by using the [Azure CLI](./quick-create-cli.md) or [Azure PowerShell](./quick-create-powershell.md). * Users will need the following permissions to perform operations on soft-deleted HSMs or keys:
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
Key Vault is a Microsoft application that's pre-registered in all Azure AD tenan
To complete this guide, you must first do the following: -- [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+- [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
- [Create a key vault](quick-create-powershell.md) - [Create an Azure storage account](../../storage/common/storage-account-create.md?tabs=azure-powershell). The storage account name must use only lowercase letters and numbers. The length of the name must be between 3 and 24 characters.
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
For more information about Key Vault and secrets, see:
* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet) * [.NET 6 SDK or later](https://dotnet.microsoft.com/download)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
* A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md) This quickstart is using `dotnet` and Azure CLI or Azure PowerShell.
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 5.0.0 or later. Type `Get-Module az -ListAvailable` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 5.0.0 or later. Type `Get-Module az -ListAvailable` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
```azurepowershell
Connect-AzAccount
```
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
Get started with the Azure Key Vault secret client library for Python. Follow th
- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Python 3.7+](/azure/developer/python/configure-local-development-environment).-- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps).
+- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell).
-This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) in a Linux terminal window.
## Set up your local environment
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
Title: Azure Lighthouse architecture description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. Previously updated : 08/26/2022 Last updated : 05/10/2023
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 06/09/2022 Last updated : 05/10/2023 # Azure Lighthouse in enterprise scenarios
-A common scenario for [Azure Lighthouse](../overview.md) is when a service provider manages resources in Azure Active Directory (Azure AD) tenants that belong to customers. The capabilities of Azure Lighthouse can also be used to simplify cross-tenant management within an enterprise that uses multiple Azure AD tenants.
+A common scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Azure Active Directory (Azure AD) tenants. The capabilities of Azure Lighthouse can also be used to simplify cross-tenant management within an enterprise that uses multiple Azure AD tenants.
## Single vs. multiple tenants
lighthouse Isv Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/isv-scenarios.md
Title: Azure Lighthouse in ISV scenarios description: The capabilities of Azure Lighthouse can be used by ISVs for more flexibility with customer offerings. Previously updated : 06/09/2022 Last updated : 05/10/2023 # Azure Lighthouse in ISV scenarios
-A common scenario for [Azure Lighthouse](../overview.md) is when a service provider manages resources in its customers' Azure Active Directory (Azure AD) tenants. The capabilities of Azure Lighthouse can also be used by Independent Software Vendors (ISVs) using SaaS-based offerings with their customers. Azure Lighthouse can be especially useful for ISVs who are offering managed services or support that require access to the subscription scope.
+A typical scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Azure Active Directory (Azure AD) tenants. However, the capabilities of Azure Lighthouse can also be used by Independent Software Vendors (ISVs) using SaaS-based offerings with their customers. Azure Lighthouse can be especially useful for ISVs who are offering managed services or support that require access to the subscription scope.
## Managed Service offers in Azure Marketplace
For more information, see [Azure Lighthouse and Azure managed applications](mana
## SaaS-based multi-tenant offerings
-An additional scenario is where the ISV hosts the resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access these resources. The customer can then log in to their own tenant and access these resources as needed. The ISV maintains their IP in their own tenant, and can use their own support plan to raise tickets related to the solution hosted in their tenant, rather than the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks.
+An additional scenario is where the ISV hosts resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access those specific resources. Once this access is granted, the customer can log in to their own tenant and access the resources as needed. The ISV maintains their IP in their own tenant, and can use their own support plan to raise tickets related to the solution hosted in their tenant, rather than the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks.
-In this scenario, users in the customer’s tenant are essentially granted access as a "managing tenant", even though the customer is not managing the ISV's resources. Because they are accessing the ISV's tenant directly, it’s important to grant only the minimum permissions necessary, so that customers cannot inadvertently make changes to the solution or other ISV resources.
+In this scenario, users in the customer’s tenant are essentially granted access as a "managing tenant", even though the customer is not managing the ISV's resources. Because they are accessing the ISV's tenant directly, it’s important to grant only the minimum permissions necessary, so that customers can't inadvertently make changes to the solution or other ISV resources.
-To enable this architecture, the ISV needs to obtain the object ID for a user group in the customer’s Azure AD tenant, along with their tenant ID. The ISV then builds an ARM template granting this user group the appropriate permissions, and [deploys it on the ISV's subscription](../how-to/onboard-customer.md) that contains the resources which the customer will access.
+To enable this architecture, the ISV needs to obtain the object ID for a user group in the customer's Azure AD tenant, along with their tenant ID. The ISV then builds an ARM template granting this user group the appropriate permissions, and [deploys it on the ISV's subscription](../how-to/onboard-customer.md) that contains the resources that the customer will access.
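The two onboarding inputs described above can be gathered with Azure PowerShell. This is a minimal sketch; the group display name is hypothetical, and the commands assume a user is signed in to the customer's Azure AD tenant:

```azurepowershell
# Object ID of the customer user group that will access the ISV's resources
# ("Customer Access Group" is a hypothetical display name)
$groupId = (Get-AzADGroup -DisplayName "Customer Access Group").Id

# Tenant ID of the customer's Azure AD tenant, taken from the current sign-in context
$tenantId = (Get-AzContext).Tenant.Id

$groupId
$tenantId
```

The ISV can then plug these values into the ARM template before deploying it on their own subscription.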
## Next steps
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md
Title: Azure Lighthouse and Azure managed applications description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 06/09/2022 Last updated : 05/10/2023 # Azure Lighthouse and Azure managed applications
-Both Azure managed applications and Azure Lighthouse work by enabling a service provider to access resources that reside in the customer's tenant. It can be helpful to understand the differences in the way that they work and the scenarios that they help to enable, and how they can be used together.
+Both Azure managed applications and Azure Lighthouse work by enabling a service provider to access resources that reside in the customer's tenant. It can be helpful to understand the differences in the way that they work, the scenarios that they help to enable, and how they can be used together.
> [!TIP] > Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](enterprise.md) can use the same processes and tools. ## Comparing Azure Lighthouse and Azure managed applications
-This table illustrates some high-level differences that may impact whether you might choose to use Azure Lighthouse or Azure managed applications. As noted below, you can also design a solution that uses them together.
+This table illustrates some high-level differences that may impact whether you might choose to use Azure Lighthouse or Azure managed applications. In some cases, you may want to design a solution that uses them together.
|Consideration |Azure Lighthouse |Azure managed applications | ||||
This table illustrates some high-level differences that may impact whether you m
### Azure Lighthouse
-With [Azure Lighthouse](../overview.md), a service provider can perform a wide range of management tasks directly on a customer's subscription (or resource group). This access is achieved through a logical projection, allowing service providers to sign in to their own tenant and access resources that belong to the customer's tenant. The customer can determine which subscriptions or resource groups to delegate to the service provider, and the customer maintains full access to those resources. They can also remove the service provider's access at any time.
+With [Azure Lighthouse](../overview.md), a service provider can perform a wide range of management tasks directly on a customer's subscription (or resource group). This access is achieved through a [logical projection](architecture.md#logical-projection), allowing service providers to sign in to their own tenant and access resources that belong to the customer's tenant. The customer can determine which subscriptions or resource groups to delegate to the service provider, and the customer maintains full access to those resources. They can also remove the service provider's access at any time.
To use Azure Lighthouse, customers are onboarded either by [deploying ARM templates](../how-to/onboard-customer.md) or through a [Managed Service offer in Azure Marketplace](managed-services-offers.md). You can track your impact on customer engagements by [linking your partner ID](../how-to/partner-earned-credit.md).
-Azure Lighthouse is typically used when a service provider will perform management tasks for a customer on an ongoing basis. To learn more about how Azure Lighthouse works at a technical level, see [Azure Lighthouse architecture](architecture.md).
+Azure Lighthouse is typically used when a service provider will perform management tasks for a customer on an ongoing basis. To learn more about how Azure Lighthouse works at a technical level, see [Azure Lighthouse architecture](architecture.md).
### Azure managed applications
lighthouse Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/overview.md
Title: What is Azure Lighthouse? description: Azure Lighthouse lets service providers deliver managed services for their customers with higher automation and efficiency at scale. Previously updated : 06/20/2022 Last updated : 05/10/2023
There are no additional costs associated with using Azure Lighthouse to manage A
## Cross-region and cloud considerations
-Azure Lighthouse is a non-regional service. You can manage delegated resources that are located in different [regions](../availability-zones/az-overview.md#regions). However, delegation of subscriptions across a [national cloud](../active-directory/develop/authentication-national-cloud.md) and the Azure public cloud, or across two separate national clouds, isn't supported.
+Azure Lighthouse is a non-regional service. You can manage delegated resources that are located in different [regions](../availability-zones/az-overview.md#regions). However, you can't delegate resources across a [national cloud](../active-directory/develop/authentication-national-cloud.md) and the Azure public cloud, or across two separate national clouds.
## Support for Azure Lighthouse
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
Get started with Azure Load Balancer by using Azure PowerShell to create an inte
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
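The version check, upgrade, and sign-in steps described above can be sketched as follows. This is a minimal sketch; `Update-Module` assumes the Az module was originally installed from the PowerShell Gallery:

```azurepowershell
# List the installed Az module versions
Get-Module -ListAvailable Az

# Upgrade if the version is older than required
# (assumes Az was installed from the PowerShell Gallery)
Update-Module -Name Az -Force

# Sign in to create a connection with Azure
Connect-AzAccount
```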
## Create a resource group
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
Get started with Azure Load Balancer by using Azure PowerShell to create a publi
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
To deploy a dual stack (IPV4 + IPv6) application using Standard Load Balancer, s
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Ipv6 Add To Existing Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md
This article shows you how to add IPv6 connectivity to an existing IPv
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Prerequisites
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
The changes that make the above an internal load balancer front-end configuratio
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
It is important to note that probes also have a timeout period. For example, if
* If you have multiple interfaces configured in your virtual machine, ensure you respond to the probe on the interface you received it on. You may need to source network address translate this address in the VM on a per interface basis.
-* Don't enable [TCP timestamps](https://tools.ietf.org/html/rfc1323). TCP timestamps can cause health probes to fail due to the VM's guest OS TCP stack dropping TCP packets. The dropped packets can cause the load balancer to mark the endpoint as down. TCP timestamps are routinely enabled by default on security hardened VM images and must be disabled.
- * Note that a probe definition is not mandatory or checked for when using Azure PowerShell, Azure CLI, Templates or API. Probe validation tests are only done when using the Azure Portal. * If the health probe fluctuates, the load balancer waits longer before it puts the backend endpoint back in the healthy state. This extra wait time protects the user and the infrastructure and is an intentional policy.
load-balancer Load Balancer Nat Pool Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md
The migration process will create a new Backend Pool for each Inbound NAT Pool e
* In order to migrate a Load Balancer's NAT Pools to NAT Rules, the Load Balancer SKU must be 'Standard'. To automate this upgrade process, see the steps provided in [Upgrade a basic load balancer used with Virtual Machine Scale Sets](upgrade-basic-standard-virtual-machine-scale-sets.md). * Virtual Machine Scale Sets associated with the target Load Balancer must use either a 'Manual' or 'Automatic' upgrade policy--'Rolling' upgrade policy is not supported. For more information, see [Virtual Machine Scale Sets Upgrade Policies](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) * Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell)
-* Install the [Azure PowerShell modules](/powershell/azure/install-az-ps)
+* Install the [Azure PowerShell modules](/powershell/azure/install-azure-powershell)
### Install the 'AzureLoadBalancerNATPoolMigration' module
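As a sketch, the prerequisite Az modules and the migration module can be installed from the PowerShell Gallery like this (`-Force` skips confirmation prompts):

```azurepowershell
# Install the Az modules and the NAT pool migration module for the current user
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Install-Module -Name AzureLoadBalancerNATPoolMigration -Scope CurrentUser -Repository PSGallery -Force
```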
load-balancer Load Balancer Tcp Idle Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-idle-timeout.md
To set the idle timeout and tcp reset, set values in the following load-balancin
* **IdleTimeoutInMinutes** * **EnableTcpReset**
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
Replace the following examples with the values from your resources:
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
In this article, you learn how to add and remove an inbound NAT rule for both ty
## Prerequisites - A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.-- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
Get started with Azure Load Balancer by using Azure PowerShell to create an inte
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Get started with Azure Load Balancer by using Azure PowerShell to create a publi
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Tutorial Cross Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-powershell.md
If you don’t have an Azure subscription, create a [free account](https://azure
- Azure PowerShell installed locally or Azure Cloud Shell.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create cross-region load balancer
load-balancer Tutorial Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-powershell.md
In this tutorial, you learn how to:
- For the purposes of this tutorial, the existing load balancer in the examples is named **myLoadBalancer**. - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
The PowerShell module performs the following functions:
- Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell) - Determine whether you have the latest Az PowerShell module installed (8.2.0)
- - Install the latest [Az PowerShell module](/powershell/azure/install-az-ps)
+ - Install the latest [Az PowerShell module](/powershell/azure/install-azure-powershell)
## Install the 'AzureBasicLoadBalancerUpgrade' module
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application usin
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Title: Configure customer-managed keys for encryption
-description: Learn how to configure customer-managed keys for your Azure Load Testing resource with Azure Key Vault
+description: Learn how to configure customer-managed keys for your Azure load testing resource with Azure Key Vault
Previously updated : 05/10/2022 Last updated : 05/09/2023
-# Configure customer-managed keys for your Azure Load Testing resource with Azure Key Vault
+# Configure customer-managed keys for Azure Load Testing with Azure Key Vault
Azure Load Testing automatically encrypts all data stored in your load testing resource with keys that Microsoft provides (service-managed keys). Optionally, you can add a second layer of security by also providing your own (customer-managed) keys. Customer-managed keys offer greater flexibility for controlling access and using key-rotation policies.
-The keys you provide are stored securely using [Azure Key Vault](../key-vault/general/overview.md). You can create a separate key for each Azure Load Testing resource you enable with customer-managed keys.
+The keys you provide are stored securely using [Azure Key Vault](../key-vault/general/overview.md). You can create a separate key for each Azure load testing resource you enable with customer-managed keys.
Azure Load Testing uses the customer-managed key to encrypt the following data in the load testing resource:
Azure Load Testing uses the customer-managed key to encrypt the following data i
## Limitations -- Customer-managed keys are only available for new Azure Load Testing resources. You should configure the key during resource creation.
+- Customer-managed keys are only available for new Azure load testing resources. You should configure the key during resource creation.
- Azure Load Testing can't automatically rotate the customer-managed key to use the latest version of the encryption key. You should update the key URI in the resource after the key is rotated in the Azure Key Vault. - Once customer-managed key encryption is enabled on a resource, it can't be disabled.
-## Configure your Azure Key Vault
-You can use a new or existing key vault to store customer-managed keys. The Azure Load Testing resource and key vault may be in different regions or subscriptions in the same tenant.
+## Configure your Azure key vault
-You have to set the **Soft Delete** and **Purge Protection** properties on your Azure Key Vault instance to use customer-managed keys with Azure Load Testing. Soft delete is enabled by default when you create a new key vault and can't be disabled. You can enable purge protection at any time.
+To use customer-managed encryption keys with Azure Load Testing, you need to store the key in Azure Key Vault. You can use an existing or create a new key vault. The load testing resource and key vault may be in different regions or subscriptions in the same tenant.
+
+You have to set the *Soft Delete* and *Purge Protection* properties on your key vault to use customer-managed keys with Azure Load Testing. Soft delete is enabled by default when you create a new key vault and can't be disabled. You can enable purge protection at any time. Learn more about [soft delete and purge protection in Azure Key Vault](/azure/key-vault/general/soft-delete-overview).
# [Azure portal](#tab/portal)
-To learn how to create a key vault with the Azure portal, see [Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). When you create the key vault, select **Enable purge protection**, as shown in the following image.
+Follow these steps to [verify if soft delete is enabled and enable it on a key vault](/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#verify-if-soft-delete-is-enabled-on-a-key-vault-and-enable-soft-delete). Soft delete is enabled by default when you create a new key vault.
+
+You can enable purge protection when you [create a new key vault](/azure/key-vault/general/quick-create-portal) by selecting the **Enable purge protection** settings.
To enable purge protection on an existing key vault, follow these steps:
To create a new key vault with PowerShell, install version 2.0.0 or later of the
The following example creates a new key vault with both soft delete and purge protection enabled. Remember to replace the placeholder values in brackets with your own values. ```azurepowershell
-$keyVault = New-AzKeyVault -Name <key-vault> `
- -ResourceGroupName <resource_group> `
+$keyVault = New-AzKeyVault -Name <key-vault-name> `
+ -ResourceGroupName <resource-group> `
-Location <location> ` -EnablePurgeProtection ```
-To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell).
+To enable purge protection on an existing key vault with PowerShell:
+
+```azurepowershell
+Update-AzKeyVault -VaultName <key-vault-name> -ResourceGroupName <resource-group> -EnablePurgeProtection
+```
# [Azure CLI](#tab/azure-cli)
-To create a new key vault using Azure CLI, call [az keyvault create](/cli/azure/keyvault#az-keyvault-create). Remember to replace the placeholder values in brackets with your own values:
+To create a new key vault using Azure CLI, call [az keyvault create](/cli/azure/keyvault#az-keyvault-create). Soft delete is enabled by default when you create a new key vault.
+
+The following example creates a new key vault with both soft delete and purge protection enabled. Remember to replace the placeholder values in brackets with your own values.
```azurecli az keyvault create \
- --name <key-vault> \
- --resource-group <resource_group> \
+ --name <key-vault-name> \
+ --resource-group <resource-group> \
--location <region> \ --enable-purge-protection ```
-To learn how to enable purge protection on an existing key vault with Azure CLI, see [Azure Key Vault recovery overview](../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
+To enable purge protection on an existing key vault with Azure CLI:
+
+```azurecli
+az keyvault update --subscription <subscription-id> -g <resource-group> -n <key-vault-name> --enable-purge-protection true
+```
## Add a key
-Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types, see [About keys](../key-vault/keys/about-keys.md).
+Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types in Azure Key Vault, see [About keys](/azure/key-vault/keys/about-keys).
# [Azure portal](#tab/portal)
To learn how to add a key with the Azure portal, see [Set and retrieve a key fro
To add a key with PowerShell, call [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey). Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples. ```azurepowershell
-$key = Add-AzKeyVaultKey -VaultName $keyVault.VaultName `
- -Name <key> `
+$key = Add-AzKeyVaultKey -VaultName <key-vault-name> `
+ -Name <key-name> `
-Destination 'Software' ```
To add a key with Azure CLI, call [az keyvault key create](/cli/azure/keyvault/k
```azurecli az keyvault key create \
- --name <key> \
- --vault-name <key-vault>
+ --name <key-name> \
+ --vault-name <key-vault-name>
```
-## Add an access policy to your Azure Key Vault
+## Add an access policy to your key vault
+
+The user-assigned managed identity for accessing the customer-managed keys in Azure Key Vault must have appropriate permissions to access the key vault.
-The user-assigned managed identity that you use to configure customer-managed keys on Azure Load Testing resource must have appropriate permissions to access the key vault.
+1. In the [Azure portal](https://portal.azure.com), go to the Azure key vault instance that you plan to use to host your encryption keys.
-1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access Policies** from the left menu:
+1. Select **Access Policies** from the left menu.
- :::image type="content" source="media/how-to-configure-customer-managed-keys/access-policies-azure-key-vault.png" alt-text="Screenshot that shows access policies option in Azure Key Vault.":::
+ :::image type="content" source="media/how-to-configure-customer-managed-keys/access-policies-azure-key-vault.png" alt-text="Screenshot that shows the access policies option for a key vault in the Azure portal.":::
1. Select **+ Add Access Policy**.
-1. Under the **Key permissions** drop-down menu, select **Get**, **Unwrap Key**, and **Wrap Key** permissions:
+1. In the **Key permissions** drop-down menu, select **Get**, **Unwrap Key**, and **Wrap Key** permissions.
:::image type="content" source="media/how-to-configure-customer-managed-keys/azure-key-vault-permissions.png" alt-text="Screenshot that shows Azure Key Vault permissions.":::
-1. Under **Select principal**, select **None selected**.
+1. In **Select principal**, select **None selected**.
-1. Search for the user-assigned managed identity you created, and then select it from the list.
+1. Search for the user-assigned managed identity you created earlier, and select it from the list.
1. Choose **Select** at the bottom. 1. Select **Add** to add the new access policy.
-1. Select **Save** on the Key Vault instance to save all changes.
+1. Select **Save** on the key vault instance to save all changes.
-## Configure customer-managed keys for a new Azure Load Testing resource
+## Configure customer-managed keys for a new load testing resource
-To configure customer-managed keys for a new Azure Load Testing resource, follow these steps:
+To configure customer-managed keys for a new load testing resource, follow these steps:
# [Azure portal](#tab/portal)
-1. In the Azure portal, navigate to the **Azure Load Testing** page, and select the **Create** button to create a new resource.
-
-1. Follow the steps outlined in [create an Azure Load Testing resource](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource) to fill out the fields on the **Basics** tab.
+1. Follow these steps to [create an Azure load testing resource in the Azure portal](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource) and fill out the fields on the **Basics** tab.
-1. Go to the **Encryption** tab. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
+1. Go to the **Encryption** tab, and then select *Customer-managed keys (CMK)* for the **Encryption type** field.
1. In the **Key URI** field, paste the URI/key identifier of the Azure Key Vault key including the key version.
To configure customer-managed keys for a new Azure Load Testing resource, follow
1. Select **Review + create** to validate and create the new resource. # [PowerShell](#tab/powershell)
You can deploy an ARM template using PowerShell to automate the creation of your
```
-For example, an Azure Load Testing resource might look like the following:
+The following code sample shows an ARM template for creating a load testing resource with customer-managed keys enabled:
```json {
For example, an Azure Load Testing resource might look like the following:
} ```
-Deploy the above template to a resource group, using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+Deploy the above template to a resource group by using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
```azurepowershell New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template>
You can deploy an ARM template using Azure CLI to automate the creation of your
} ```
-For example, an Azure Load Testing resource might look like the following:
+The following code sample shows an ARM template for creating a load testing resource with customer-managed keys enabled:
```json {
For example, an Azure Load Testing resource might look like the following:
} ```
-Deploy the above template to a resource group, using [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create):
+Deploy the above template to a resource group by using [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create):
```azurecli-interactive az deployment group create --resource-group <resource-group-name> --template-file <path-to-template>
az deployment group create --resource-group <resource-group-name> --template-fil
## Change the managed identity
-You can change the managed identity for customer-managed keys for an existing Azure Load Testing resource at any time.
+You can change the managed identity for customer-managed keys for an existing load testing resource at any time.
-1. Navigate to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. On the **Settings** page, select **Encryption**.
- The **Encryption type** shows the encryption type you selected at resource creation time.
+ The **Encryption type** shows the encryption type that was used for creating the load testing resource.
1. If the encryption type is **Customer-managed keys**, select the type of identity to use to authenticate to the key vault. The options include **System-assigned** (the default) or **User-assigned**.
You can change the managed identity for customer-managed keys for an existing Az
1. Save your changes. > [!NOTE] > The selected managed identity should have access granted on the Azure Key Vault.
You can change the managed identity for customer-managed keys for an existing Az
You can change the key that you're using for Azure Load Testing encryption at any time. To change the key with the Azure portal, follow these steps:
-1. Navigate to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. On the **Settings** page, select **Encryption**. The **Encryption type** shows the encryption type selected when the resource was created.
-1. If the selected encryption type is *Customer-managed keys*, you can edit the key URI field with the new key URI.
+1. If the selected encryption type is *Customer-managed keys*, you can edit the **Key URI** field with the new key URI.
1. Save your changes. ## Key rotation
-You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key, in Azure Key Vault, update the key version or create a new key. You can then update the Azure Load Testing resource to [encrypt data using the new key URI](#change-the-key).
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key, in Azure Key Vault, update the key version or create a new key. You can then update the load testing resource to [encrypt data using the new key URI](#change-the-key).
## Frequently asked questions
-### Is there an additional charge to enable customer-managed keys?
+### Is there an extra charge to enable customer-managed keys?
No, there's no charge to enable this feature.
-### Are customer-managed keys supported for existing Azure Load Testing resources?
+### Are customer-managed keys supported for existing Azure load testing resources?
-This feature is currently only available for new Azure Load Testing resources.
+This feature is currently only available for new Azure load testing resources.
-### How can I tell if customer-managed keys are enabled on my Azure Load Testing account?
+### How can I tell if customer-managed keys are enabled on my Azure load testing resource?
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. Go to the **Encryption** item in the left navigation bar. 1. You can verify the **Encryption type** on your resource. ### How do I revoke an encryption key?
-You can revoke a key by disabling the latest version of the key in Azure Key Vault. Alternatively, to revoke all keys from an Azure Key Vault instance, you can delete the access policy granted to the managed identity of the Azure Load Testing resource.
+You can revoke a key by disabling the latest version of the key in Azure Key Vault. Alternatively, to revoke all keys from a key vault instance, you can delete the access policy granted to the managed identity of the load testing resource.
When you revoke the encryption key, you may be able to run tests for about 10 minutes, after which the only available operation is resource deletion. It's recommended to rotate the key instead of revoking it to manage resource security and retain your data.
load-testing How To Create Manage Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test-runs.md
+
+ Title: Create and manage test runs
+
+description: Learn how to create and manage test runs in Azure Load Testing with the Azure portal.
++++ Last updated : 05/10/2023+++
+# Create and manage test runs in Azure Load Testing
+
+When you run a load test, Azure Load Testing creates a test run associated with the test. Learn how to manage [test runs](./concept-load-testing-concepts.md#test-run) for a load test in Azure Load Testing.
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure load testing resource. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
+
+## View test runs
+
+Test runs are associated with a load test in Azure Load Testing. To view the test runs for a test in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, to view the list of tests.
+
+1. View the test runs for a test by selecting the test name in the list.
+
+ :::image type="content" source="media/how-to-create-manage-test-runs/view-test-runs.png" alt-text="Screenshot that shows the list of test runs for a load test in the Azure portal.":::
+
+1. Select **ellipsis (...)** for a test run to perform more actions on the test run.
+
+ :::image type="content" source="media/how-to-create-manage-test-runs/test-run-context-menu.png" alt-text="Screenshot that shows the test run context menu in the Azure portal to download input files, results file, and share a link.":::
+
+ - Select **Download input file** to download all input files for running the test, such as the JMeter test script, input data files, and user property files. The download also contains the [load test configuration YAML file](./reference-test-config-yaml.md).
+
+ > [!TIP]
+ > You can use the downloaded test configuration YAML file for [setting up automated load testing in a CI/CD pipeline](./tutorial-identify-performance-regression-with-cicd.md).
+
+ - Select **Download results file** to download the JMeter test results CSV file. This file contains an entry for each web request. Learn more about [exporting load test results](./how-to-export-test-results.md).
+
+ - Select **Share** to get a direct link to the test run dashboard in the Azure portal. To view the test run dashboard, you need to have access granted to the load testing resource. Learn more about [users and roles in Azure Load Testing](./how-to-assign-roles.md).
+
+## Edit a test run
+
+You can modify a test run by adding or removing Azure app components or resource metrics. You can't update the other test configuration settings.
+
+To edit a test run in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, to view the list of tests.
+
+1. Go to the test details by selecting the test name in the list of tests.
+
+1. Go to the test run dashboard by selecting a test run name in the list of runs.
+
+1. Select **App Components** or **Configure metrics** to add or remove app components or resource metrics.
+
+ The test run dashboard automatically reflects the updates to app components and metrics.
+
+ :::image type="content" source="media/how-to-create-manage-test-runs/test-run-app-components-metrics.png" alt-text="Screenshot that shows how to configure app components and resource metrics for a test run in the Azure portal.":::
+
+## Rerun a test run
+
+When you rerun a test run, Azure Load Testing uses the test configuration that is associated with the *test run*. If you've made changes to the configuration of the *test* afterwards, those changes aren't taken into account for rerunning the test run.
+
+To rerun a test run in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, to view the list of tests.
+
+1. Go to the test details by selecting the test name in the list of tests.
+
+1. Go to the test run dashboard by selecting a test run name in the list of runs.
+
+1. Select **Rerun**.
+
+1. In the **Rerun** page, optionally update the test run description and test parameters.
+
+1. Select **Rerun** to start the load test.
+
+## Stop a test run
+
+To stop a test run in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, to view the list of tests.
+
+1. Go to the test details by selecting the test name in the list of tests.
+
+1. Select **ellipsis (...)** > **Stop** to stop a running test run.
+
+## Delete a test run
+
+To delete a test run in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, to view the list of tests.
+
+1. Go to the test details by selecting the test name in the list of tests.
+
+1. Select one or more test runs from the list by checking the corresponding checkboxes.
+
+1. Select **Delete test runs**.
+
+    Alternately, go to the test run dashboard by selecting a test run name in the list of runs, and then select **Delete test run**.
+
+1. In the **Delete test run** page, select **Delete** to delete the test run.
+
+## Compare test runs
+
+To identify performance degradation over time, you can visually compare up to five test runs in the dashboard. Learn more about [comparing test runs in Azure Load Testing](./how-to-compare-multiple-test-runs.md).
+
+## Next steps
+
+- [Create and manage load tests](./how-to-create-manage-test.md)
+- [Set up automated load testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md)
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
Previously updated : 05/30/2022 Last updated : 05/10/2023 <!-- Intent: As a user I want to configure the test plan for a load test, so that I can successfully run a load test --> # Create and manage tests in Azure Load Testing
-Learn how to create and manage [tests](./concept-load-testing-concepts.md#test) in your Azure Load Testing resource.
+Learn how to create and manage [load tests](./concept-load-testing-concepts.md#test) in your Azure load testing resource.
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
+* An Azure load testing resource. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
## Create a test
-There are two options to create a load test for Azure Load Testing resource in the Azure portal:
+There are two options to create a load test in the Azure portal:
-- Create a quick test by using a web application URL.
+- Create a quick test by using a web application URL (URL-based test).
- Create a test by uploading a JMeter test script (JMX). :::image type="content" source="media/how-to-create-manage-test/create-test-dropdown.png" alt-text="Screenshot that shows the options to create a new test in the Azure portal."::: ### Create a quick test by using a URL
-To load test a single web endpoint, use the quick test experience in the Azure portal. Specify the application endpoint URL and basic load parameters to create and run a load test. For more information, see our [quickstart for creating and running a test by using a URL](./quickstart-create-and-run-load-test.md).
+To load test a single HTTP endpoint, you can use the quick test experience in the Azure portal, also known as a *URL-based load test*. Create a load test without prior knowledge of JMeter scripting by entering the target URL and basic load parameters.
-1. In the [Azure portal](https://portal.azure.com), and go to your Azure Load Testing resource.
+When you create a quick test, Azure Load Testing generates the corresponding JMeter script, determines the load test configuration, and runs the load test.
+
+To specify the target load, choose from two options. For each option, you can then enter different settings to define the application load.
+
+| Load type | Description | Load settings |
+|-|-|--|
+| **Virtual users** | The load test simulates the target number of virtual users. The target is reached in increments during the ramp-up time. Azure Load Testing configures the total number of test engine instances as follows:<br/> `#instances = #virtual users / 250`<br/><br/>Each test engine instance then simulates (#total virtual users / #test engines) virtual users.<br/><br/>The maximum number of virtual users for a quick test is 11250. | - Number of virtual users<br/>- Test duration in seconds<br/>- Ramp-up time in seconds |
+| **Requests per second** | The load test simulates a target number of requests per second (RPS), given an estimated endpoint response time.<br/>Azure Load Testing determines the total number of virtual users for the load test based on the RPS and response time: <br/>`#virtual users = (RPS * response time) / 1000`<br/><br/>The service then configures the number of test engine instances and virtual users per instance based on the total number of virtual users. | - Requests per second<br/>- Response time in milliseconds<br/>- Test duration in seconds<br/>- Ramp-up time in seconds |
+
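The sizing rules in the table can be sketched in code. This is a hypothetical helper for illustration only, not the service's implementation; in particular, the rounding behavior when the target isn't an exact multiple is an assumption:

```python
import math

def engine_instances(virtual_users: int) -> int:
    """'Virtual users' load type: #instances = #virtual users / 250."""
    # Rounding up is an assumption for targets that aren't multiples of 250.
    return math.ceil(virtual_users / 250)

def virtual_users_for_rps(rps: int, response_time_ms: int) -> int:
    """'Requests per second' load type: #virtual users = (RPS * response time) / 1000."""
    return math.ceil(rps * response_time_ms / 1000)

print(engine_instances(1000))           # 1000 / 250 = 4 engine instances
print(virtual_users_for_rps(100, 500))  # (100 * 500) / 1000 = 50 virtual users
```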
+To create a quick test in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
1. Select **Quick test** on the **Overview** page. Alternately, select **Tests** in the left pane, select **+ Create**, and then select **Create a quick test**.
-1. Enter the URL and load parameters.
+1. Enter the target URL and load parameters.
:::image type="content" source="media/how-to-create-manage-test/create-quick-test.png" alt-text="Screenshot that shows the page for creating a quick test in the Azure portal."::: 1. Select **Run test** to start the load test. Azure Load Testing automatically generates a JMeter test script, and configures your test to scale across multiple test engines, based on your load parameters.
-
- You can edit the test configuration at time after creating it. For example to [monitor server-side metrics](./how-to-monitor-server-side-metrics.md), [configure high scale load](./how-to-high-scale-load.md), or to edit the generated JMX file.
+
+After running a quick test, you can further [edit the load test configuration](#edit-a-test). For example, you can add app components to [monitor server-side metrics](./how-to-monitor-server-side-metrics.md), [configure high scale load](./how-to-high-scale-load.md), or edit the generated JMeter script.
### Create a test by using a JMeter script
To reuse an existing JMeter test script, or for more advanced test scenarios, cr
If you're not familiar with creating a JMeter script, see [Getting started with Apache JMeter](https://jmeter.apache.org/usermanual/get-started.html).
-1. In the [Azure portal](https://portal.azure.com), and go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
1. Select **Create** on the **Overview** page.
If you're not familiar with creating a JMeter script, see [Getting started with
:::image type="content" source="media/how-to-create-manage-test/create-jmeter-test.png" alt-text="Screenshot that shows the page for creating a test with a J Meter script in the Azure portal.":::
-## Test plan
+#### Test plan
-The test plan contains all files that are needed for running your load test. At a minimum, the test plan should contain one `*.jmx` JMeter script. Azure Load Testing only supports one JMX file per load test. In addition, you can include a user property file, configuration files, or input data files.
+The test plan contains all files that are needed for running your load test. At a minimum, the test plan should contain one `*.jmx` JMeter script. Azure Load Testing only supports one JMX file per load test.
+
+Alongside the test script, you can upload a user property file, configuration files, or input data files, such as CSV files.
1. Go to the **Test plan**.
+
1. Select all files from your local machine, and upload them to Azure.
-<!-- 1. Optionally, upload a zip archive instead of uploading the individual data and configuration files.
+ Azure Load Testing stores all files in a single repository. If your test script references configuration or data files, make sure to remove any relative path names in the JMX file.
- Azure Load Testing will unpack the zip archive on the test engine(s) when provisioning the test.
-
- > [!IMPORTANT]
- > The JMX file and user properties file can't be placed in the zip archive.
- >
- > The maximum upload size for a zip archive is 50 MB.
+1. If your test uses CSV input data, you can choose to enable **Split CSV evenly between test engines**.
- :::image type="content" source="media/how-to-create-manage-test/test-plan-upload-zip.png" alt-text="Screenshot that shows the test plan page for creating a test in the Azure portal, highlighting an uploaded zip archive.":::
- -->
-If you've previously created a quick test, you can edit the test plan at any time. You can add files to the test plan, or download and edit the generated JMeter script. Download a file by selecting the file name in the list.
+ By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file is split evenly.
-### Split CSV input data across test engines
+ For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance processes 1/10th of the customers. Learn more about how to [read a CSV file in your load test](./how-to-read-csv-data.md).
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
+ :::image type="content" source="media/how-to-create-manage-test/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
-For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
+> [!TIP]
+> You can download a file from the **Test plan** tab by selecting the file name in the list. For example, you might download the generated JMeter script for a quick test, modify it, and then upload the file again.
-Azure Load Testing doesn't preserve the header row in your CSV file when splitting a CSV file. For more information about how to configure your JMeter script and CSV file, see [Read data from a CSV file](./how-to-read-csv-data.md).
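The even split of input data described above can be illustrated with a short sketch. The helper name and chunking strategy here are assumptions for illustration; the `rows` list represents data rows without the CSV header:

```python
def split_csv_rows(rows: list, engine_count: int) -> list:
    """Divide data rows as evenly as possible across test engine instances."""
    base, extra = divmod(len(rows), engine_count)
    chunks, start = [], 0
    for i in range(engine_count):
        size = base + (1 if i < extra else 0)  # first `extra` engines get one extra row
        chunks.append(rows[start:start + size])
        start += size
    return chunks

customers = [f"customer-{i}" for i in range(100)]
print([len(c) for c in split_csv_rows(customers, 10)])  # each of 10 engines gets 10 rows
```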
+#### Parameters
+You can use parameters to make your test plan configurable instead of hard-coding values in the JMeter script. Specify key-value pairs in the load test configuration, and reference the value in the JMeter script by using the parameter name. For more information, see [Parameterize a load test with environment variables and secrets](./how-to-parameterize-load-tests.md).
-## Parameters
-You can use parameters to make your test plan configurable. Specify key-value pairs in the load test configuration, and then reference their value in the JMeter script by using the parameter name.
+1. Specify environment variables to pass nonsensitive parameters to your test script.
-There are two types of parameters:
+ For example, you could use an environment variable to pass the target domain name or port number to the test script. Learn more about [using environment variables in a load test](./how-to-parameterize-load-tests.md).
-- Environment variables. For example, to specify the domain name of the web application.-- Secrets, backed by Azure Key Vault. For example, to pass an authentication token in an HTTP request.
+1. Add references to secrets, backed by Azure Key Vault.
-You can specify the managed identity to use for accessing your key vault.
+ Use secrets to pass sensitive parameters, such as passwords or authentication tokens, to the test script. You store the secret values in your Azure key vault, and add a reference to the secret in the load test configuration. You can then reference the secret in your script by using the parameter name. Azure Load Testing then retrieves the secret value from Azure Key Vault.
-For more information, see [Parameterize a load test with environment variables and secrets](./how-to-parameterize-load-tests.md).
+ Learn more about [using secrets in a load test](./how-to-parameterize-load-tests.md).
+1. Add references to client certificates, backed by Azure Key Vault.
+
+ If you're load testing application endpoints that use certificate-based authentication, you can add the certificates to your Azure key vault and add a reference to the certificate in the load test configuration. Azure Load Testing automatically injects the certificates into the web requests in your JMeter script.
-## Load
+ Learn more about [using certificate-based authentication with Azure Load Testing](./how-to-test-secured-endpoints.md#authenticate-with-client-certificates).
-Configure the number of test engine instances, and Azure Load Testing automatically scales your load test across all instances. You configure the number of virtual users, or threads, in the JMeter script and the engine instances then run the script in parallel. For more information, see [Configure a test for high-scale load](./how-to-high-scale-load.md).
+1. Select the managed identity that Azure Load Testing uses to access your key vault for secrets or certificates.
+
+ Learn more about [using managed identities with Azure Load Testing](./how-to-use-a-managed-identity.md).
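If you manage tests through CI/CD, the same parameter types can be declared in the test configuration YAML file. The following fragment is a sketch based on the [test configuration YAML syntax](./reference-test-config-yaml.md); all names, Key Vault URLs, and the identity resource ID are placeholder values:

```yaml
# Sketch of a load test configuration with parameters (placeholder values).
version: v0.1
testId: sample-test
testPlan: sample-test.jmx
engineInstances: 1
env:                       # nonsensitive parameters, readable from the JMeter script
  - name: webapp_domain
    value: www.contoso.com
secrets:                   # references to Azure Key Vault secrets
  - name: app_token
    value: https://contoso-kv.vault.azure.net/secrets/app-token/<version>
certificates:              # client certificates, also backed by Azure Key Vault
  - name: client_cert
    value: https://contoso-kv.vault.azure.net/certificates/client-cert/<version>
keyVaultReferenceIdentity: <resource ID of the user-assigned managed identity>
```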
+
+#### Load
+
+1. Specify the number of test engine instances.
+
+ Azure Load Testing automatically scales your load test across all instances and runs the JMeter test script in parallel on each instance. The total number of simulated users equals the number of virtual users (threads) you specify in the JMeter script, multiplied by the number of test engine instances. For more information, see [Configure a test for high-scale load](./how-to-high-scale-load.md).
+
+1. Configure virtual network connectivity.
+
+ You can connect your load test to an Azure virtual network for load testing privately hosted or on-premises endpoints. Learn more about [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).
+
+ To connect to a virtual network, select the *Private* **Traffic mode**, and then select the **Virtual network** and **Subnet**.
:::image type="content" source="media/how-to-create-manage-test/configure-test-engine-instances.png" alt-text="Screenshot that shows how to configure the number of test engine instances when creating a test in the Azure portal.":::
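For tests managed through CI/CD, the load settings above map to properties in the test configuration YAML file. A sketch, assuming the [test configuration YAML syntax](./reference-test-config-yaml.md); the subnet resource ID is a placeholder:

```yaml
# Sketch: load settings in the test configuration YAML (placeholder values).
version: v0.1
testId: sample-test
testPlan: sample-test.jmx
engineInstances: 4   # the JMeter script runs in parallel on 4 engine instances
# Only needed for the private traffic mode:
subnetId: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>
```

For example, with 250 virtual users (threads) configured in the JMeter script, this configuration simulates 4 x 250 = 1,000 concurrent users.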
-## Test criteria
+#### Test criteria
+
+1. Specify test failure criteria based on client metrics.
+
+ When the load test surpasses the threshold for a metric, the load test gets the **Failed** status. Azure Load Testing currently supports the following client-side metrics for fail criteria:
-You can specify test failure criteria based on client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
+ - Response time
+ - Requests per second
+ - Total number of requests
+ - Latency
+ - Error percentage
-You can use the following client metrics:
+ You can specify fail criteria for the entire load test, or assign them to specific requests in the JMeter script. For example, you can validate that the home page response time doesn't exceed a specific threshold. For more information, see [Configure test fail criteria](./how-to-define-test-criteria.md).
-- Average **Response time**.
-- **Error** percentage.
+1. Configure auto stop criteria.
+
+ Azure Load Testing can automatically stop a load test run when the error rate surpasses a given threshold. You can enable or disable this functionality, and configure the specific error rate threshold and time window. Learn more about [configuring auto stop criteria](./how-to-define-test-criteria.md#auto-stop-configuration).
:::image type="content" source="media/how-to-create-manage-test/configure-test-criteria.png" alt-text="Screenshot that shows how to configure test criteria when creating a test in the Azure portal.":::
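If you also define the test in a configuration YAML file, fail and auto stop criteria can be expressed declaratively. A sketch, assuming the [test configuration YAML syntax](./reference-test-config-yaml.md); the thresholds are placeholder values:

```yaml
# Sketch: fail criteria and auto stop settings in the test configuration YAML.
version: v0.1
testId: sample-test
testPlan: sample-test.jmx
engineInstances: 1
failureCriteria:
  - avg(response_time_ms) > 300   # fail if the average response time exceeds 300 ms
  - percentage(error) > 20        # fail if more than 20% of requests result in an error
autoStop:
  errorPercentage: 80             # stop the test run when the error rate exceeds 80%
  timeWindow: 60                  # evaluated over a 60-second time window
```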
-## Monitoring
+#### Monitoring
-For Azure-hosted applications, Azure Load Testing can capture detailed resource metrics for the Azure app components. These metrics enable you to [analyze application performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).
+For Azure-hosted applications, add Azure app components to monitor during the load test run. Azure Load Testing captures detailed resource metrics for the selected Azure app components. Use these metrics to [identify potential performance bottlenecks in your application](./tutorial-identify-bottlenecks-azure-portal.md).
-When you edit a load test, you can select the Azure app component that you want to monitor. Azure Load Testing selects the most relevant resource metrics. You can add or remove resource metrics for each of the app components at any time.
+When you add an app component, Azure Load Testing automatically selects the most relevant resource metrics for the component. You can add or remove resource metrics for each of the app components at any time.
:::image type="content" source="media/how-to-create-manage-test/configure-monitoring.png" alt-text="Screenshot that shows how to configure the Azure app components to monitor when creating a test in the Azure portal.":::
When the load test finishes, the test result dashboard shows a graph for each of
For more information, see [Configure server-side monitoring](./how-to-monitor-server-side-metrics.md).
-## Manage
+## Run a test
+
+When you run or rerun a load test, Azure Load Testing uses the most recent load test configuration settings to create a new test run. If you [edit a test](#edit-a-test) configuration and check the **Run test after applying changes** checkbox, the load test automatically starts after saving the changes.
+
+To run a load test in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, and go to the test details by selecting the test name in the list.
+
+1. Select **Run**.
+
+1. On the **Run** page, you can choose to enter a test run description and override load test parameters.
+1. Select **Run** to start the load test.
+
The service creates a new test run with the description that you provided.
+
+## Edit a test
+
+When you edit the load test configuration settings, the updated settings apply to *future* test runs. When you rerun a previous *test run*, Azure Load Testing uses the settings of that test run, not the updated settings of the test.
+
+To edit a test in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, and select a test from the list by checking the corresponding checkbox.
+
+1. Select **Edit** to update the test configuration settings.
+
+ :::image type="content" source="media/how-to-create-manage-test/edit-load-test.png" alt-text="Screenshot that shows how to select and edit a load test in the Azure portal.":::
+
+ Alternatively, select the test from the list, and then select **Configure** > **Test**.
+
+1. Select **Apply** in the **Edit test** page to save the changes.
+
+ The next time you [run the test](#run-a-test), the updated test configuration settings are used.
+
+## Delete a test
+
+To delete a test in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane, and select a test from the list by checking the corresponding checkbox.
-If you already have a load test, you can start a new run, delete the load test, edit the test configuration, or compare test runs.
+1. Select **Delete test** to delete the test.
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
-1. On the left pane, select **Tests** to view the list of load tests, and then select your test.
+ :::image type="content" source="media/how-to-create-manage-test/delete-load-test.png" alt-text="Screenshot that shows how to select and delete a load test in the Azure portal.":::
+ Alternatively, select the test from the list, and then select **Delete test** on the test details page.
-You can perform the following actions:
+1. On the **Delete test** page, select **Delete** to confirm the deletion of the test.
-- Refresh the list of test runs.
-- Start a new test run. The run uses the current test configuration settings.
-- Delete the load test. All test runs for the load test are also deleted.
-- Configure the test configuration:
- - Configure the test plan. You can add or remove any of the files for the load test. If you want to update a file, first remove it and then add the updated version.
- - Add or remove Azure app components.
- - Configure resource metrics for the app components. Azure Load Testing automatically selects the relevant resource metrics for each app component. Add or remove metrics for any of the app components in the load test.
-- [Compare test runs](./how-to-compare-multiple-test-runs.md). Select two or more test runs in the list to visually compare them in the results dashboard.
+> [!CAUTION]
+> When you delete a test, all test runs, logs, results, and metrics data are also deleted.
## Next steps
+- [Create and manage test runs](./how-to-create-manage-test-runs.md)
- [Identify performance bottlenecks with Azure Load Testing in the Azure portal](./quickstart-create-and-run-load-test.md)
- [Set up automated load testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md)
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
Previously updated : 05/23/2022 Last updated : 05/09/2023
-zone_pivot_groups: load-testing-config
# Read data from a CSV file in JMeter with Azure Load Testing
-In this article, you'll learn how to read data from a comma-separated value (CSV) file in JMeter with Azure Load Testing. You can use the JMeter [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) in your test script.
+In this article, you learn how to read data from a comma-separated value (CSV) file in JMeter with Azure Load Testing. You can use the JMeter [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) in your test script.
-Use data from an external CSV file to make your JMeter test script configurable. For example, you might invoke an API for each entry in a customers CSV file.
+Use data from an external CSV file to make your JMeter test script configurable. For example, you might iterate over all customers in a CSV file to pass the customer details into an API request.
Get started by [cloning or downloading the samples project from GitHub](https://github.com/Azure-Samples/azure-load-testing-samples/tree/main/jmeter-read-csv).
In this article, you learn how to:
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
+* An Azure load testing resource. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
* An Apache JMeter test script (JMX). * (Optional) Apache JMeter GUI to author your test script. To install Apache JMeter, see [Apache JMeter Getting Started](https://jmeter.apache.org/usermanual/get-started.html). ## Configure your JMeter script
-In this section, you'll configure your Apache JMeter script to reference the external CSV file. You'll use a [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) to read data from a CSV file.
+In this section, you configure your Apache JMeter script to reference the external CSV file. You use a [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) to read data from a CSV file.
-Azure Load Testing uploads the JMX file and all related files in a single folder. When you reference an external file in your JMeter script, verify that your only use the file name and remove any file path references.
+> [!IMPORTANT]
+> Azure Load Testing uploads the JMX file and all related files in a single folder. When you reference an external file in your JMeter script, verify that you only use the file name and *remove any file path references*.
To edit your JMeter script by using the Apache JMeter GUI:
- 1. Select the **CSV Data Set Config** element in your test plan.
-
- 1. Update the **Filename** information and remove any file path reference.
-
- 1. Optionally, enter the CSV field names in **Variable Names**, when you split the CSV file across test engines.
-
- Azure Load Testing doesn't preserve the header row when splitting your CSV file. Provide the variable names in the **CSV Data Set Config** element instead of using a header row.
-
- :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the JMeter UI to configure a C S V Data Set Config element.":::
-
- 1. Repeat the previous steps for every **CSV Data Set Config** element in the script.
-
- 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
-
-To edit your JMeter script by using Visual Studio Code or your editor of preference:
-
- 1. Open the JMX file in Visual Studio Code.
+1. Select the **CSV Data Set Config** element in your test script.
- 1. For each `CSVDataSet`:
+1. Update the **Filename** information and remove any file path reference.
- 1. Update the `filename` element and remove any file path reference.
+1. Optionally, enter the CSV field names in **Variable Names**, when you split the CSV file across test engines.
- :::code language="xml" source="~/azure-load-testing-samples/jmeter-read-csv/read-from-csv.jmx" range="30-41" highlight="4":::
+ Azure Load Testing doesn't preserve the header row when splitting your CSV file. Provide the variable names in the **CSV Data Set Config** element instead of using a header row.
- 1. Add the CSV field names as a comma-separated list in `variableNames`.
+ :::image type="content" source="media/how-to-read-csv-data/update-csv-data-set-config.png" alt-text="Screenshot that shows the JMeter UI to configure a C S V Data Set Config element.":::
- :::code language="xml" source="~/azure-load-testing-samples/jmeter-read-csv/read-from-csv.jmx" range="30-41" highlight="10":::
+1. Repeat the previous steps for every **CSV Data Set Config** element in the script.
- 1. Save the JMeter script and add it to your [test plan](./how-to-create-manage-test.md#test-plan).
+1. Save the JMeter script and [upload the script to your load test](./how-to-create-manage-test.md#test-plan).
-## Add a CSV file to your load test
+## Add the CSV file to your load test
-When you reference an external file in your JMeter script, upload this file to your load test. When the load starts, Azure Load Testing copies all files to a single folder on each of the test engines instances.
+When you run a load test with Azure Load Testing, upload all external files alongside the JMeter test script. When the load test starts, Azure Load Testing copies all files to a single folder on each of the test engine instances.
> [!IMPORTANT] > Azure Load Testing doesn't preserve the header row when splitting your CSV file. Before you add the CSV file to the load test, remove the header row from the file.
+# [Azure portal](#tab/portal)
To add a CSV file to your load test by using the Azure portal:
- 1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+ 1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
- 1. On the left pane, select **Tests** to view a list of tests.
+ 1. On the left pane, select **Tests** to view a list of tests.
- >[!TIP]
- > To limit the number of tests to display in the list, you can use the search box and the **Time range** filter.
-
1. Select your test from the list by selecting the checkbox, and then select **Edit**. :::image type="content" source="media/how-to-read-csv-data/edit-test.png" alt-text="Screenshot that shows the list of load tests and the 'Edit' button.":::
To add a CSV file to your load test by using the Azure portal:
1. Select **Apply** to modify the test and to use the new configuration when you rerun it.
+# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-identify-performance-regression-with-cicd.md).
To add a CSV file to your load test:
1. Add the CSV file to the `configurationFiles` setting. You can use wildcards or specify multiple individual files.
- ```yaml
- testName: MyTest
- testPlan: SampleApp.jmx
- description: Run a load test for my sample web app
- engineInstances: 1
- configurationFiles:
- - search-params.csv
- ```
- > [!NOTE]
- > If you store the CSV file in a separate folder, specify the file with a relative path name. For more information, see the [Test configuration YAML syntax](./reference-test-config-yaml.md).
-
+ ```yaml
+ testName: MyTest
+ testPlan: SampleApp.jmx
+ description: Run a load test for my sample web app
+ engineInstances: 1
+ configurationFiles:
+ - search-params.csv
+ ```
+ > [!NOTE]
+ > If you store the CSV file in a separate folder, specify the file with a relative path name. For more information, see the [Test configuration YAML syntax](./reference-test-config-yaml.md).
+
1. Save the YAML configuration file and commit it to your source control repository.
- The next time the CI/CD workflow runs, it will use the updated configuration.
+ The next time the CI/CD workflow runs, it will use the updated configuration.
+ ## Split CSV input data across test engines
-By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file will be split evenly.
+By default, Azure Load Testing copies your input files unmodified to all test engine instances, and each test engine processes the entire CSV file. Alternatively, Azure Load Testing enables you to split the CSV input data evenly across all engine instances. If you have multiple CSV files, each file is split evenly.
-For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
+For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance processes 1/10th of the customers.
> [!IMPORTANT] > Azure Load Testing doesn't preserve the header row when splitting your CSV file.
For example, if you have a large customer CSV input file, and the load test runs
To configure your load test to split input CSV files:
+# [Azure portal](#tab/portal)
+
+1. Go to the **Test plan** tab for your load test.
-1. Go to the **Test plan** page for your load test.
1. Select **Split CSV evenly between Test engines**. :::image type="content" source="media/how-to-read-csv-data/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
To configure your load test to split input CSV files:
1. Select **Apply** to confirm the configuration changes. The next time you run the test, Azure Load Testing splits and processes the CSV file evenly across the test engines.
+# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
1. Open your YAML test configuration file in Visual Studio Code or your editor of choice.
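 For reference, splitting is controlled by a single property in the test configuration YAML file. A sketch, assuming the [test configuration YAML syntax](./reference-test-config-yaml.md); the file names are placeholders:

 ```yaml
 # Sketch: enable CSV splitting across engines in the test configuration YAML.
 version: v0.1
 testId: sample-test
 testPlan: sample-test.jmx
 engineInstances: 10
 configurationFiles:
   - customers.csv      # example input file; header row removed
 splitAllCSVs: True     # each engine instance receives 1/10th of the rows
 ```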
To configure your load test to split input CSV files:
1. Save the YAML configuration file and commit it to your source control repository. The next time you run the test, Azure Load Testing splits and processes the CSV file evenly across the test engines.

## Next steps
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
Previously updated : 05/13/2022 Last updated : 05/09/2023 adobe-target: true
Azure Load Testing is a fully managed load-testing service that enables you to g
Quickly [create a load test for your web application by using a URL](./quickstart-create-and-run-load-test.md), and without prior knowledge of testing tools. Azure Load Testing abstracts the complexity and infrastructure to run your load test at scale.
-For Azure-based applications, Azure Load Testing collects detailed resource metrics to help you [identify performance bottlenecks](#identify-performance-bottlenecks-by-using-high-scale-load-tests) across your Azure application components.
+For more advanced load testing scenarios, you can [create a load test by reusing an existing test script for Apache JMeter](how-to-create-and-run-load-test-with-jmeter-script.md), a popular open-source load and performance testing tool. For example, your test plan might consist of multiple application requests, you might call non-HTTP endpoints, or you might use input data and parameters to make the test more dynamic.
+
+If your application is hosted on Azure, Azure Load Testing collects detailed resource metrics to help you [identify performance bottlenecks](#identify-performance-bottlenecks-by-using-high-scale-load-tests) across your Azure application components.
-You can [automate regression testing](#enable-automated-load-testing) by running load tests as part of your continuous integration and continuous deployment (CI/CD) workflow.
+To capture application performance regressions early, add your load test to your [continuous integration and continuous deployment (CI/CD) workflow](#enable-automated-load-testing). Use test fail criteria to define and validate your application quality requirements.
Azure Load Testing enables you to test private application endpoints or applications that you host on-premises. For more information, see the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).
-For more advanced load testing scenarios, you can [create a load test by reusing an existing Apache JMeter test script](how-to-create-and-run-load-test-with-jmeter-script.md), a popular open-source load and performance tool. For example, your test plan might consist of multiple application requests, you want to call non-HTTP endpoints, or you're using input data and parameters to make the test more dynamic.
+The following diagram shows an architecture overview of Azure Load Testing.
+> [!NOTE]
+> The overview image shows how Azure Load Testing uses Azure Monitor to capture metrics for app components. Learn more about the [supported Azure resource types](./resource-supported-azure-resource-types.md).
Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+## Usage scenarios
+
+Azure Load Testing uses Apache JMeter and supports a wide range of application types and communication protocols. The following list provides examples of supported application or endpoint types:
+
+- Web applications, using HTTP or HTTPS
+- REST APIs
+- Databases via JDBC
+- TCP-based endpoints
+
+By [using JMeter plugins](./how-to-use-jmeter-plugins.md) in your test script, you can load test more application types.
+
+With the quick test experience, you can [test a single URL-based HTTP endpoint](./quickstart-create-and-run-load-test.md). By [uploading a JMeter script](how-to-create-and-run-load-test-with-jmeter-script.md), you can use all JMeter-supported communication protocols.
+ ## Identify performance bottlenecks by using high-scale load tests Performance problems often remain undetected until an application is under load. You can start a high-scale load test in the Azure portal to learn sooner how your application behaves under stress. While the test is running, the Azure Load Testing dashboard provides a live update of the client and server-side metrics. After the load test finishes, you can use the dashboard to analyze the test results and identify performance bottlenecks. For Azure-hosted applications, the dashboard shows detailed resource metrics of the Azure application components. Get started with a tutorial to [identify performance bottlenecks for Azure-hosted applications](./tutorial-identify-bottlenecks-azure-portal.md).
-Azure Load Testing keeps a history of test runs and allows you to visually [compare multiple runs](./how-to-compare-multiple-test-runs.md) to detect performance regressions.
+Azure Load Testing keeps a history of test runs and allows you to visually [compare multiple runs](./how-to-compare-multiple-test-runs.md) to detect performance regressions over time.
You might also [download the test results](./how-to-export-test-results.md) for analysis in a third-party tool.
You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workfl
## How does Azure Load Testing work?
-Azure Load Testing test engines abstract the required infrastructure for [running a high-scale load test](./how-to-high-scale-load.md). The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. When you create a load test based on a URL, Azure Load Testing automatically generates a JMeter test script for you. To scale out the load test, you can configure the number of test engines.
- Azure Load Testing uses Apache JMeter version 5.5 for running load tests. You can use Apache JMeter plugins from https://jmeter-plugins.org or [upload your own plugin code](./how-to-use-jmeter-plugins.md). Azure Load Testing supports all communication protocols that JMeter supports. For example, to load test a database connection or message queue.
-The application can be hosted anywhere: in Azure, on-premises, or in other clouds. To load test services that have no public endpoint, [deploy Azure Load Testing in a virtual network](./how-to-test-private-endpoint.md).
+The Azure Load Testing test engines abstract the required infrastructure for [running a high-scale load test](./how-to-high-scale-load.md). Each test engine instance runs your JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. When you create a load test based on a URL (*quick test*), Azure Load Testing automatically generates a JMeter test script for you. To scale out the load test, you can configure the number of test engines.
+
+You can host the application under load anywhere: in Azure, on-premises, or in other clouds. To run a load test for services that have no public endpoint, [deploy Azure Load Testing in a virtual network](./how-to-test-private-endpoint.md).
-During the load test, the service collects the following resource metrics and displays them in a dashboard:
+During the load test, Azure Load Testing collects the following resource metrics and displays them in a dashboard:
- *Client-side metrics* give you details reported by the test engine. These details include the number of virtual users, the request response time, or the number of requests per second.
During the load test, the service collects the following resource metrics and di
Azure Load Testing automatically incorporates best practices for Azure networking to help make sure that your tests run securely and reliably. Load tests are automatically stopped if the application endpoints or Azure components start throttling requests.
-Data stored in your Azure Load Testing resource is automatically encrypted with keys managed by Microsoft (service-managed keys). This data includes, for example, your Apache JMeter script.
-> [!NOTE]
-> The overview image shows how Azure Load Testing uses Azure Monitor to capture metrics for app components. Learn more about the [supported Azure resource types](./resource-supported-azure-resource-types.md).
+The service automatically encrypts all data stored in your load testing resource with keys managed by Microsoft (service-managed keys). For example, this data includes your Apache JMeter script and configuration files. Alternatively, you can [configure the service to use customer-managed keys](./how-to-configure-customer-managed-keys.md).
## In-region data residency
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
To create a Load Testing resource:
## Create a load test
-Azure Load Testing enables you to quickly create a load test from the Azure portal. You'll specify the web application URL and the basic load testing parameters. Azure Load Testing abstracts the complexity of creating the load test script and provisioning the compute infrastructure.
+Azure Load Testing enables you to quickly create a load test from the Azure portal by specifying the target web application URL and the basic load testing parameters. The service abstracts the complexity of creating the load test script and provisioning the compute infrastructure.
+
+You can specify the target load with a quick test by using either of two options:
+
+- Virtual users: simulate a total number of virtual users for the specified load test duration.
+- Requests per second: simulate a total number of requests per second, based on an estimated response time.
+
+## [Virtual users](#tab/virtual-users)
+ 1. Go to the **Overview** page of your Azure Load Testing resource.
Azure Load Testing enables you to quickly create a load test from the Azure port
1. On the **Quickstart test** page, enter the **Test URL**. Enter the complete URL that you would like to run the test for. For example, `https://www.example.com/login`.
+
+1. Select the **Virtual users** load specification method.
1. (Optional) Update the **Number of virtual users** to the total number of virtual users.
Azure Load Testing enables you to quickly create a load test from the Azure port
1. Select **Run test** to create and start the load test.
- :::image type="content" source="media/quickstart-create-and-run-load-test/quickstart-test.png" alt-text="Screenshot that shows quickstart test page.":::
+ :::image type="content" source="media/quickstart-create-and-run-load-test/quickstart-test-virtual-users.png" alt-text="Screenshot that shows the quick test page in the Azure portal, highlighting the option for specifying virtual users.":::
+
+## [Requests per second (RPS)](#tab/rps)
+1. Go to the **Overview** page of your Azure Load Testing resource.
+
+1. On the **Get started** tab, select **Quick test**.
+
+ :::image type="content" source="media/quickstart-create-and-run-load-test/quick-test-resource-overview.png" alt-text="Screenshot that shows the quick test button on the resource overview page.":::
+
+1. On the **Quickstart test** page, enter the **Test URL**.
+
+ Enter the complete URL that you would like to run the test for. For example, `https://www.example.com/login`.
+
+1. Select the **Requests per second** load specification method.
+
+1. (Optional) Update the **Target Requests per second (RPS)** to the load that you want to generate.
+
+ The maximum load that the service can generate depends on the response time of the endpoint during the load test. Azure Load Testing uses the response time to provision multiple test engines and to configure the number of virtual users needed to generate the required load. The number of virtual users is calculated by using the formula: virtual users = (RPS * maximum response time in milliseconds) / 1000. For example, a target of 100 RPS with an estimated maximum response time of 500 milliseconds results in 50 virtual users.
+
+1. (Optional) Update the **Response time (milliseconds)** to the estimated response time of the endpoint.
+
+ The endpoint response time during the load test is expected to be higher than normal. Provide a value higher than the maximum observed response time for the endpoint.
+
+1. (Optional) Update the **Test duration** and **Ramp up time** for the test.
+
+1. Select **Run test** to create and start the load test.
+
+ :::image type="content" source="media/quickstart-create-and-run-load-test/quickstart-test-requests-per-second.png" alt-text="Screenshot that shows the quick test page in the Azure portal, highlighting the option for specifying requests per second.":::
+---

> [!NOTE]
> Azure Load Testing auto-generates an Apache JMeter script for your load test.
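The virtual-user sizing formula above can be sketched in Python. This is a minimal illustration of the arithmetic only; the function name and the rounding-up behavior are assumptions, not part of the Azure Load Testing service:

```python
import math

def required_virtual_users(target_rps: float, max_response_time_ms: float) -> int:
    """Estimate the virtual users needed to sustain a target RPS, given the
    maximum expected endpoint response time in milliseconds.
    Mirrors the formula: virtual users = (RPS * max response time) / 1000."""
    return math.ceil(target_rps * max_response_time_ms / 1000)

# 100 RPS against an endpoint that may take up to 500 ms per request:
print(required_virtual_users(100, 500))  # → 50
```

This also shows why overestimating the response time (as the next step advises) is safe: a higher response time only provisions more virtual users.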
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
ms.suite: integration Previously updated : 08/20/2022 Last updated : 05/11/2023 # Business continuity and disaster recovery for Azure Logic Apps
For this task, create a basic health-check logic app that performs these tasks:
To monitor the primary instance's health and call the health-check logic app, create a "watchdog" logic app in an *alternate location*. For example, you can set up the watchdog logic app so that if calling the health-check logic app fails, the watchdog can send an alert to your operations team so that they can investigate the failure and why the primary instance doesn't respond.

> [!IMPORTANT]
-> Make sure that your watchdog logic app is in a *location that differs from primary location*. If the
-> Logic Apps service in the primary location experiences problems, your watchdog logic app might not run.
+> Make sure that your watchdog logic app is in a *location that differs from primary location*. If Azure
+> Logic Apps in the primary location experiences problems, your watchdog logic app workflow might not run.
For this task, in the secondary location, create a watchdog logic app that performs these tasks:
For this task, in the secondary location, create a watchdog logic app that perfo
You can set the recurrence to a value that is below the tolerance level for your recovery time objective (RTO).
-1. Call the health-check logic app in the primary location by using the HTTP action, for example:
+1. Call the health-check logic app workflow in the primary location by using the HTTP action.
+
+You might also create a more sophisticated watchdog logic app that, after a number of failures, calls another logic app that automatically handles switching to the secondary location when the primary fails.
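The watchdog pattern described above — run a health check on a recurrence, count consecutive failures, and trigger a switchover after a threshold — can be sketched outside Logic Apps as plain Python. The function name and failure threshold are illustrative assumptions, not part of Azure Logic Apps:

```python
def watchdog_step(check_health, consecutive_failures, failure_threshold=3):
    """Run one watchdog iteration: call the health check, update the
    consecutive-failure count, and report whether failover should trigger.
    Returns (new_failure_count, should_failover)."""
    try:
        healthy = check_health()
    except Exception:
        healthy = False
    if healthy:
        return 0, False  # any success resets the failure count
    failures = consecutive_failures + 1
    return failures, failures >= failure_threshold

# Simulate a primary region that stops responding:
count, failover = 0, False
for healthy in [True, False, False, False]:
    count, failover = watchdog_step(lambda h=healthy: h, count)
print(count, failover)  # → 3 True
```

In the Logic Apps version, the loop is the Recurrence trigger, the health check is the HTTP action, and the failover branch calls the logic app that activates the secondary location.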
<a name="activate-secondary"></a>
logic-apps Logic Apps Create Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-azure-resource-manager-templates.md
These samples show how to create and deploy logic apps by using Azure Resource M
### Install PowerShell modules
-1. If you haven't already, install [Azure PowerShell](/powershell/azure/install-az-ps).
+1. If you haven't already, install [Azure PowerShell](/powershell/azure/install-azure-powershell).
1. For the easiest way to install the LogicAppTemplate module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/LogicAppTemplate), run this command:
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
Last updated 08/20/2022
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using [Azure PowerShell](/powershell/azure/install-az-ps). From PowerShell, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module.
+This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using [Azure PowerShell](/powershell/azure/install-azure-powershell). From PowerShell, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module.
> [!NOTE] >
If you're new to Azure Logic Apps, learn how to create your first Consumption lo
* An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The [Az PowerShell](/powershell/azure/install-az-ps) module installed on your local computer.
+* The [Az PowerShell](/powershell/azure/install-azure-powershell) module installed on your local computer.
* An [Azure resource group](#examplecreate-resource-group) in which to create your logic app.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
You can create three data asset types:
|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | |**Table**<br> Reference a data table | `mltable` | `TabularDataset` | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. This storage location meant that `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to AzureML* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. |
+> [!IMPORTANT]
+> If you're migrating your V1 datasets to V2 data assets, you must give the V2 data asset a name that differs from the name of the V1 dataset.
+>
+
## Supported paths

When you create an Azure Machine Learning data asset, you must specify a `path` parameter that points to the data asset location. Supported paths include:
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Compute instances can run jobs securely in a [virtual network environment](how-t
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). In the storage account, the "Allow storage account key access" option must be enabled for compute instance creation to be successful.
* The [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
You can create compute instance with managed identity from Azure Machine Learnin
1. Select **System-assigned** or **User-assigned** under **Identity type**. 1. If you selected **User-assigned**, select subscription and name of the identity.
-You can use V2 CLI to create compute instance with assign system-assigned managed identity:
+You can use SDK V2 to create a compute instance with a system-assigned managed identity:
+
+```python
+import os
+
+from azure.ai.ml import MLClient
+from azure.identity import ManagedIdentityCredential
+
+client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None)
+credential = ManagedIdentityCredential(client_id=client_id)
+# sub_id, rg_name, and ws_name are your subscription ID, resource group, and workspace name.
+ml_client = MLClient(credential, sub_id, rg_name, ws_name)
+data = ml_client.data.get(name=data_name, version="1")
+```
+
+You can also use SDK V1:
+
+```python
+import os
+
+from azureml.core.authentication import MsiAuthentication
+from azureml.core import Workspace
+
+client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None)
+auth = MsiAuthentication(identity_config={"client_id": client_id})
+# Replace the placeholder values with your workspace name, subscription ID, resource group, and region.
+workspace = Workspace.get("<workspace-name>", auth=auth, subscription_id="<subscription-id>", resource_group="<resource-group>", location="<region>")
+```
+
+You can use V2 CLI to create a compute instance with a system-assigned managed identity:
```azurecli az ml compute create --name myinstance --identity-type SystemAssigned --type ComputeInstance --resource-group my-resource-group --workspace-name my-workspace
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
When using an Azure Machine Learning workspace with a private endpoint, there ar
- Familiarity with [Azure Private DNS](../dns/private-dns-privatednszone.md) -- Optionally, [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps).
+- Optionally, [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell).
## Automated DNS server integration
machine-learning How To Manage Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-powershell.md
You can also manage workspaces [using the Azure CLI](how-to-manage-workspace-cli
## Prerequisites - An **Azure subscription**. If you don't have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).-- The [Azure PowerShell module](https://www.powershellgallery.com/packages/Az). To make sure you have the latest version, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+- The [Azure PowerShell module](https://www.powershellgallery.com/packages/Az). To make sure you have the latest version, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
> [!IMPORTANT] > While the **Az.MachineLearningServices** PowerShell module is in preview, you must install it
mariadb Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-powershell.md
specified in the storage section of the
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md
PowerShell. A subset of engine configurations is exposed at the server-level and
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
You can create and manage read replicas using PowerShell.
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed
locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md
the restart.
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-powershell.md
server.
To complete this how-to guide, you need: -- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-powershell.md)
mariadb Quickstart Create Mariadb Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT] > While the Az.MariaDb PowerShell module is in preview, you must install it separately from the Az
mariadb Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT] > While the Az.MariaDb PowerShell module is in preview, you must install it separately from the Az
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
The Azure Migrate VMware migration automation scripts are available for download
- [Complete the discovery tutorial](tutorial-discover-vmware.md) to prepare Azure and VMware for migration. - We recommend that you complete the second tutorial to [assess VMware VMs](./tutorial-assess-vmware-azure-vm.md) before migrating them to Azure.-- You must have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+- You must have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [guide to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Install Azure Migrate PowerShell module
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
Before you begin this tutorial, you should:
1. Complete the [Tutorial: Discover VMware VMs with Server Assessment](tutorial-discover-vmware.md) to prepare Azure and VMware for migration. 2. Complete the [Tutorial: Assess VMware VMs for migration to Azure VMs](./tutorial-assess-vmware-azure-vm.md) before migrating them to Azure.
-3. [Install the Az PowerShell module](/powershell/azure/install-az-ps)
+3. [Install the Az PowerShell module](/powershell/azure/install-azure-powershell)
## 2. Install Azure Migrate PowerShell module
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
Title: Read replicas - Azure Database for MySQL - Flexible Server
-description: 'Learn about read replicas in Azure Database for MySQL - Flexible Server: creating replicas, connecting to replicas, monitoring replication, and stopping replication.'
+description: "Learn about read replicas in Azure Database for MySQL Flexible Server: creating replicas, connecting to replicas, monitoring replication, and stopping replication."
+++ Last updated : 05/10/2023 -- Previously updated : 05/24/2022 # Read replicas in Azure Database for MySQL - Flexible Server
Last updated 05/24/2022
MySQL is one of the popular database engines for running internet-scale web and mobile applications. Many of our customers use it for their online education services, video streaming services, digital payment solutions, e-commerce platforms, gaming services, news portals, government, and healthcare websites. These services are required to serve and scale as the traffic on the web or mobile application increases.
-On the applications side, the application is typically developed in Java or PHP and migrated to run on Azure virtual machine scale sets or Azure App Services or are containerized to run on Azure Kubernetes Service (AKS). With virtual machine scale set, App Service or AKS as underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests but often, database ends up being a bottleneck as centralized stateful component.
+On the applications side, the application is typically developed in Java or PHP and migrated to run on Azure Virtual Machine Scale Sets or Azure App Service, or is containerized to run on Azure Kubernetes Service (AKS). With Virtual Machine Scale Sets, App Service, or AKS as the underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests, but the database often ends up being a bottleneck as a centralized stateful component.
The read replica feature allows you to replicate data from an Azure Database for MySQL - Flexible Server to a read-only server. You can replicate from the source server to up to **10** replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-Replicas are new servers that you manage similar to your source Azure Database for MySQL - Flexible Servers. You will incur billing charges for each read replica based on the provisioned compute in vCores and storage in GB/ month. For more information, see [pricing](./concepts-compute-storage.md#pricing).
+Replicas are new servers that you manage similarly to your source Azure Database for MySQL - Flexible Server. You'll incur billing charges for each read replica based on the provisioned compute in vCores and storage in GB/month. For more information, see [pricing](./concepts-compute-storage.md#pricing).
-> [!NOTE]
+> [!NOTE]
> The read replica feature is only available for Azure Database for MySQL - Flexible servers in the General Purpose or Business Critical pricing tiers. Ensure the source server is in one of these pricing tiers. To learn more about MySQL replication features and issues, see the [MySQL replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html).
-> [!NOTE]
+> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## Common use cases for read replica
The read replica feature helps to improve the performance and scale of read-inte
Common scenarios are:
-* Scaling read-workloads coming from the application by using lightweight connection proxy like [ProxySQL](https://aka.ms/ProxySQLLoadBalanceReplica) or using microservices-based pattern to scale out your read queries coming from the application to read replicas
-* BI or analytical reporting workloads can use read replicas as data source for reporting
-* For IoT or Manufacturing scenario where telemetry information is ingested into MySQL database engine while multiple read replicas are use for reporting of data
+- Scaling read workloads coming from the application by using a lightweight connection proxy like [ProxySQL](https://aka.ms/ProxySQLLoadBalanceReplica) or using a microservices-based pattern to scale out your read queries coming from the application to read replicas
+- BI or analytical reporting workloads can use read replicas as a data source for reporting
+- For IoT or manufacturing scenarios where telemetry information is ingested into the MySQL database engine while multiple read replicas are used for reporting of data
Because replicas are read-only, they don't directly reduce write-capacity burdens on the source. This feature isn't targeted at write-intensive workloads.
-The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
+The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There's a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
## Create a replica
-If a source server has no existing replica servers, the source will first restart to prepare itself for replication.
+If a source server has no existing replica servers, the source first restarts to prepare itself for replication.
When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours.
-> [!NOTE]
+> [!NOTE]
> Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group, same location, and same subscription as the source server. If you want to create a replica server in a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. We recommend that you keep the replica server's configuration at values equal to or greater than the source's to ensure that the replica can keep up with the source. Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md).

## Connect to a replica
-At creation, a replica inherits the connectivity method of the source server. You cannot change the connectivity method of the replica. For example if source server has **Private access (VNet Integration)** then replica cannot be in **Public access (allowed IP addresses)**.
+At creation, a replica inherits the connectivity method of the source server. You can't change the connectivity method of the replica. For example, if the source server has **Private access (VNet Integration)**, then the replica can't be in **Public access (allowed IP addresses)**.
The replica inherits the admin account from the source server. All user accounts on the source server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the source server.
Azure Database for MySQL - Flexible Server provides the **Replication lag in sec
If you see increased replication lag, refer to [troubleshooting replication latency](./../howto-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes.
->[!IMPORTANT]
->Read Replica on HA server uses storage based replication technology, which no longer uses 'SLAVE_IO_RUNNING' metric available in MySQL's 'SHOW SLAVE STATUS' command. The value of it will always be displayed as "No" and is not indicative of replication status. To know the correct status of replication, please refer to replication metrics - **Replica IO Status** and **Replica SQL Status** under monitoring blade.
+> [!IMPORTANT]
+> Read replica on an HA server uses storage-based replication technology, which no longer uses the 'SLAVE_IO_RUNNING' metric available in MySQL's 'SHOW SLAVE STATUS' command. This metric is always displayed as "No" and isn't indicative of replication status. To know the correct status of replication, refer to the replication metrics **Replica IO Status** and **Replica SQL Status** under the monitoring blade.
## Stop replication You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
-When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There is no automated failover between a source and its replica.
+When you choose to stop replication to a replica, it loses all links to its previous source and other replicas. There's no automated failover between a source and its replica.
-> [!IMPORTANT]
->The standalone server can't be made into a replica again.
+> [!IMPORTANT]
+> The standalone server can't be made into a replica again.
> Before you stop replication on a read replica, ensure the replica has all the data that you require. Learn how to [stop replication to a replica](how-to-read-replicas-portal.md). ## Failover
-There is no automated failover between source and replica servers.
+There's no automated failover between source and replica servers.
-Read replicas is meant for scaling of read intensive workloads and is not designed to meet high availability needs of a server. Stopping the replication on read replica to bring it online in read write mode is the means by which this manual failover is performed.
+The read replica feature is meant for scaling read-intensive workloads and isn't designed to meet the high-availability needs of a server. Stopping replication on a read replica to bring it online in read-write mode is how this manual failover is performed.
-Since replication is asynchronous, there is lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
+Since replication is asynchronous, there's lag between the source and the replica. The amount of lag can be influenced by many factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
-> [!Tip]
-> If you failover to the replica, the lag at the time you delink the replica from the source will indicate how much data is lost.
+> [!TIP]
+> If you failover to the replica, the lag at the time you delink the replica from the source indicates how much data is lost.
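The guidance above — observe your replica lag over time, establish an average, and alert when lag leaves the expected range — can be sketched with a small helper. The policy (alert when current lag exceeds a multiple of the observed average) is an illustrative assumption, not a service API:

```python
from statistics import mean

def lag_alert(lag_samples_seconds, current_lag_seconds, tolerance=2.0):
    """Alert when the current replica lag exceeds the observed average
    lag by more than `tolerance` times (illustrative alerting policy)."""
    baseline = mean(lag_samples_seconds)
    return current_lag_seconds > baseline * tolerance

# Average observed lag is 5 s, so a 30 s lag is well outside the expected range:
print(lag_alert([4, 5, 6], 30))  # → True
```

In practice you'd drive this from the *Replica Lag* metric via an Azure Monitor alert rather than custom code; the sketch only makes the threshold logic concrete.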
-After you have decided you want to failover to a replica:
+After you've decided you want to fail over to a replica:
1. Stop replication to the replica<br/>
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
+ This step is necessary to make the replica server able to accept writes. As part of this process, the replica server is delinked from the source. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-2. Point your application to the (former) replica<br/>
+1. Point your application to the (former) replica<br/>
Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
-After your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
+After your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences depends on when you detect an issue and complete steps 1 and 2 above.
## Global transaction identifier (GTID)
Global transaction identifier (GTID) is a unique identifier created with each co
The following server parameters are available for configuring GTID:
-|**Server parameter**|**Description**|**Default Value**|**Values**|
-|--|--|--|--|
-|`gtid_mode`|Indicates if GTIDs are used to identify transactions. Changes between modes can only be done one step at a time in ascending order (ex. `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`)|`OFF*`|`OFF`: Both new and replication transactions must be anonymous <br> `OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions. <br> `ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions. <br> `ON`: Both new and replicated transactions must be GTID transactions.|
-|`enforce_gtid_consistency`|Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. |`OFF*`|`OFF`: All transactions are allowed to violate GTID consistency. <br> `ON`: No transaction is allowed to violate GTID consistency. <br> `WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |
+| **Server parameter** | **Description** | **Default Value** | **Values** |
+| --- | --- | --- | --- |
+| `gtid_mode` | Indicates whether GTIDs are used to identify transactions. Changes between modes can only be made one step at a time in ascending order (for example, `OFF` -> `OFF_PERMISSIVE` -> `ON_PERMISSIVE` -> `ON`) | `OFF*` | `OFF`: Both new and replicated transactions must be anonymous<br />`OFF_PERMISSIVE`: New transactions are anonymous. Replicated transactions can either be anonymous or GTID transactions.<br />`ON_PERMISSIVE`: New transactions are GTID transactions. Replicated transactions can either be anonymous or GTID transactions.<br />`ON`: Both new and replicated transactions must be GTID transactions. |
+| `enforce_gtid_consistency` | Enforces GTID consistency by allowing execution of only those statements that can be logged in a transactionally safe manner. This value must be set to `ON` before enabling GTID replication. | `OFF*` | `OFF`: All transactions are allowed to violate GTID consistency.<br />`ON`: No transaction is allowed to violate GTID consistency.<br />`WARN`: All transactions are allowed to violate GTID consistency, but a warning is generated. |
**For Azure Database for MySQL - Flexible Server instances with the High Availability feature enabled, the default value is set to `ON`.*
-> [!NOTE]
->
-> * After GTID is enabled, you cannot turn it back off. If you need to turn GTID OFF, please contact support.
->
-> * To change GTID's from one value to another can only be one step at a time in ascending order of modes. For example, if gtid_mode is currently set to OFF_PERMISSIVE, it is possible to change to ON_PERMISSIVE but not to ON.
->
-> * To keep replication consistent, you cannot update it for a master/replica server.
->
-> * Recommended to SET enforce_gtid_consistency to ON before you can set gtid_mode=ON
+> [!NOTE]
+>
+> - After GTID is enabled, you can't turn it off again. If you need to turn GTID off, contact support.
+>
+> - `gtid_mode` can be changed only one step at a time, in ascending order of modes. For example, if `gtid_mode` is currently set to `OFF_PERMISSIVE`, it can be changed to `ON_PERMISSIVE` but not directly to `ON`.
+>
+> - To keep replication consistent, you can't update `gtid_mode` for a source/replica server pair.
+>
+> - We recommend setting `enforce_gtid_consistency` to `ON` before setting `gtid_mode` to `ON`.
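As an illustrative sketch of the one-step rule above (shown here for a self-managed MySQL server where you can set these globals directly; on a Flexible Server you change the corresponding server parameters instead), walking `gtid_mode` up from `OFF` to `ON` looks like:

```sql
-- Enable consistency checks first, then raise gtid_mode one step at a time.
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
-- Ensure all anonymous transactions have replicated everywhere before the final step.
SET GLOBAL gtid_mode = ON;
```

Attempting to jump directly from `OFF` to `ON` fails with an error, which is why the intermediate permissive modes exist.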
To enable GTID and configure the consistency behavior, update the `gtid_mode` and `enforce_gtid_consistency` server parameters using the [Azure portal](how-to-configure-server-parameters-portal.md), [Azure CLI](how-to-configure-server-parameters-cli.md).
-If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID replication. In order to make sure that the replication is consistent, `gtid_mode` cannot be changed once the master or replica server(s) is created with GTID enabled.
+If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas also have GTID enabled and use GTID replication. To make sure that replication is consistent, `gtid_mode` can't be changed once the source or replica server(s) are created with GTID enabled.
## Considerations and limitations

| Scenario | Limitation/Consideration |
-|:-|:-|
-| Replica on server in Burstable Pricing Tier| Not supported |
-| Cross region read replication | Not supported |
+| :- | :- |
+| Replica on server in Burstable Pricing Tier | Not supported |
| Pricing | The cost of running the replica server is based on the region where the replica server is running |
| Source server restart | When you create a replica for a source that has no existing replicas, the source first restarts to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period |
| New replicas | A read replica is created as a new Azure Database for MySQL - Flexible Server. An existing server can't be made into a replica. You can't create a replica of another read replica |
| Deleted source and standalone servers | When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted. |
| User accounts | Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server. |
| Server parameters | To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. <br> The following server parameters are locked on both the source and replica servers:<br> - [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) <br> - [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) <br> The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. <br> To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas. |
-|Session level parameters | When configuring session level parameters such as ΓÇÿforeign_keys_checksΓÇÖ on the read replica, ensure the parameter values being set on the read replica are consistent with that of the source server.|
-|Adding AUTO_INCREMENT Primary Key column to the existing table in the source server.|We donΓÇÖt recommend altering table with AUTO_INCREMENT post read replica creation, as it breaks the replication. But in case you would like to add the auto increment column post creating a replica server. We recommend these two approaches: <br> - Create a new table with the same schema of table you want to modify. In the new table alter the column with AUTO_INCREMENT and then from the original table restore the data. Drop old table and rename it in the source, this doesnΓÇÖt need us to delete the replica server but may need large insert cost to creating backup table. <br> - The other quicker method is to recreate the replica after adding all auto increment columns.|
-| Other | - Creating a replica of a replica is not supported. <br> - In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information. <br>- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.<br>- Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html) |
+| Session level parameters | When configuring session level parameters such as `foreign_key_checks` on the read replica, ensure the parameter values being set on the read replica are consistent with those of the source server. |
+| Adding an AUTO_INCREMENT primary key column to an existing table on the source server | We don't recommend altering a table to add an AUTO_INCREMENT column after read replica creation, as it breaks replication. If you need to add an auto increment column after creating a replica server, we recommend one of these two approaches:<br />- Create a new table with the same schema as the table you want to modify. In the new table, alter the column with AUTO_INCREMENT, and then restore the data from the original table. Drop the old table and rename the new one on the source. This doesn't require deleting the replica server, but it may incur a large insert cost to create the backup table.<br />- The quicker method is to recreate the replica after adding all auto increment columns. |
+| Other | - Creating a replica of a replica isn't supported.<br />- In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information.<br />- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.<br />- Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html) |
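As a rough sketch of the first AUTO_INCREMENT workaround described in the table above (table and column names here are hypothetical; adapt them to your schema):

```sql
-- Hypothetical example: add an AUTO_INCREMENT primary key to `orders`
-- without recreating the replica.
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN id INT AUTO_INCREMENT PRIMARY KEY FIRST;
-- Copy the data from the original table (list your real columns here).
INSERT INTO orders_new (customer_id, total) SELECT customer_id, total FROM orders;
-- Swap the tables atomically, then drop the old copy.
RENAME TABLE orders TO orders_old, orders_new TO orders;
DROP TABLE orders_old;
```

The `INSERT ... SELECT` step is what can be expensive on large tables, which is why the table notes the alternative of simply recreating the replica after the schema change.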
## Next steps
-* Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md)
-* Learn how to [create and manage read replicas using the Azure CLI](how-to-read-replicas-cli.md)
+- Learn how to [create and manage read replicas using the Azure portal](how-to-read-replicas-portal.md)
+- Learn how to [create and manage read replicas using the Azure CLI](how-to-read-replicas-cli.md)
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
description: This article describes how to set up Data-in replication for Azure
Previously updated : 05/03/2023 Last updated : 12/30/2022
> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-To create a replica in the Azure Database for MySQL Flexible service, [Data-in replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in replication is based on the binary log (binlog) file position-based. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+To create a replica in the Azure Database for MySQL Flexible service, [Data-in replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in replication can be configured using either binary log (binlog) file position-based replication or GTID-based replication. To learn more about binlog replication, see [MySQL Replication](https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html).
Review the [limitations and requirements](concepts-data-in-replication.md#limitations-and-considerations) of Data-in replication before performing the steps in this article.
## Configure the source MySQL server
-The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in replication. This server is the "source" for Data-in replication.
+The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or in a database service hosted by another cloud provider for Data-in replication. This server is the "source" for Data-in replication.
1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.

1. Networking requirements:
- * Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, the DNS is publicly accessible, or that it has a fully qualified domain name (FQDN).
- * If private access is in use, make sure that you have connectivity between Source server and the Vnet in which the replica server is hosted.
- * Make sure we provide site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../../expressroute/expressroute-introduction.md) or [VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
- * If private access is used in replica server and your source is Azure VM make sure that VNet to VNet connectivity is established. VNet-Vnet peering is supported. You can also use other connectivity methods to communicate between VNets across different regions like VNet to VNet Connection. For more information you can, see [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)
- * Ensure that your virtual network Network Security Group rules don't block the outbound port 3306 (Also inbound if the MySQL is running on Azure VM). For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md).
- * Configure your source server's firewall rules to allow the replica server IP address.
   - Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, a publicly accessible DNS name, or a fully qualified domain name (FQDN).
-1. Turn on binary logging.
   - If private access is in use, make sure that you have connectivity between the source server and the virtual network in which the replica server is hosted.
+
+ - Provide site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../../expressroute/expressroute-introduction.md) or [VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network documentation](../../virtual-network/index.yml), especially the quickstart articles with step-by-step details.
+
+ - If private access is used on the replica server and your source is an Azure VM, make sure that VNet-to-VNet connectivity is established. VNet-to-VNet peering is supported. You can also use other connectivity methods to communicate between VNets across different regions, such as a VNet-to-VNet connection. For more information, see [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+
+ - Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 3306 (and inbound traffic as well, if MySQL is running on an Azure VM). For more detail on virtual network NSG traffic filtering, see [Filter network traffic with network security groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md).
+
+ - Configure your source server's firewall rules to allow the replica server IP address.
+
+1. Follow the appropriate steps based on whether you want to use bin-log position-based or GTID-based Data-in replication.
+
+ #### [Bin-log position-based replication](#tab/bash)
Check to see if binary logging has been enabled on the source by running the following command:

```sql
SHOW VARIABLES LIKE 'log_bin';
```
+
If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
+
If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), follow these steps:
+
1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
+
1. Open the configuration file to edit it and locate **mysqld** section in the file.
+
1. In the mysqld section, add the following line:
- ```config
+
+   ```ini
   log-bin=mysql-bin.log
   ```
- 1. Restart the MySQL service on source server (or Restart) for the changes to take effect.
- 1. After the server is restarted, verify that binary logging is enabled by running the same query as before:
-
+
+ 1. Restart the MySQL service on the source server for the changes to take effect.
+
+ 1. After the server is restarted, verify that binary logging is enabled by running the same query as before:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+ #### [GTID based replication](#tab/shell)
+
+ The source (master) server needs to be started with GTID mode enabled by setting the `gtid_mode` variable to ON. It's also essential to enable the `enforce_gtid_consistency` variable to make sure that only statements that are safe for MySQL GTID replication are logged.
+
+ ```sql
+ SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;
+
+ SET @@GLOBAL.GTID_MODE = ON;
+ ```
+
+ If the master server is another Azure Database for MySQL Flexible Server, these server parameters can also be updated from the portal by navigating to the server parameters page.
+
+
+1. Configure the source server settings.
+
+ Data-in replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL - Flexible Server.
+
```sql
- SHOW VARIABLES LIKE 'log_bin';
+ SET GLOBAL lower_case_table_names = 1;
    ```
-1. Configure the source server settings.
-
- Data-in replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL - Flexible Server.
-
- ```sql
- SET GLOBAL lower_case_table_names = 1;
- ```
-
-1. Create a new replication role and set up permission.
-
- Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
-
- In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
+
+5. Create a new replication role and set up permission.
+
+ Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
+
+ In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
#### [SQL Command](#tab/command-line)
SET GLOBAL read_only = ON;
1. Get binary log file name and offset.
-Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
-
-```sql
- show master status;
-```
+ Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
+
+ ```sql
+ show master status;
+ ```
The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
> [!NOTE]
> If you want to avoid setting the database to read only when you dump and restore, you can use [mydumper/myloader](../concepts-migrate-mydumper-myloader.md).
+## Retrieve GTID information from the source server dump
+
+1. Skip this step if you're using bin-log position-based replication.
+
+2. GTID information from the dump file taken from the source is required to reset GTID history of the target (replica) server.
+
+3. GTID information from the source server can be retrieved using the following statement:
+
+ ```sql
    show global variables like 'gtid_executed';
+ UNLOCK TABLES;
+ ```
+4. Use this GTID information from the source to execute GTID reset on the replica server using the following CLI command:
+
+ ```azurecli-interactive
    az mysql flexible-server gtid reset --resource-group <resource group> --server-name <replica server name> --gtid-set <gtid set from the source server> --subscription <subscription id>
+ ```
+
    For more information, see [GTID Reset](/cli/azure/mysql/flexible-server/gtid).
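With hypothetical resource names and GTID set filled in, the reset command might look like the following (all values are placeholders, not real resources):

```azurecli-interactive
az mysql flexible-server gtid reset \
    --resource-group myresourcegroup \
    --server-name myreplicaserver \
    --gtid-set "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77" \
    --subscription 00000000-0000-0000-0000-000000000000
```

The GTID set string is the `gtid_executed` value captured from the source server dump in the previous step.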
+
## Link source and replica servers to start Data-in replication

1. Set the source server.

   All Data-in replication functions are done by stored procedures. You can find all procedures at [Data-in replication Stored Procedures](../reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
- To link two servers and start replication, login to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
+To link two servers and start replication, log in to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
```sql
CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
```
+ For GTID-based replication, use the `mysql.az_replication_change_master_with_gtid` stored procedure instead (no binary log file name or position is needed, since GTIDs identify the replication position):
+
+ ```sql
+ CALL mysql.az_replication_change_master_with_gtid('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_ssl_ca>');
+ ```
+ - master_host: hostname of the source server
+ - master_user: username for the source server
+ - master_password: password for the source server
The results should appear similar to the following. Make sure to note the binary
- master_log_pos: binary log position from running `show master status`
- master_ssl_ca: CA certificate's context. If not using SSL, pass in an empty string.
- It's recommended to pass this parameter in as a variable. For more information, visit the following examples.
    We recommend passing this parameter in as a variable. For more information, see the following examples.
> [!NOTE]
- > * If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-portal.md).
- > * If you used mydumper/myloader to dump the database then you can get the master_log_file and master_log_pos from the */backup/metadata* file.
+ > - If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-portal.md).
+ > - If you used mydumper/myloader to dump the database then you can get the master_log_file and master_log_pos from the */backup/metadata* file.
**Examples**
The results should appear similar to the following. Make sure to note the binary
The variable `@cert` is created by running the following MySQL commands:
- ```sql
- SET @cert = '--BEGIN CERTIFICATE--
- PLACE YOUR PUBLIC KEY CERTIFICATE'`S CONTEXT HERE
- --END CERTIFICATE--'
- ```
+ ```sql
+ SET @cert = '--BEGIN CERTIFICATE--
+ PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE
+ --END CERTIFICATE--'
+ ```
Replication with SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL - Flexible Server. This stored procedure is run on the replica.
- ```sql
- CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
- ```
+ ```sql
+ CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
+ ```
*Replication without SSL*

Replication without SSL is set up between a source server hosted in the domain "companya.com" and a replica server hosted in Azure Database for MySQL - Flexible Server. This stored procedure is run on the replica.
- ```sql
- CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
- ```
+ ```sql
+ CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
+ ```
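Once the servers are linked, you can verify replication health from the replica. On MySQL 5.7 this is typically done with `SHOW SLAVE STATUS` (the term comes from the MySQL command itself); the columns to watch are `Slave_IO_Running`, `Slave_SQL_Running`, and `Seconds_Behind_Master`:

```sql
SHOW SLAVE STATUS\G
```

Both `Slave_IO_Running` and `Slave_SQL_Running` should report `Yes` once replication is flowing.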
1. Start replication.
To stop replication between the source and replica server, use the following stored procedure:
-```sql
-CALL mysql.az_replication_stop;
-```
+ ```sql
+ CALL mysql.az_replication_stop;
+ ```
### Remove replication relationship

To remove the relationship between source and replica server, use the following stored procedure:
-```sql
-CALL mysql.az_replication_remove_master;
-```
+ ```sql
+ CALL mysql.az_replication_remove_master;
+ ```
### Skip replication error

To skip a replication error and allow replication to continue, use the following stored procedure:
-```sql
-CALL mysql.az_replication_skip_counter;
-```
+ ```sql
+ CALL mysql.az_replication_skip_counter;
+ ```
-```sql
-SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos][LIMIT [offset,] row_count]
-```
+ ```sql
+ SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos][LIMIT [offset,] row_count]
+ ```
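For example, to inspect the first few events in a specific binary log file (the file name here is hypothetical; use the one reported by `show master status`):

```sql
SHOW BINLOG EVENTS IN 'mysql-bin.000002' FROM 4 LIMIT 10;
```

This lets you identify the failing statement around the replica's current position before deciding to skip it.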
:::image type="content" source="./media/how-to-data-in-replication/show-binary-log.png" alt-text="Show binary log results":::

## Next steps

- Learn more about [Data-in replication](concepts-data-in-replication.md) for Azure Database for MySQL - Flexible Server.
+
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MySQL - Flexible Server description: Learn how to set up and manage read replicas in Azure Database for MySQL - Flexible Server using the Azure portal.+++ Last updated : 05/10/2023 -- Previously updated : 06/17/2021

# How to create and manage read replicas in Azure Database for MySQL - Flexible Server using the Azure portal

[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL - Flexible Server using the Azure portal.
+In this article, you'll learn how to create and manage read replicas in the Azure Database for MySQL - Flexible Server using the Azure portal.
-> [!Note]
+> [!NOTE]
>
-> * If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID based replication. To learn more refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid)
+> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas also have GTID enabled and use GTID-based replication. To learn more, see [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid)
## Prerequisites

-- An [Azure Database for MySQL server Flexible Server](quickstart-create-server-portal.md) that will be used as the source server.
+- An [Azure Database for MySQL - Flexible Server](quickstart-create-server-portal.md) instance to use as the source server.
## Create a read replica

> [!IMPORTANT]
->When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+>When you create a replica for a source that has no existing replicas, the source first restarts to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
A read replica server can be created using the following steps:
-1. Sign into the [Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select the existing Azure Database for MySQL - Flexible Server that you want to use as a source. This action opens the **Overview** page.
-3. Select **Replication** from the menu, under **SETTINGS**.
+1. Select **Replication** from the menu, under **SETTINGS**.
-4. Select **Add Replica**.
+1. Select **Add Replica**.
- :::image type="content" source="./media/how-to-read-replica-portal/add-replica.png" alt-text="Azure Database for MySQL - Replication":::
+ :::image type="content" source="./media/how-to-read-replica-portal/add-replica.png" alt-text="Screenshot of adding a replica." lightbox="./media/how-to-read-replica-portal/add-replica.png":::
-5. Enter a name for the replica server. If your region support Availability Zones, you can select Availability zone of your choice.
+1. Enter a name for the replica server. If your region supports Availability Zones, you can select the Availability zone of your choice.
- :::image type="content" source="./media/how-to-read-replica-portal/replica-name.png" alt-text="Azure Database for MySQL - Replica name":::
+ :::image type="content" source="./media/how-to-read-replica-portal/replica-name.png" alt-text="Screenshot of adding a replica name." lightbox="./media/how-to-read-replica-portal/replica-name.png":::
-6. Select **OK** to confirm creation of the replica.
+1. Enter a location, depending on whether you want to create an in-region or cross-region read replica.
-> [!NOTE]
-> Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group, same location and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. It is recommended that the replica server's configuration should be kept at equal or greater values than the source to ensure the replica is able to keep up with the source.
+ :::image type="content" source="media/how-to-read-replica-portal/select-cross-region.png" alt-text="Screenshot of selecting a cross region.":::
+
+1. Select **OK** to confirm the creation of the replica.
+
+> [!NOTE]
+> Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group and the same subscription as the source server. If you want to create a replica server in a different resource group or subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. We recommend keeping the replica server's configuration at equal or greater values than the source to ensure the replica can keep up with the source.
-Once the replica server has been created, it can be viewed from the **Replication** blade.
+Once the replica server has been created, it can be viewed from the **Replication** page.
- [:::image type="content" source="./media/how-to-read-replica-portal/list-replica.png" alt-text="Azure Database for MySQL - List replicas":::](./media/how-to-read-replica-portal/list-replica.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/list-replica.png" alt-text="Screenshot of a list of replicas." lightbox="./media/how-to-read-replica-portal/list-replica.png":::
## Stop replication to a replica server
-> [!IMPORTANT]
->Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server and now supports both read and writes. This server cannot be made into a replica again.
+> [!IMPORTANT]
+> Stopping replication to a server is irreversible. Once replication has stopped between a source and replica, it cannot be undone. The replica server then becomes a standalone server that supports both read and write operations. This server cannot be made into a replica again.
To stop replication between a source and a replica server from the Azure portal, use the following steps: 1. In the Azure portal, select your source Azure Database for MySQL - Flexible Server.
-2. Select **Replication** from the menu, under **SETTINGS**.
+1. Select **Replication** from the menu, under **SETTINGS**.
-3. Select the replica server you wish to stop replication for.
+1. Select the replica server you wish to stop replication for.
- [:::image type="content" source="./media/how-to-read-replica-portal/stop-replication-select.png" alt-text="Azure Database for MySQL - Stop replication select server":::](./media/how-to-read-replica-portal/stop-replication-select.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/list-replica.png" alt-text="Screenshot of a list of replicas." lightbox="./media/how-to-read-replica-portal/list-replica.png":::
-4. Select **Stop replication**.
+1. Select **Promote**. The Promote action stops replication and converts the replica into an independent, standalone read-write server.
- [:::image type="content" source="./media/how-to-read-replica-portal/stop-replication.png" alt-text="Azure Database for MySQL - Stop replication":::](./media/how-to-read-replica-portal/stop-replication.png#lightbox)
+ :::image type="content" source="media/how-to-read-replica-portal/promote-action.png" alt-text="Screenshot of selecting promote." lightbox="media/how-to-read-replica-portal/promote-action.png":::
-5. Confirm you want to stop replication by clicking **OK**.
+1. Confirm you want to stop replication by selecting **Promote**.
- [:::image type="content" source="./media/how-to-read-replica-portal/stop-replication-confirm.png" alt-text="Azure Database for MySQL - Stop replication confirm":::](./media/how-to-read-replica-portal/stop-replication-confirm.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/stop-replication-confirm.png" alt-text="Screenshot of stopping replication by selecting promote." lightbox="./media/how-to-read-replica-portal/stop-replication-confirm.png":::
## Delete a replica server
To delete a read replica server from the Azure portal, use the following steps:
1. In the Azure portal, select your source Azure Database for MySQL - Flexible Server.
-2. Select **Replication** from the menu, under **SETTINGS**.
+1. Select **Replication** from the menu, under **SETTINGS**.
-3. Select the replica server you wish to delete.
+1. Select the replica server you wish to delete.
- [:::image type="content" source="./media/how-to-read-replica-portal/delete-replica-select.png" alt-text="Azure Database for MySQL - Delete replica select server":::](./media/how-to-read-replica-portal/delete-replica-select.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/delete-replica-select.png" alt-text="Screenshot of deleting a selected server replica." lightbox="./media/how-to-read-replica-portal/delete-replica-select.png":::
-4. Select **Delete replica**
+1. Select **Delete replica**.
- :::image type="content" source="./media/how-to-read-replica-portal/delete-replica.png" alt-text="Azure Database for MySQL - Delete replica":::
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-replica.png" alt-text="Screenshot of deleting a replica." lightbox="./media/how-to-read-replica-portal/delete-replica.png":::
-5. Type the name of the replica and click **Delete** to confirm deletion of the replica.
+1. Type the name of the replica and select **Delete** to confirm the deletion of the replica.
- :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-confirm.png" alt-text="Azure Database for MySQL - Delete replica confirm":::
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-replica-confirm.png" alt-text="Screenshot of confirmation of deleting a replica." lightbox="./media/how-to-read-replica-portal/delete-replica-confirm.png":::
## Delete a source server
-> [!IMPORTANT]
->Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both read and writes.
+> [!IMPORTANT]
+> Deleting a source server stops replication to all replica servers and deletes the source server itself. Replica servers become standalone servers that now support both read and write operations.
To delete a source server from the Azure portal, use the following steps: 1. In the Azure portal, select your source Azure Database for MySQL - Flexible Server.
-2. From the **Overview**, select **Delete**.
+1. From the **Overview**, select **Delete**.
- [:::image type="content" source="./media/how-to-read-replica-portal/delete-master-overview.png" alt-text="Azure Database for MySQL - Delete source":::](./media/how-to-read-replica-portal/delete-master-overview.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/delete-master-overview.png" alt-text="Screenshot of deleting the source." lightbox="./media/how-to-read-replica-portal/delete-master-overview.png":::
-3. Type the name of the source server and click **Delete** to confirm deletion of the source server.
+1. Type the name of the source server and select **Delete** to confirm the deletion of the source server.
- :::image type="content" source="./media/how-to-read-replica-portal/delete-master-confirm.png" alt-text="Azure Database for MySQL - Delete source confirm":::
+ :::image type="content" source="./media/how-to-read-replica-portal/delete-master-confirm.png" alt-text="Screenshot of confirming deletion of the source.":::
## Monitor replication 1. In the [Azure portal](https://portal.azure.com/), select the replica Azure Database for MySQL - Flexible Server you want to monitor.
-2. Under the **Monitoring** section of the sidebar, select **Metrics**:
+1. Under the **Monitoring** section of the sidebar, select **Metrics**:
-3. Select **Replication lag in seconds** from the dropdown list of available metrics.
+1. Select **Replication lag in seconds** from the dropdown list of available metrics.
- [:::image type="content" source="./media/how-to-read-replica-portal/monitor-select-replication-lag.png" alt-text="Select Replication lag":::](./media/how-to-read-replica-portal/monitor-select-replication-lag.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/monitor-select-replication-lag.png" alt-text="Screenshot of selecting the replication lag." lightbox="./media/how-to-read-replica-portal/monitor-select-replication-lag.png":::
-4. Select the time range you wish to view. The image below selects a 30 minute time range.
+1. Select the time range you wish to view. The image below selects a 30-minute time range.
- [:::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range.png" alt-text="Select time range":::](./media/how-to-read-replica-portal/monitor-replication-lag-time-range.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range.png" alt-text="Screenshot of selecting time range." lightbox="./media/how-to-read-replica-portal/monitor-replication-lag-time-range.png":::
-5. View the replication lag for the selected time range. The image below displays the last 30 minutes.
+1. View the replication lag for the selected time range. The image below displays the last 30 minutes.
- [:::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins.png" alt-text="Select time range 30 minutes":::](./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins.png#lightbox)
+ [:::image type="content" source="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins.png" alt-text="Screenshot of selecting time range 30 minutes." lightbox="./media/how-to-read-replica-portal/monitor-replication-lag-time-range-thirty-mins.png":::
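The same metric can also be queried from a script. The following is a sketch only, assuming a signed-in Azure PowerShell session with the Az.Monitor module; the resource group, server name, and the `replication_lag` metric name are placeholders to verify against your environment's metric list:

```azurepowershell
# Sketch: query the replica's replication lag for the last 30 minutes.
# Resource group and server names are placeholders; the metric name
# "replication_lag" is an assumption to confirm in the portal metric list.
$replica = Get-AzResource -ResourceGroupName "myresourcegroup" `
    -ResourceType "Microsoft.DBforMySQL/flexibleServers" -Name "myreplica"

Get-AzMetric -ResourceId $replica.ResourceId -MetricName "replication_lag" `
    -StartTime (Get-Date).AddMinutes(-30) -EndTime (Get-Date) `
    -TimeGrain 00:01:00
```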
## Next steps - Learn more about [read replicas](concepts-read-replicas.md) - You can also monitor the replication latency by following the steps mentioned [here](../single-server/how-to-troubleshoot-replication-latency.md#monitoring-replication-latency).
-- To troubleshoot high replication latency observed in Metrics, visit the [link](../single-server/how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency).
+- To troubleshoot high replication latency observed in Metrics, visit the [link](../single-server/how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency).
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
This feature enables customers to perform in-place upgrades of their MySQL 5.7 s
## Prerequisites - Read Replicas with MySQL version 5.7 should be upgraded before Primary Server for replication to be compatible between different MySQL versions, read more on [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html).
-- Before you upgrade your production servers, we strongly recommend you to test your application compatibility and verify your database compatibility with features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.
+- Before you upgrade your production servers, we **strongly recommend** that you use the official Oracle [MySQL Upgrade checker tool](https://go.microsoft.com/fwlink/?linkid=2230525) to test your database schema compatibility and perform the necessary regression tests to verify application compatibility with features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.
- Trigger [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform major version upgrade on your production server, which can be used to [rollback to version 5.7](./how-to-restore-server-portal.md) from the full on-demand backup taken.
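As a sketch of the compatibility check described above, the upgrade checker can be invoked from MySQL Shell. The server address and user are placeholders, and the option values should be verified against your `mysqlsh` version:

```azurepowershell
# Sketch: run the MySQL Shell upgrade checker against a 5.7 server
# before upgrading. Host and user below are placeholders.
mysqlsh -- util check-for-server-upgrade `
    'myadmin@myserver.mysql.database.azure.com:3306' `
    --target-version=8.0 --output-format=JSON
```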
To perform a major version upgrade of an Azure Database for MySQL 5.7 server to
The server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPs provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend to first perform upgrade on restored copy of the server.
-- **When will this upgrade feature be GA?**
-
- GA of this feature will be planned by December 2022. However, the feature is production ready and fully supported by Azure so you should run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server so you can estimate the downtime during upgrade, and perform application compatibility test before you run it on production.
- - **What happens to my backups after upgrade?** All backups (automated/on-demand) taken before major version upgrade, when used for restoration will always restore to a server with older version (5.7). All the backups (automated/on-demand) taken after major version upgrade will restore to server with upgraded version (8.0). It's highly recommended to take on-demand backup before you perform the major version upgrade for an easy rollback.
+- **I'm currently using Burstable SKU, does Microsoft plan to support major version upgrade for this SKU in the future?**
+
+ Burstable SKU is not able to support major version upgrade due to the performance limitations of this SKU. Microsoft is still working on a way to make this SKU available for major version upgrade.
+
+ If you need to perform a major version upgrade on your Azure MySQL Flexible Server and are currently using Burstable SKU, one temporary solution would be to upgrade to General Purpose or Business Critical SKU, perform the upgrade, and then switch back to Burstable SKU.
+
+ Note that upgrading to a higher SKU involves a change in pricing and may result in increased costs for your deployment. However, since the upgrade process is not expected to take long, the additional cost should not be significant.
+
+ ## Next steps - Learn more about [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL - Flexible Server.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## May 2023
+
+- **Read-Replica in Geo-Paired Region on Azure Database for MySQL- Flexible Server**
+
+ Azure Database for MySQL - Flexible Server now supports cross-region read replicas in a geo-paired region. This feature allows you to replicate your data from an instance of Azure Database for MySQL - Flexible Server to a read-only server in the geo-paired region. [Learn more](how-to-read-replicas-portal.md)
+
+- **Support for data-in replication using GTID**
+
+ Flexible Server now also supports [Data-in Replication](concepts-data-in-replication.md) using GTID-based replication. You can also use this feature to configure data-in replication for HA-enabled servers. To learn more, see [how to configure data-in replication using GTID](how-to-data-in-replication.md).
+ ## April 2023 - **Known issues**
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
specified in the storage section of the
To complete this how-to guide, you need:
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
PowerShell. A subset of engine configurations is exposed at the server-level and
To complete this how-to guide, you need:
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
You can create and manage read replicas using PowerShell.
To complete this how-to guide, you need:
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
the restart.
To complete this how-to guide, you need:
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
server.
To complete this how-to guide, you need:
-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser - An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT] > While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT] > While the Az.MySql PowerShell module is in preview, you must install it separately from the Az
nat-gateway Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/manage-nat-gateway.md
To use Azure PowerShell for this article, you need:
- Azure PowerShell installed locally or Azure Cloud Shell.
- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+ If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
If you run PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
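A minimal sketch of those prerequisite checks from a local PowerShell session:

```azurepowershell
# Check the installed Az module version (5.4.1 or later is required).
Get-Module -ListAvailable Az

# Upgrade if needed, then sign in to create a connection with Azure.
Update-Module -Name Az
Connect-AzAccount
```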
nat-gateway Quickstart Create Nat Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-powershell.md
This quickstart shows you how to use the Azure NAT Gateway service. You'll creat
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
nat-gateway Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network` if necessary.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
# [**Azure CLI**](#tab/create-nat-cli)
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this quickstart requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this quickstart requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a VM
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
Select the following options, as shown in the picture:
Repeat the previous steps for any other NSGs for which you wish to enable traffic analytics for. Data from flow logs is sent to the workspace, so ensure that the local laws and regulations in your country/region permit data storage in the region where the workspace exists. If you have set different processing intervals for different NSGs, data will be collected at different intervals. For example, You can choose to enable processing interval of 10 mins for critical VNETs and 1 hour for noncritical VNETs.
-You can also configure traffic analytics using the [Set-AzNetworkWatcherConfigFlowLog](/powershell/module/az.network/set-aznetworkwatcherconfigflowlog) PowerShell cmdlet in Azure PowerShell. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+You can also configure traffic analytics using the [Set-AzNetworkWatcherConfigFlowLog](/powershell/module/az.network/set-aznetworkwatcherconfigflowlog) PowerShell cmdlet in Azure PowerShell. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
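A sketch of that PowerShell alternative follows. All resource names are placeholders, and the parameter names shown should be verified against the `Set-AzNetworkWatcherConfigFlowLog` reference for your Az.Network version:

```azurepowershell
# Sketch: enable traffic analytics on an NSG flow log with a 10-minute
# processing interval. All names are placeholders; verify parameter
# names against the cmdlet reference before use.
$nw  = Get-AzNetworkWatcher -ResourceGroupName "NetworkWatcherRG" -Name "NetworkWatcher_eastus"
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg"
$sa  = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
$ws  = Get-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroup" -Name "myWorkspace"

Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw -TargetResourceId $nsg.Id `
    -StorageAccountId $sa.Id -EnableFlowLog $true `
    -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $ws.ResourceId `
    -TrafficAnalyticsInterval 10
```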
## View traffic analytics
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
By using Network Watcher alerting and functions from within the Azure ecosystem,
## Prerequisites
-* The latest version of [Azure PowerShell](/powershell/azure/install-Az-ps).
+* The latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell).
* An existing instance of Network Watcher. If you don't already have one, [create an instance of Network Watcher](network-watcher-create.md). * An existing virtual machine in the same region as Network Watcher with the [Windows extension](../virtual-machines/extensions/network-watcher-windows.md) or [Linux virtual machine extension](../virtual-machines/extensions/network-watcher-linux.md).
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
The account that you use must have the necessary [permissions](required-rbac-per
You can run the commands in the steps that follow: - In the Azure Cloud Shell, by selecting **Try It** at the top right of any command. The Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account.
-- By running PowerShell from your computer. If you run PowerShell from your computer, this article requires the Azure PowerShell `Az` module. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+- By running PowerShell from your computer. If you run PowerShell from your computer, this article requires the Azure PowerShell `Az` module. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
The account that you use must have the necessary [permissions](required-rbac-permissions.md).
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
In this article, you learn how to see the number of each network resource type t
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to Azure.
+You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to Azure.
View your usage against limits with [Get-AzNetworkUsage](/powershell/module/az.network/get-aznetworkusage). The following example gets the usage for resources where at least one resource is deployed in the East US location:
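A sketch of such a query, assuming the Az.Network module is installed and you are signed in; the location is a placeholder and the output property names should match the cmdlet's usage objects:

```azurepowershell
# Sketch: show network resource types in East US that have at least one
# deployed resource, alongside their subscription limits.
Get-AzNetworkUsage -Location eastus |
    Where-Object { $_.CurrentValue -gt 0 } |
    Format-Table ResourceType, CurrentValue, Limit
```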
networking Troubleshoot Failed State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/troubleshoot-failed-state.md
The easiest way to achieve this task is to use Azure PowerShell. Issue a resourc
### Preliminary operations
-1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
2. Open your PowerShell console with elevated privileges, and connect to your account. Use the following example to help you connect:
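A minimal sketch of connecting, with the subscription name as a placeholder:

```azurepowershell
# Sign in, then select the subscription that contains the failed resource.
Connect-AzAccount
Set-AzContext -Subscription "My Subscription Name"
```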
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy
* A pull secret for your Azure Red Hat OpenShift cluster. [Download the pull secret file from the Red Hat OpenShift Cluster Manager web site](https://cloud.redhat.com/openshift/install/azure/aro-provisioned).
-* If you want to run the Azure PowerShell code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+* If you want to run the Azure PowerShell code locally, [Azure PowerShell](/powershell/azure/install-azure-powershell).
* If you want to run the Azure CLI code locally:
  * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
You'll also need sufficient Azure Active Directory permissions (either a member
> [!NOTE]
> ARO pull secret doesn't change the cost of the RH OpenShift license for ARO.
-A Red Hat pull secret enables your cluster to access Red Hat container registries along with other content. This step is optional but recommended. The field `cloud.openshift.com` is removed from your secret even if your pull-secret contains that field. This field enables an extra monitoring feature, which sends data to RedHat and is thus disabled by default. To enable this feature, see https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/enabling-remote-health-reporting.html .
+A Red Hat pull secret enables your cluster to access Red Hat container registries, along with other content such as operators from [OperatorHub](https://operatorhub.io/). This step is optional but recommended. If you decide to add the pull secret later, follow [this guidance](howto-add-update-pull-secret.md). The field `cloud.openshift.com` is removed from your secret even if your pull secret contains that field. This field enables an extra monitoring feature, which sends data to Red Hat and is thus disabled by default. To enable this feature, see https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/enabling-remote-health-reporting.html.
1. [Navigate to your Red Hat OpenShift cluster manager portal](https://console.redhat.com/openshift/install/azure/aro-provisioned) and sign in.
orbital Virtual Rf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/virtual-rf-tutorial.md
On the uplink side, the user must provide a DIFI stream to Azure Orbital through
* The signal sample rate is set through the DIFI stream (even though a bandwidth is provided as part of the Contact Profile, it's purely for network configuration under the hood)
* The bit depth is set through the DIFI stream but Azure Orbital expects 8 bits
* The DIFI stream ID should be set to 0
-* The MTU size should be 1500 for S-Band and up to 3650 for X-Band
+* Similar to the downlink, the MTU size should be 1500 for S-Band and **up to** 3650 for X-Band (your choice)
* No spectral inversion is used
* No frequency offset is used
peering-service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you decide to install and use PowerShell locally instead, this article requires you to use Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. For installation and upgrade information, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you decide to install and use PowerShell locally instead, this article requires you to use Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. For installation and upgrade information, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Finally, if you're running PowerShell locally, you'll also need to run `Connect-AzAccount`. That command creates a connection with Azure.
postgresql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-powershell.md
specified in the storage section of the [Azure Database for PostgreSQL pricing t
To complete this how-to guide, you need:

-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)

If you choose to use PowerShell locally, connect to your Azure account using the
postgresql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-powershell.md
PowerShell. A subset of engine configurations is exposed at the server-level and
To complete this how-to guide, you need:

-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-powershell.md
You can create and manage read replicas using PowerShell.
To complete this how-to guide, you need:

-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed
locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
The server restart is blocked if the service is busy. For example, the service m
To complete this how-to guide, you need:

-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally or
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed locally or
[Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-powershell.md
server.
To complete this how-to guide, you need:

-- The [Az PowerShell module](/powershell/azure/install-az-ps) installed
+- The [Az PowerShell module](/powershell/azure/install-azure-powershell) installed
locally or [Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for PostgreSQL server](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
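Since the note says the preview Az.PostgreSql module installs separately from the main Az module, a minimal sketch of that step (using the standard PowerShellGet `-AllowPrerelease` switch; verify against the article for the exact command) is:

```powershell
# Install the preview Az.PostgreSql module alongside the main Az module.
Install-Module -Name Az.PostgreSql -AllowPrerelease -Scope CurrentUser
```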
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-powershell.md
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
+[Install Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
> While the Az.PostgreSql PowerShell module is in preview, you must install it separately from the Az
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
The Azure Private 5G Core private mobile network requires a custom location and
--resource-group "$RESOURCE_GROUP_NAME" \
--cluster-type connectedClusters \
--extension-type "Microsoft.Azure.MobileNetwork.PacketCoreMonitor" \
- --release-train preview \
+ --release-train stable \
--auto-upgrade true
```
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
Azure Private endpoints support application security groups for network security
If you don't have the latest version of the Azure CLI, update it by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create private endpoint with an ASG
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
You can create private endpoints for various Azure services, such as Azure SQL a
- The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
private-link Create Private Link Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-powershell.md
Get started creating a Private Link service that refers to your service. Give P
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure App Configuration | Microsoft.Appconfiguration/configurationStores | configurationStores |
| Azure Automation | Microsoft.Automation/automationAccounts | Webhook, DSCAndHybridWorker |
| Azure Cosmos DB | Microsoft.AzureCosmosDB/databaseAccounts | SQL, MongoDB, Cassandra, Gremlin, Table |
+| Azure Cosmos DB for PostgreSQL | Microsoft.DBforPostgreSQL/serverGroupsv2 | coordinator |
| Azure Batch | Microsoft.Batch/batchAccounts | batchAccount, nodeManagement |
| Azure Cache for Redis | Microsoft.Cache/Redis | redisCache |
| Azure Cache for Redis Enterprise | Microsoft.Cache/redisEnterprise | redisEnterprise |
private-link Tutorial Private Endpoint Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-powershell.md
In this tutorial, you learn how to:
## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+* If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
purview Concept Best Practices Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-collections.md
Consider deploying collections in Microsoft Purview to fulfill the following req
- When you run a new scan, by default, the scan is deployed in the same collection as the data source. You can optionally select a different subcollection to run the scan. As a result, the assets will belong under the subcollection.
-- Currently, moving data sources across collections isn't allowed. If you need to move a data source under a different collection, you need to delete all assets, remove the data source from the original collection, and re-register the data source under the destination collection.
+- Moving data sources across collections is allowed if the user is granted the Data Source Admin role for the source and destination collections.
- Moving assets across collections is allowed if the user is granted the Data Curator role for the source and destination collections.
Consider deploying collections in Microsoft Purview to fulfill the following req
- Data sources, scans, and assets must belong to a collection if they exist in the Microsoft Purview data map.
-<!--
-- Moving data sources across collections is allowed if the user is granted the Data Source Admin role for the source and destination collections.
--- Moving assets across collections is allowed if the user is granted the Data Curator role for the source and destination collections.
->

## Define an authorization model

Microsoft Purview data-plane roles are managed in Microsoft Purview. After you deploy a Microsoft Purview account, the creator of the Microsoft Purview account is automatically assigned the following roles at the root collection. You can use [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) or a programmatic method to directly assign and manage roles in Microsoft Purview.
purview Concept Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md
Previously updated : 03/20/2023
Last updated : 05/11/2023

# What can I accomplish with Microsoft Purview DevOps policies?
SQL dynamic metadata includes a list of more than 700 DMVs/DMFs. We list here as
|||[sys.dm_audit_class_type_map](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-class-type-map-transact-sql) |
||||
-For more on these DMVs/DMFs you can check these docs
-- [Monitoring Microsoft Azure SQL Database performance using dynamic management views](/azure/azure-sql/database/monitoring-with-dmvs)
-- [Security-Related Dynamic Management Views and Functions](/sql/relational-databases/system-dynamic-management-views/security-related-dynamic-management-views-and-functions-transact-sql)
+Check these documents for more on what your IT support personnel can do when granted access via these Purview roles:
+- SQL Performance Monitor: [Use Microsoft Purview to provide at-scale access to performance data in Azure SQL and SQL Server](https://techcommunity.microsoft.com/t5/azure-sql-blog/use-microsoft-purview-to-provide-at-scale-access-to-performance/ba-p/3812839)
+- SQL Security Auditor: [Security-Related Dynamic Management Views and Functions](/sql/relational-databases/system-dynamic-management-views/security-related-dynamic-management-views-and-functions-transact-sql)
## More info

- DevOps policies can be created, updated and deleted by any user holding *Policy Author* role at root collection level in Microsoft Purview.
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
Previously updated : 04/27/2023
Last updated : 05/11/2023

# Create, list, update and delete Microsoft Purview DevOps policies
SELECT * FROM [databaseName].schemaName.tableName
This section contains a reference of how relevant Microsoft Purview data policy roles map to specific actions in SQL data sources.

>[!NOTE]
-> The roles below may be expanded in the future to include additional actions that become available as long as they are consistent with the spirit of the role.
-
-| **Microsoft Purview policy role definition** | **Data source specific actions** |
-|-|--|
-| | |
-| *SQL Performance Monitor* |Microsoft.Sql/Sqlservers/Connect |
-||Microsoft.Sql/Sqlservers/Databases/Connect |
-||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/DatabasePerformanceState/rows/select |
-||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/ServerPerformanceState/rows/select |
-|||
-| *SQL Security Auditor* |Microsoft.Sql/Sqlservers/Connect |
-||Microsoft.Sql/Sqlservers/Databases/Connect |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
-||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
-||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
-||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
-|||
-
->[!NOTE]
-> The role definition for SQL Performance Monitor will be expanded in the first half of May 2023, to include the following SQL side specific actions.
+> The role definitions below may be expanded in the future to include additional actions that become available as long as they are consistent with the spirit of the role.
| **Microsoft Purview policy role definition** | **Data source specific actions** |
|-|--|
This section contains a reference of how relevant Microsoft Purview data policy
||Microsoft.Sql/Sqlservers/ExtendedEventSessions/Target/Add |
||Microsoft.Sql/Sqlservers/ExtendedEventSessions/Target/Drop |
|||
+| *SQL Security Auditor* |Microsoft.Sql/Sqlservers/Connect |
+||Microsoft.Sql/Sqlservers/Databases/Connect |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityState/rows/select |
+||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/DatabaseSecurityState/rows/select |
+||Microsoft.Sql/sqlservers/SystemViewsAndFunctions/ServerSecurityMetadata/rows/select |
+||Microsoft.Sql/Sqlservers/Databases/SystemViewsAndFunctions/DatabaseSecurityMetadata/rows/select |
+|||
## Next steps

Check the blogs, videos and related documents
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
After you've registered your source, you can move it to another collection that
:::image type="content" source="media/manage-data-sources/select-collection.png" alt-text="Screenshot of the Move collection window, showing the drop down selection of collections.":::
-1. Your collection has been moved. It can take up to an hour for results to be fully seen across your Microsoft Purview environment. Your scans will move with your resource, but assets will remain in their original collection until your next scan, then they'll move to the new collection.
+1. Your data source has been moved. It can take up to an hour for results to be fully seen across your Microsoft Purview environment. Your scans will move with your resource, but assets will remain in their original collection until your next scan, then they'll move to the new collection.
>[!NOTE]
>If any of the assets from your source were moved manually to a different collection before the source was migrated, the scan won't take them to the new collection. They will remain in the collection you moved them to.
remote-rendering Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/blob-storage.md
To start converting a model, you need to upload it, using one of the following o
- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) - a convenient UI to upload/download/manage files on Azure blob storage
- [Azure command line](../../../storage/blobs/storage-quickstart-blobs-cli.md)
-- [Azure PowerShell module](/powershell/azure/install-az-ps)
+- [Azure PowerShell module](/powershell/azure/install-azure-powershell)
  - see the [Example PowerShell scripts](../../samples/powershell-example-scripts.md)
- [Using a storage SDK (Python, C# ... )](../../../storage/index.yml)
- [Using the Azure Storage REST APIs](/rest/api/storageservices/blob-service-rest-api)
role-based-access-control Custom Roles Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-powershell.md
For a step-by-step tutorial on how to create a custom role, see [Tutorial: Creat
To create custom roles, you need:

- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
-- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## List custom roles
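A common way to list only the custom roles is to filter on the `IsCustom` property of `Get-AzRoleDefinition`; this sketch assumes an authenticated session:

```powershell
# Show only custom role definitions visible at the current scope.
Get-AzRoleDefinition |
    Where-Object { $_.IsCustom -eq $true } |
    Format-Table Name, IsCustom
```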
role-based-access-control Deny Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments-powershell.md
To get information about a deny assignment, you must have:

- `Microsoft.Authorization/denyAssignments/read` permission, which is included in most [Azure built-in roles](built-in-roles.md)
-- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## List deny assignments
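As a starting point, the Az.Resources module exposes a cmdlet for this; a minimal sketch, assuming an authenticated session at subscription scope:

```powershell
# List deny assignments at the current subscription scope.
Get-AzDenyAssignment
```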
role-based-access-control Role Assignments External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-external-users.md
ms.devlang:
Previously updated : 08/26/2022
Last updated : 05/10/2023
In Azure RBAC, to grant access, you assign a role. To assign a role to a guest u
The Add role assignment page opens.
-1. On the **Roles** tab, select a role such as **Virtual Machine Contributor**.
+1. On the **Role** tab, select a role such as **Virtual Machine Contributor**.
![Screenshot of Add role assignment page with Roles tab.](./media/shared/roles.png)
If the guest user is not yet in your directory, you can invite the user directly
The Add role assignment page opens.
-1. On the **Roles** tab, select a role such as **Virtual Machine Contributor**.
+1. On the **Role** tab, select a role such as **Virtual Machine Contributor**.
1. On the **Members** tab, select **User, group, or service principal**.
role-based-access-control Role Assignments List Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-powershell.md
## Prerequisites

-- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## List role assignments for the current subscription
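The basic listing command for this section can be sketched as follows (the selected columns are illustrative):

```powershell
# List all role assignments in the current subscription,
# showing who has which role at which scope.
Get-AzRoleAssignment |
    Format-Table DisplayName, RoleDefinitionName, Scope
```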
role-based-access-control Role Assignments Portal Subscription Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal-subscription-admin.md
Previously updated : 10/15/2021
Last updated : 05/10/2023
To make a user an administrator of an Azure subscription, assign them the [Owner
The [Owner](built-in-roles.md#owner) role grants full access to manage all resources, including the ability to assign roles in Azure RBAC. You should have a maximum of 3 subscription owners to reduce the potential for breach by a compromised owner.
-1. On the **Roles** tab, select the **Owner** role.
+1. On the **Role** tab, select the **Privileged administrator roles** tab.
- You can search for a role by name or by description. You can also filter roles by type and category.
+ ![Screenshot of Add role assignment page with Privileged administrator roles tab selected.](./media/shared/privileged-administrator-roles.png)
- ![Screenshot of Add role assignment page with Roles tab.](./media/shared/roles.png)
+1. Select the **Owner** role.
1. Click **Next**.
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
If you need to assign administrator roles in Azure Active Directory, see [Assign
## Step 3: Select the appropriate role
-1. On the **Roles** tab, select a role that you want to use.
+1. On the **Role** tab, select a role that you want to use.
You can search for a role by name or by description. You can also filter roles by type and category.
- ![Screenshot of Add role assignment page with Roles tab.](./media/shared/roles.png)
+ ![Screenshot of Add role assignment page with Role tab.](./media/shared/roles.png)
+
+1. If you want to assign a privileged administrator role, select the **Privileged administrator roles** tab to select the role.
+
+ Privileged administrator roles are roles that grant privileged administrator access, such as the ability to manage Azure resources or assign roles to other users. You should avoid assigning a privileged administrator role when a job function role can be assigned instead. If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource. For more information, see [Privileged administrator roles](./role-assignments-steps.md#privileged-administrator-roles).
+
+ ![Screenshot of Add role assignment page with Privileged administrator roles tab selected.](./media/shared/privileged-administrator-roles.png)
1. In the **Details** column, click **View** to get more details about a role.
role-based-access-control Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-powershell.md
To assign roles, you must have:

- `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
-- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
- The account you use to run the PowerShell command must have the Microsoft Graph `Directory.Read.All` permission.

## Steps to assign an Azure role
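The core of the assignment steps is a single `New-AzRoleAssignment` call; in this sketch the user sign-in name, role, and resource group are placeholder assumptions:

```powershell
# Assign a job function role (Virtual Machine Contributor)
# to a user at resource group scope.
New-AzRoleAssignment -SignInName "user@example.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "example-rg"
```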
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
Previously updated : 09/13/2022
Last updated : 05/10/2023
Permissions are grouped together into a *role definition*. It's typically just c
![Role definition for a role assignment](./media/shared/rbac-role-definition.png)
-The following lists four fundamental built-in roles. The first three apply to all resource types.
+Roles are organized into job function roles and privileged administrator roles.
-- [Owner](built-in-roles.md#owner) - Has full access to all resources including the right to delegate access to others.
-- [Contributor](built-in-roles.md#contributor) - Can create and manage all types of Azure resources but can't grant access to others.
-- [Reader](built-in-roles.md#reader) - Can view existing Azure resources.
-- [User Access Administrator](built-in-roles.md#user-access-administrator) - Lets you manage user access to Azure resources.
+### Job function roles
-The rest of the built-in roles allow management of specific Azure resources. For example, the [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor) role allows a user to create and manage virtual machines.
+Job function roles allow management of specific Azure resources. For example, the [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor) role allows a user to create and manage virtual machines. To select the appropriate job function role, use these steps:
1. Begin with the comprehensive article, [Azure built-in roles](built-in-roles.md). The table at the top of the article is an index into the details later in the article.
The rest of the built-in roles allow management of specific Azure resources. For
1. If you don't find a suitable role, you can create a [custom role](custom-roles.md).
+### Privileged administrator roles
+
+Privileged administrator roles are roles that grant privileged administrator access, such as the ability to manage Azure resources or assign roles to other users. The following roles are considered privileged and apply to all resource types.
+
+| Role | Description |
+| | |
+| [Owner](built-in-roles.md#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. |
+| [Contributor](built-in-roles.md#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. |
+| [User Access Administrator](built-in-roles.md#user-access-administrator) | Lets you manage user access to Azure resources. |
+
+It's a best practice to grant users the least privilege to get their work done. You should avoid assigning a privileged administrator role when a job function role can be assigned instead. If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.
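To illustrate the narrow-scope guidance above, a role assignment can be made at resource group scope with Azure PowerShell. This is a sketch only; the user and resource group names are placeholders:

```powershell
# Assign a job function role at resource group scope instead of a
# privileged administrator role at subscription scope.
# "user@contoso.com" and "example-rg" are hypothetical placeholders.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "example-rg"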
+
## Step 3: Identify the needed scope

*Scope* is the set of resources that the access applies to. In Azure, you can specify a scope at four levels: [management group](../governance/management-groups/overview.md), subscription, [resource group](../azure-resource-manager/management/overview.md#resource-groups), and resource. Scopes are structured in a parent-child relationship. Each level of hierarchy makes the scope more specific. You can assign roles at any of these levels of scope. The level you select determines how widely the role is applied. Lower levels inherit role permissions from higher levels.
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
To view and work with data actions, you must have the correct versions of the to
| Tool | Version |
| --- | --- |
-| [Azure PowerShell](/powershell/azure/install-az-ps) | 1.1.0 or later |
+| [Azure PowerShell](/powershell/azure/install-azure-powershell) | 1.1.0 or later |
| [Azure CLI](/cli/azure/install-azure-cli) | 2.0.30 or later |
| [Azure for .NET](/dotnet/azure/) | 2.8.0-preview or later |
| [Azure SDK for Go](/azure/go/azure-sdk-go-install) | 15.0.0 or later |
role-based-access-control Tutorial Custom Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
To complete this tutorial, you will need:

- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
-- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
## Sign in to Azure PowerShell
sap Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-powershell.md
This content only applies to the Azure Monitor for SAP solutions (classic) versi
- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module. You'll also need to connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps). Alternately, you can use [Azure Cloud Shell](../../cloud-shell/overview.md).
+- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module. You'll also need to connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). Alternately, you can use [Azure Cloud Shell](../../cloud-shell/overview.md).
- While the **Az.HanaOnAzure** PowerShell module is in preview, you must install it separately using the `Install-Module` cmdlet. Once this PowerShell module becomes generally available, it becomes part of future Az PowerShell module releases and available natively from within Azure Cloud Shell.
sap Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-new.md
### <a name="604bcec2-8b6e-48d2-a944-61b0f5dee2f7"></a>Deploy Azure PowerShell cmdlets
-Follow the steps described in the article [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+Follow the steps described in the article [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
Check frequently for updates to the PowerShell cmdlets, which usually are updated monthly. Follow the steps described in [this](/powershell/azure/install-az-ps#update-the-azure-powershell-module) article. Unless stated otherwise in SAP Note [1928533] or SAP Note [2015553], we recommend that you work with the latest version of Azure PowerShell cmdlets.
sap Vm Extension For Sap Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-standard.md
### <a name="604bcec2-8b6e-48d2-a944-61b0f5dee2f7"></a>Deploy Azure PowerShell cmdlets
-Follow the steps described in the article [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+Follow the steps described in the article [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
Check frequently for updates to the PowerShell cmdlets, which usually are updated monthly. Follow the steps described in [this](/powershell/azure/install-az-ps#update-the-azure-powershell-module) article. Unless stated otherwise in SAP Note [1928533] or SAP Note [2015553], we recommend that you work with the latest version of Azure PowerShell cmdlets.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Last updated 01/13/2023
# Cloud feature availability for commercial and US Government customers
-This article describes feature availability in the Microsoft Azure and Azure Government clouds for the following security
+This article describes feature availability in the Microsoft Azure and Azure Government clouds. Features are listed as **GA** (Generally Available), **Public Preview**, or **Not Available** for the following security
- [Azure Information Protection](#azure-information-protection) - [Microsoft Defender for Cloud](#microsoft-defender-for-cloud)
The following tables display the current Microsoft Sentinel feature availability
|- [Large watchlists from Azure Storage](../../sentinel/watchlists.md) | Public Preview | Not Available |
|- [Watchlist templates](../../sentinel/watchlists.md) | Public Preview | Not Available |
| **Workspace Manager** | | |
-| - [Workspace manager](../../sentinel/workspace-manager.md) | Public preview | Public preview |
+| - [Workspace manager](../../sentinel/workspace-manager.md) | Public Preview | Public Preview |
| **Hunting** | | |
| - [Hunting](../../sentinel/hunting.md) | GA | GA |
-| - [Hunts](../../sentinel/hunts.md) | Public preview | Not Available |
+| - [Hunts](../../sentinel/hunts.md) | Public Preview | Not Available |
| **Content and content management** | | |
-| - [Content hub](../../sentinel/sentinel-solutions.md) and [solutions](../../sentinel/sentinel-solutions-catalog.md) | Public preview | Public preview |
-| - [Repositories](../../sentinel/ci-cd.md?tabs=github) | Public preview | Not Available |
+| - [Content hub](../../sentinel/sentinel-solutions.md) and [solutions](../../sentinel/sentinel-solutions-catalog.md) | Public Preview | Public Preview |
+| - [Repositories](../../sentinel/ci-cd.md?tabs=github) | Public Preview | Not Available |
| **Data collection** | | |
| - [Advanced SIEM Information Model (ASIM)](../../sentinel/normalization.md) | Public Preview | Not Available |
| **Threat intelligence support** | | |
sentinel Audit Track Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-track-tasks.md
+
+ Title: Audit and track changes to incident tasks in Microsoft Sentinel
+description: This article explains how you, as a SOC manager, can audit the history of Microsoft Sentinel incident tasks, and track changes to them, in order to gauge your task assignments and their contribution to your SOC's efficiency and effectiveness.
+
+Last updated: 05/08/2023
+
+# Audit and track changes to incident tasks in Microsoft Sentinel
+
+[Incident tasks](incident-tasks.md) ensure comprehensive and uniform treatment of incidents across all SOC personnel. Task lists are typically defined according to determinations made by senior analysts or SOC managers, and put into practice using automation rules or playbooks.
+
+Your analysts can see the list of tasks they need to perform for a particular incident on the incident details page, and mark them complete as they go. Analysts can also create their own tasks on the spot, manually, right from within the incident.
+
+This article explains how you, as a SOC manager, can audit the history of Microsoft Sentinel incident tasks, and track the changes made to them throughout their life cycle, in order to gauge the efficacy of your task assignments and their contribution to your SOC's efficiency and proper functioning.
+
+> [!IMPORTANT]
+>
+> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Structure of Tasks array in the SecurityIncident table
+
+The *SecurityIncident* table is an audit table&mdash;it stores not the incidents themselves, but rather records of the life of an incident: its creation and any changes made to it. Any time an incident is created or a change is made to an incident, a record is generated in this table showing the now-current state of the incident.
+
+The addition of tasks details to the schema of this table allows you to audit tasks in greater depth.
+
+The detailed information added to the **Tasks** field consists of key-value pairs taking the following structure:
+
+| Key | Value description |
+| --- | --- |
+| **createdBy** | The identity that created the task:<br>**- email**: email address of identity<br>**- name**: name of the identity<br>**- objectId**: GUID of the identity<br>**- userPrincipalName**: UPN of the identity |
+| **createdTimeUtc** | Time the task was created, in UTC. |
+| **lastCompletedTimeUtc** | Time the task was marked complete, in UTC. |
+| **lastModifiedBy** | The identity that last modified the task:<br>**- email**: email address of identity<br>**- name**: name of the identity<br>**- objectId**: GUID of the identity<br>**- userPrincipalName**: UPN of the identity |
+| **lastModifiedTimeUtc** | Time the task was last modified, in UTC. |
+| **status** | Current status of the task: New, Completed, Deleted. |
+| **taskId** | Resource ID of the task. |
+| **title** | Friendly name given to the task by its creator. |
+
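As a quick illustration of this schema (a sketch, not taken from the article itself), a query along these lines expands the **Tasks** array and projects some of the keys described above; the output column names are arbitrary:

```kusto
SecurityIncident
| where array_length(Tasks) > 0
| mv-expand Tasks
| project IncidentNumber,
          TaskTitle   = tostring(Tasks.title),
          TaskStatus  = tostring(Tasks.status),
          TaskCreator = tostring(Tasks.createdBy.name)
```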
+## View incident tasks in the SecurityIncident table
+
+Apart from the **Incident tasks workbook**, you can audit task activity by querying the *SecurityIncident* table in **Logs**. The rest of this article shows you how to do this, as well as how to read and understand the query results to get task activity information.
+
+1. In the **Logs** page, enter the following query in the query window and run it. This query will return all the incidents that have any tasks assigned.
+
+ ```kusto
+ SecurityIncident
+    | where array_length(Tasks) > 0
+ ```
+
+ You can add any number of statements to the query to filter and narrow down the results. To demonstrate how to view and understand the results, we're going to add statements to filter the results so that we only see the tasks for a single incident, and we'll also add a `project` statement so that we see only those fields that will be useful for our purposes, without a lot of clutter.
+
+ [Learn more about using Kusto Query Language](kusto-overview.md).
+
+ ```kusto
+ SecurityIncident
+    | where array_length(Tasks) > 0
+ | where IncidentNumber == "405211"
+ | sort by LastModifiedTime desc
+ | project IncidentName, Title, LastModifiedTime, Tasks
+ ```
+
+1. Let's look at the most recent record for this incident, and find the list of tasks associated with it.
+ 1. Select the expander next to the top row in the query results (which have been sorted in descending order of recency).
+
+ :::image type="content" source="media/audit-track-tasks/incident-with-tasks-query-1.png" alt-text="Screenshot of query results showing an incident with its tasks." lightbox="media/audit-track-tasks/incident-with-tasks-query-1.png":::
+
+ 1. The *Tasks* field is an array of the current state of all the tasks in this incident. Select the expander to view each item in the array in its own row.
+
+ :::image type="content" source="media/audit-track-tasks/incident-with-tasks-query-2.png" alt-text="Screenshot of query results showing an incident with its tasks expanded." lightbox="media/audit-track-tasks/incident-with-tasks-query-2.png":::
+
+ 1. Now you see that there are two tasks in this incident. Each one is represented in turn by an expandable array. Select a single task's expander to view its information.
+
+ :::image type="content" source="media/audit-track-tasks/incident-with-tasks-query-3.png" alt-text="Screenshot of query results showing an incident with a single task expanded." lightbox="media/audit-track-tasks/incident-with-tasks-query-3.png":::
+
+ 1. Here you see the details for the first task in the array ("0" being the index position of the task in the array). The *title* field shows the name of the task as displayed in the incident.
+
+### View tasks added to the list
+
+1. Let's add a task to the incident, and then we'll come back here, run the query again, and see the changes in the results.
+
+ 1. On the **Incidents** page, enter the incident ID number in the Search bar.
+ 1. Open the incident details page and select **Tasks (Preview)** from the toolbar.
+ 1. Add a new task, give it the name "This task is a test task!", then select **Save**. The last task shown below is what you should end up with:
+
+ :::image type="content" source="media/audit-track-tasks/incident-task-list-task-added.png" alt-text="Screenshot shows incident tasks panel.":::
+
+1. Now let's return to the **Logs** page and run our query again.
+
+ In the results you'll see that there's a **new record in the table** for this same incident (note the timestamps). Expand the record and you'll see that while the record we saw before had two tasks in its *Tasks* array, the new one has three. The newest task is the one we just added, as you can see by its title.
+
+ :::image type="content" source="media/audit-track-tasks/incident-with-tasks-query-5.png" alt-text="Screenshot of query results showing an incident with its newly created task." lightbox="media/audit-track-tasks/incident-with-tasks-query-5.png":::
+
+### View status changes to tasks
+
+Now, if we go back to that new task in the incident details page and mark it as complete, and then come back to **Logs** and rerun the query again, we'll see yet another new record for the same incident, this time showing our task's new status as **Completed**.
++
+### View deletion of tasks
+
+Let's go back to the task list in the incident details page and delete the task we added earlier.
+
+When we come back to **Logs** and run the query yet again, we'll see another new record, only this time the status for our task&mdash;the one titled "This task is a test task!"&mdash;will be **Deleted**.
+
+**However**&mdash; once the task has appeared one such time in the array (with a **Deleted** status), it will no longer appear in the **Tasks** array in new records for that incident in the **SecurityIncident** table. The existing records, like those we saw above, will continue to preserve the evidence that this task once existed.
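To surface those preserved records, a query of roughly this shape (a sketch, not from the article) finds task entries whose recorded status is **Deleted**:

```kusto
SecurityIncident
| mv-expand Tasks
| where tostring(Tasks.status) == 'Deleted'
| project IncidentNumber, TaskTitle = tostring(Tasks.title), LastModifiedTime
```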
+
+## View active tasks belonging to a closed incident
+
+The following query allows you to see if an incident was closed but not all its assigned tasks were completed. This knowledge can help you verify that any remaining loose ends in your investigation were brought to a conclusion&mdash;all relevant parties were notified, all comments were entered, all responses were verified, and so on.
+
+```kusto
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber
+| where Status == 'Closed'
+| mv-expand Tasks
+| evaluate bag_unpack(Tasks)
+| summarize arg_max(lastModifiedTimeUtc, *) by taskId
+| where status !in ('Completed', 'Deleted')
+| project TaskTitle = ['title'], TaskStatus = ['status'], createdTimeUtc, lastModifiedTimeUtc = column_ifexists("lastModifiedTimeUtc", datetime(null)), TaskCreator = ['createdBy'].name, lastModifiedBy, IncidentNumber, IncidentOwner = Owner.userPrincipalName
+| order by lastModifiedTimeUtc desc
+```
++
+## Next steps
+
+- Learn more about [incident tasks](incident-tasks.md).
+- Learn how to [investigate incidents](investigate-cases.md).
+- Learn how to add tasks to groups of incidents automatically using [automation rules](create-tasks-automation-rule.md) or [playbooks](create-tasks-playbook.md), and [when to use which](incident-tasks.md#use-automation-rules-or-playbooks-to-add-tasks).
+- Learn more about [automation rules](automate-incident-handling-with-automation-rules.md) and how to [create them](./create-manage-use-automation-rules.md).
+- Learn more about [playbooks](automate-responses-with-playbooks.md) and how to [create them](tutorial-respond-threats-playbook.md).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
## May 2023

- [Use Hunts to conduct end-to-end proactive threat hunting in Microsoft Sentinel](#use-hunts-to-conduct-end-to-end-proactive-threat-hunting)
+- [Audit and track incident task activity](#audit-and-track-incident-task-activity)
### Use Hunts to conduct end-to-end proactive threat hunting
Take your hunts to the next level. Stay organized and keep track of new, active
Learn more about [Hunts (Preview)](hunts.md).
+### Audit and track incident task activity
+
+Thanks to newly available information in the *SecurityIncident* table, you can now inspect the history and status of open tasks in your incidents, even on incidents that have been closed. Use the information to ensure your SOC's efficient and proper functioning.
+
+Learn more about [auditing and tracking incident tasks](audit-track-tasks.md).
+
## April 2023

- [RSA announcements](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/rsac-2023-microsoft-sentinel-empowering-the-soc-with-next-gen/ba-p/3803613)
sentinel Work With Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-tasks.md
You can also add tasks for yourself, on the spot, to an incident's task list. Th
- Learn more about [incident tasks](incident-tasks.md).
- Learn how to [investigate incidents](investigate-cases.md).
- Learn how to add tasks to groups of incidents automatically using [automation rules](create-tasks-automation-rule.md) or [playbooks](create-tasks-playbook.md), and [when to use which](incident-tasks.md#use-automation-rules-or-playbooks-to-add-tasks).
+- Learn about [keeping track of your tasks](audit-track-tasks.md).
- Learn more about [automation rules](automate-incident-handling-with-automation-rules.md) and how to [create them](./create-manage-use-automation-rules.md).
- Learn more about [playbooks](automate-responses-with-playbooks.md) and how to [create them](tutorial-respond-threats-playbook.md).
service-bus-messaging Service Bus Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-powershell.md
This quickstart shows you how to create a Service Bus namespace and a queue usin
To complete this quickstart, make sure you have an Azure subscription. If you don't have an Azure subscription, you can create a [free account][] before you begin.
-In this quickstart, you use Azure Cloud Shell that you can launch after sign into the Azure portal. For details about Azure Cloud Shell, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md). You can also [install](/powershell/azure/install-Az-ps) and use Azure PowerShell on your machine.
+In this quickstart, you use Azure Cloud Shell, which you can launch after you sign in to the Azure portal. For details about Azure Cloud Shell, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md). You can also [install](/powershell/azure/install-azure-powershell) and use Azure PowerShell on your machine.
## Provision resources
service-fabric Quickstart Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-template.md
To complete this quickstart, you'll need to:
* Install the [Service Fabric SDK and PowerShell module](service-fabric-get-started.md).
-* Install [Azure PowerShell](/powershell/azure/install-az-ps).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell).
### Download the sample template and certificate helper script
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
The default template used is available here for [Windows](https://github.com/Azu
The following commands can create either Windows or Linux clusters, depending on how you specify the OS parameter. Both PowerShell/CLI commands output the certificate in the specified *CertificateOutputFolder* (make sure the certificate folder location you specify already exists before running the command!).

> [!NOTE]
-> The following PowerShell command only works with the Azure PowerShell `Az` module. To check the current version of Azure Resource Manager PowerShell version, run the following PowerShell command "Get-Module Az". Follow [this link](/powershell/azure/install-Az-ps) to upgrade your Azure Resource Manager PowerShell version.
+> The following PowerShell command only works with the Azure PowerShell `Az` module. To check the current version of the Az PowerShell module, run the PowerShell command `Get-Module Az`. Follow [this link](/powershell/azure/install-azure-powershell) to upgrade your Azure PowerShell version.
Deploy the cluster using PowerShell:
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
Azure Resource Manager detects that this command is an update to an existing res
## Deploy Azure Monitor logs with Azure PowerShell
-You can also deploy your log analytics resource via PowerShell by using the `New-AzOperationalInsightsWorkspace` command. To use this method, make sure you have installed [Azure PowerShell](/powershell/azure/install-az-ps). Use this script to create a new Log Analytics workspace and add the Service Fabric solution to it:
+You can also deploy your log analytics resource via PowerShell by using the `New-AzOperationalInsightsWorkspace` command. To use this method, make sure you have installed [Azure PowerShell](/powershell/azure/install-azure-powershell). Use this script to create a new Log Analytics workspace and add the Service Fabric solution to it:
```powershell
# The script itself is elided in this excerpt. A minimal sketch of the
# described steps follows; the resource group, workspace name, location,
# and SKU below are placeholder assumptions, not values from the article.
$ResourceGroup = "example-rg"        # hypothetical, must already exist
$WorkspaceName = "example-workspace" # hypothetical workspace name

# Create the Log Analytics workspace.
New-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup `
    -Name $WorkspaceName -Location "westus2" -Sku "PerGB2018"

# Enable the Service Fabric solution on the workspace.
Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup `
    -WorkspaceName $WorkspaceName -IntelligencePackName "ServiceFabric" -Enabled $true
```
service-fabric Service Fabric Tutorial Create Vnet And Windows Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md
Before you begin this tutorial:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Install the [Service Fabric SDK and PowerShell module](service-fabric-get-started.md).
-* Install [Azure PowerShell](/powershell/azure/install-az-ps).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell).
* Review the key concepts of [Azure clusters](service-fabric-azure-clusters-overview.md).
* [Plan and prepare](service-fabric-cluster-azure-deployment-preparation.md) for a production cluster deployment.
service-fabric Service Fabric Tutorial Deploy Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md
This article shows you how to set up [Azure API Management](../api-management/ap
Before you begin:

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* Install [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
* Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md) in a network security group.
* If you deploy a Windows cluster, set up a Windows development environment. Install [Visual Studio 2019](https://www.visualstudio.com) and the **Azure development**, **ASP.NET and web development**, and **.NET Core cross-platform development** workloads. Then set up a [.NET development environment](service-fabric-get-started.md).
service-fabric Service Fabric Tutorial Monitor Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitor-cluster.md
In this tutorial series you learn how to:
Before you begin this tutorial:

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* Install [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
* Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md)
* Set up [diagnostics collection](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configurediagnostics_anchor) for the cluster
* Enable the [EventStore service](service-fabric-tutorial-create-vnet-and-windows-cluster.md#configureeventstore_anchor) in the cluster
service-fabric Service Fabric Tutorial Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-scale-cluster.md
In this tutorial series you learn how to:
Before you begin this tutorial:

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* Install [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
* Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md) on Azure

## Important considerations and guidelines
service-fabric Service Fabric Tutorial Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md
In this tutorial series you learn how to:
Before you begin this tutorial:

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* Install [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+* Install [Azure PowerShell](/powershell/azure/install-azure-powershell) or [Azure CLI](/cli/azure/install-azure-cli).
* Create a secure [Windows cluster](service-fabric-tutorial-create-vnet-and-windows-cluster.md) on Azure
* Set up a Windows development environment. Install [Visual Studio 2019](https://www.visualstudio.com) and the **Azure development**, **ASP.NET and web development**, and **.NET Core cross-platform development** workloads. Then set up a [.NET development environment](service-fabric-get-started.md).
service-fabric Tutorial Managed Cluster Add Remove Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-add-remove-node-type.md
This part of the series covers how to:
## Prerequisites

* A Service Fabric managed cluster (see [*Deploy a managed cluster*](tutorial-managed-cluster-deploy.md)).
-* [Azure PowerShell 4.7.0](/powershell/azure/release-notes-azureps#azservicefabric) or later (see [*Install Azure PowerShell*](/powershell/azure/install-az-ps)).
+* [Azure PowerShell 4.7.0](/powershell/azure/release-notes-azureps#azservicefabric) or later (see [*Install Azure PowerShell*](/powershell/azure/install-azure-powershell)).
## Add a node type to a Service Fabric managed cluster
service-fabric Tutorial Managed Cluster Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-scale.md
This part of the series covers how to:
## Prerequisites

* A Service Fabric managed cluster (see [*Deploy a managed cluster*](tutorial-managed-cluster-deploy.md)).
-* [Azure PowerShell 4.7.0](/powershell/azure/release-notes-azureps#azservicefabric) or later (see [*Install Azure PowerShell*](/powershell/azure/install-az-ps)).
+* [Azure PowerShell 4.7.0](/powershell/azure/release-notes-azureps#azservicefabric) or later (see [*Install Azure PowerShell*](/powershell/azure/install-azure-powershell)).
## Scale a Service Fabric managed cluster

Change the instance count to increase or decrease the number of nodes on the node type that you would like to scale. You can find node type names in the Azure Resource Manager template (ARM template) from your cluster deployment, or in the Service Fabric Explorer.
service-health Alerts Activity Log Service Notifications Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-arm.md
To learn more about action groups, see [Create and manage action groups](../azur
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the template
service-health Alerts Activity Log Service Notifications Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-bicep.md
To learn more about action groups, see [Create and manage action groups](../azur
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- To run the commands from your local computer, install Azure CLI or the Azure PowerShell modules. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
## Review the Bicep file
service-health Resource Health Alert Arm Template Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-alert-arm-template-guide.md
Azure Resource Health keeps you informed about the current and historical health
To follow the instructions on this page, you'll need to set up a few things in advance:
-1. You need to install the [Azure PowerShell module](/powershell/azure/install-az-ps)
+1. You need to install the [Azure PowerShell module](/powershell/azure/install-azure-powershell)
2. You need to [create or reuse an Action Group](../azure-monitor/alerts/action-groups.md) configured to notify you ## Instructions
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
Before you start:
- Make sure that you understand the [disaster-recovery architecture and components](azure-to-azure-architecture.md). - Review the [support requirements](azure-to-azure-support-matrix.md) for all components.-- Make sure that you have AzureRm PowerShell "Az" module. To install or update PowerShell, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+- Make sure that you have the Azure PowerShell `Az` module. To install or update PowerShell, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
- Make sure that you have created a recovery services vault and protected virtual machines at least once. If you haven't done these things, follow the process at [Set up disaster recovery for Azure virtual machines using Azure PowerShell](azure-to-azure-powershell.md). - If you're looking for information on adding disks to an Azure VM enabled for replication, [review this article](azure-to-azure-enable-replication-added-disk.md).
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
You learn how to:
Before you start: - Make sure that you understand the [scenario architecture and components](azure-to-azure-architecture.md). - Review the [support requirements](azure-to-azure-support-matrix.md) for all components.-- You have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [Guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+- You have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [Guide to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Sign in to your Microsoft Azure subscription
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
You can easily update your selection of a proximity placement group in the DR re
### Prerequisites -- Make sure that you have the Azure PowerShell Az module. If you need to install or upgrade Azure PowerShell, follow the [guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+- Make sure that you have the Azure PowerShell Az module. If you need to install or upgrade Azure PowerShell, follow the [guide to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
- The minimum Azure PowerShell Az version should be 4.1.0. To check the current version, use the following command: ```
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
You don't need to be a PowerShell expert to use this article, but you do need to
Make sure you have these prerequisites in place: - A [Microsoft Azure](https://azure.microsoft.com/) account. You can start with a [free trial](https://azure.microsoft.com/pricing/free-trial/). In addition, you can read about [Azure Site Recovery Manager pricing](https://azure.microsoft.com/pricing/details/site-recovery/).-- Azure PowerShell. For information about this release and how to install it, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- Azure PowerShell. For information about this release and how to install it, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
In addition, the specific example described in this article has the following prerequisites:
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Upgrade the server as follows:
## Delete or unregister a configuration server (PowerShell)
-1. [Install](/powershell/azure/install-Az-ps) Azure PowerShell module
+1. [Install](/powershell/azure/install-azure-powershell) Azure PowerShell module
2. Log in to your Azure account using the command `Connect-AzAccount -UseDeviceAuthentication`
site-recovery Tutorial Replicate Vms Edge Zone To Another Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-replicate-vms-edge-zone-to-another-zone.md
Here the primary location is an Azure Public MEC and secondary location is anoth
### Prerequisites -- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps)
+- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell)
- The minimum Azure Az PowerShell version must be 9.1.0+. Use the following command to see the current version: ```
site-recovery Tutorial Replicate Vms Edge Zone To Azure Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-replicate-vms-edge-zone-to-azure-region.md
Here the primary location is an Azure Public MEC and secondary location is the p
### Prerequisites -- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+- Ensure Azure Az PowerShell module is installed. For information on how to install, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
- The minimum Azure Az PowerShell version must be 4.1.0. Use the following command to see the current version: ```
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
Before you start:
- Make sure that you understand the [scenario architecture and components](vmware-azure-architecture.md). - Review the [support requirements](./vmware-physical-azure-support-matrix.md) for all components.-- You have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [Guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+- You have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [Guide to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Log into Azure
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
ProxyPassword="Password"
You can optionally delete the configuration server by using PowerShell.
-1. [Install](/powershell/azure/install-Az-ps) the Azure PowerShell module.
+1. [Install](/powershell/azure/install-azure-powershell) the Azure PowerShell module.
2. Sign in to your Azure account by using this command: `Connect-AzAccount`
spring-apps How To Enterprise Configure Apm Intergration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-intergration-and-ca-certificates.md
The following languages are supported:
The following list shows the required environment variables:

- `connection-string`
- `sampling-percentage`

Upper-case keys are allowed, and you can also replace `_` with `-`.
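The key-naming rule above (upper-case keys allowed, `_` interchangeable with `-`) can be sketched as a small normalization helper. This is illustrative only; `toBindingKey` is a hypothetical name, not part of the Azure Spring Apps tooling:

```javascript
// Hypothetical helper: normalize an APM binding key so that
// CONNECTION_STRING, connection_string, and connection-string
// all map to the same canonical form.
function toBindingKey(key) {
  return key.toLowerCase().replace(/_/g, "-");
}

console.log(toBindingKey("CONNECTION_STRING"));   // connection-string
console.log(toBindingKey("SAMPLING_PERCENTAGE")); // sampling-percentage
```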
az spring build-service builder buildpack-binding delete \
## Next steps -- [Azure Spring Apps](index.yml)
+- [Azure Spring Apps](index.yml)
spring-apps How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-setup-autoscale.md
To follow these procedures, you need:
## Navigate to the Autoscale page in the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Go to the Azure Spring Apps **Overview** page.
-3. Select the resource group that contains your service.
-4. Select the **Apps** tab under **Settings** in the menu on the left navigation pane.
-5. Select the application for which you want to set up Autoscale. In this example, select the application named **demo**. You should then see the application's **Overview** page.
-6. Go to the **Scale out** tab under **Settings** in the menu on the left navigation pane.
+1. Go to the Azure Spring Apps **Overview** page.
+1. Select the **Apps** tab under **Settings** in the menu on the left navigation pane.
+1. Select the application for which you want to set up Autoscale. In this example, select the application named **demo**. You should then see the application's **Overview** page.
+1. Go to the **Scale out** tab under **Settings** in the menu on the left navigation pane.
## Set up Autoscale settings for your application in the Azure portal
You can also set Autoscale modes using the Azure CLI. The following commands cre
```azurecli az monitor autoscale create \
- --resource-group demo-rg \
- --name demo-setting \
- --resource /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourcegroups/demo-rg/providers/Microsoft.AppPlatform/Spring/autoscale/apps/demo/deployments/default \
+ --resource-group <resource-group-name> \
+ --name <autoscale-setting-name> \
+ --resource /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>/apps/<app-name>/deployments/<deployment-name> \
--min-count 1 \ --max-count 5 \ --count 1
You can also set Autoscale modes using the Azure CLI. The following commands cre
```azurecli az monitor autoscale rule create \
- --resource-group demo-rg \
- --autoscale-name demo-setting \
+ --resource-group <resource-group-name> \
+ --autoscale-name <autoscale-setting-name> \
--scale out 1 \ --cooldown 1 \
- --condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default"
+ --condition "tomcat.global.request.total.count > 100 avg 1m where AppName == <app-name> and Deployment == <deployment-name>"
``` For information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Apps](./concept-metrics.md).
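The condition in the rule above (`tomcat.global.request.total.count > 100 avg 1m`) compares a one-minute average of a metric against a threshold. As a rough sketch of that averaging semantics only (this is not how Azure Monitor is implemented; `ruleFires` is an illustrative name):

```javascript
// Illustrative only: decide whether a scale-out rule fires, given
// metric samples collected over the evaluation window.
function ruleFires(samples, threshold) {
  if (samples.length === 0) return false;
  const avg = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  return avg > threshold; // mirrors "metric > 100 avg 1m"
}

console.log(ruleFires([90, 120, 130], 100)); // true  (average is about 113)
console.log(ruleFires([50, 60], 100));       // false
```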
spring-apps Quickstart Monitor End To End Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-monitor-end-to-end-enterprise.md
Last updated 05/31/2022
-# Quickstart: Monitor application end-to-end
+# Quickstart: Monitor applications end-to-end
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
spring-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quotas.md
The following table defines limits for the pricing tiers in Azure Spring Apps.
| Memory | per app instance | 2 GB | 8 GB | 32 GB | 4 GB | | Azure Spring Apps service instances | per region per subscription | 10 | 10 | 10 | 10 | | Total app instances | per Azure Spring Apps service instance | 25 | 500 | 500 | 160 |
-| Custom Domains | per Azure Spring Apps service instance | 0 | 25 | 25 | 25 |
+| Custom Domains | per Azure Spring Apps service instance | 0 | 500 | 500 | 500 |
| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | 50 GB/app x 10 apps | Not applicable | | Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | | Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
static-web-apps Bitbucket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/bitbucket.md
Now that the repository is created, you can create a static web app from the Azu
- pipe: microsoft/azure-static-web-apps-deploy:main variables: APP_LOCATION: '$BITBUCKET_CLONE_DIR/Client'
- OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/wwwroot'
+ OUTPUT_LOCATION: 'wwwroot'
API_TOKEN: $deployment_token ```
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-azure-resource-manager.md
In this tutorial, you learn to:
- [Install Azure CLI on Windows OS](/cli/azure/install-azure-cli-windows) - [Install Azure CLI on Linux OS](/cli/azure/install-azure-cli-linux) - [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
- - [Install Azure PowerShell](/powershell/azure/install-az-ps)
+ - [Install Azure PowerShell](/powershell/azure/install-azure-powershell)
## Create a GitHub personal access token
storage-mover Project Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/project-manage.md
The first step in defining a migration job is the creation of a project resource
Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery -Force ```
- The [Install Azure PowerShell](/powershell/azure/install-az-ps) article has more details.
+ The [Install Azure PowerShell](/powershell/azure/install-azure-powershell) article has more details.
You'll need to supply values for the required `-Name`, `-ResourceGroupName`, and `-StorageMoverName` parameters. The `-Description` parameter is optional.
storage-mover Storage Mover Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/storage-mover-create.md
Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery -F
```
-The [Install Azure PowerShell](/powershell/azure/install-az-ps) article has more details.
+The [Install Azure PowerShell](/powershell/azure/install-azure-powershell) article has more details.
To deploy a storage mover resource, you'll need to supply values for the required `-Name`, `-ResourceGroupName`, and `-Region` parameters. The `-Description` parameter is optional.
storage Blob Containers Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-powershell.md
This how-to article explains how to work with both individual and multiple stora
- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -- Azure PowerShell module Az, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- Azure PowerShell module Az, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
You'll need to obtain authorization to an Azure subscription before you can use the examples in this article. Authorization can occur by authenticating with an Azure Active Directory (Azure AD) account or using a shared key. The examples in this article use Azure AD authentication in conjunction with context objects. Context objects encapsulate your Azure AD credentials and pass them on subsequent data operations, eliminating the need to reauthenticate.
storage Blob Inventory How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory-how-to.md
You can add, edit, or remove a policy by using the Azure PowerShell module.
1. Open a Windows PowerShell command window.
-2. Make sure that you have the latest Azure PowerShell module. See [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+2. Make sure that you have the latest Azure PowerShell module. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
Blob storage supports block blobs, append blobs, and page blobs. Block blobs are
- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -- Azure PowerShell module `Az`, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+- Azure PowerShell module `Az`, which is the recommended PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
### Configure a context object to encapsulate credentials
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
ACL inheritance is already available for new child items that are created under
Install-Module Az.Storage -Repository PSGallery -Force ```
- For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+ For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
## Connect to the account
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
using System.IO;
## Connect to the account
-To use the snippets in this article, you'll need to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account.
+To use the snippets in this article, you need to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account.
### Connect by using Azure Active Directory (Azure AD)
The following code example shows how to list deleted paths and restore a soft-de
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_RestoreDirectory":::
-If you rename the directory that contains the soft-deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft-deleted items.
+If you rename the directory that contains the soft-deleted items, those items become disconnected from the directory. If you want to restore those items, you have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you receive an error when you attempt to restore those soft-deleted items.
## Upload a file to a directory
This example prints the names of each file that is located in a directory named
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_ListFilesInDirectory":::
+## Create a user delegation SAS for a directory
+
+To work with the code examples in this section, add the following `using` directive:
+
+```csharp
+using Azure.Storage.Sas;
+```
+
+The following code example shows how to generate a user delegation SAS for a directory when a hierarchical namespace is enabled for the storage account:
++
+The following example tests the user delegation SAS created in the previous example from a simulated client application. If the SAS is valid, the client application is able to list file paths for this directory. If the SAS is invalid (for example, the SAS is expired), the Storage service returns error code 403 (Forbidden).
++
+To learn more about creating a user delegation SAS, see [Create a user delegation SAS with .NET](storage-blob-user-delegation-sas-create-dotnet.md).
+
+## Create a service SAS for a directory
+
+In a storage account with a hierarchical namespace enabled, you can create a service SAS for a directory. To create the service SAS, make sure you have installed version 12.5.0 or later of the [Azure.Storage.Files.DataLake](https://www.nuget.org/packages/Azure.Storage.Files.DataLake/) package.
+
+The following example shows how to create a service SAS for a directory:
++
+To learn more about creating a service SAS, see [Create a service SAS with .NET](sas-service-create-dotnet.md).
+ ## See also - [API reference documentation](/dotnet/api/azure.storage.files.datalake)
storage Data Lake Storage Directory File Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-powershell.md
To learn about how to get, set, and update the access control lists (ACL) of dir
Install-Module Az.Storage -Repository PSGallery -Force ```
- For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+ For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
## Connect to the account
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
After you have configured object replication, the Azure portal displays the repl
### [PowerShell](#tab/powershell)
-To create a replication policy with PowerShell, first install version [2.5.0](https://www.powershellgallery.com/packages/Az.Storage/2.5.0) or later of the Az.Storage PowerShell module. For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-az-ps).
+To create a replication policy with PowerShell, first install version [2.5.0](https://www.powershellgallery.com/packages/Az.Storage/2.5.0) or later of the Az.Storage PowerShell module. For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-azure-powershell).
The following example shows how to create a replication policy first on the destination account, and then on the source account. Remember to replace values in angle brackets with your own values:
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
To enable blob soft delete for your storage account by using the Azure portal, f
Install-Module Az.Storage -Repository PsGallery -RequiredVersion 3.7.1-preview -AllowClobber -AllowPrerelease -Force ```
- For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
+ For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell)
3. Obtain storage account authorization by using either a storage account key, a connection string, or Azure Active Directory (Azure AD). For more information, see [Connect to the account](data-lake-storage-directory-file-acl-powershell.md#connect-to-the-account).
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
# Change feed support in Azure Blob Storage
-The purpose of the change feed is to provide transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides **ordered**, **guaranteed**, **durable**, **immutable**, **read-only** log of these changes. Client applications can read these logs at any time, either in streaming or in batch mode. The change feed enables you to build efficient and scalable solutions that process change events that occur in your Blob Storage account at a low cost.
+The purpose of the change feed is to provide transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides an **ordered**, **guaranteed**, **durable**, **immutable**, **read-only** log of these changes. Client applications can read these logs at any time, either in streaming or in batch mode. Each change generates exactly one transaction log entry, so you won't have to manage multiple log entries for the same change. The change feed enables you to build efficient and scalable solutions that process change events that occur in your Blob Storage account at a low cost.
To learn how to process records in the change feed, see [Process change feed in Azure Blob Storage](storage-blob-change-feed-how-to.md).
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
+
+ Title: Copy a blob with asynchronous scheduling using JavaScript
+
+description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the JavaScript client library.
+++ Last updated : 05/08/2023+++
+ms.devlang: javascript
+++
+# Copy a blob with asynchronous scheduling using JavaScript
+
+This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation.
+
+The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Copy Blob](/rest/api/storageservices/copy-blob#authorization)
+ - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization)
+- The package **@azure/storage-blob** installed to your project directory. If you're using `DefaultAzureCredential` for authorization, you also need **@azure/identity**. To learn more about setting up your project, see [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md).
++
+## Copy a blob with asynchronous scheduling
+
+This section gives an overview of methods provided by the Azure Storage client library for JavaScript to perform a copy operation with asynchronous scheduling.
+
+The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begin an asynchronous copy of data from the source blob:
+
+- [BlobClient.beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl)
+
+The `beginCopyFromURL` method returns a long-running operation poller that allows you to wait indefinitely until the copy is completed.
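As a simplified model of the poller pattern described above (this sketch is not the `@azure/storage-blob` implementation; `makePoller` and its status strings are assumptions for illustration), the idea is to repeatedly check an operation's state until it reports completion:

```javascript
// Simplified sketch of a long-running-operation poller.
// getStatus is any async function returning "pending" or "success".
function makePoller(getStatus, intervalMs = 10) {
  return {
    async pollUntilDone() {
      for (;;) {
        const status = await getStatus();
        if (status !== "pending") return status;
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
    },
  };
}

// Usage: simulate an operation that completes after three polls.
let checks = 0;
const poller = makePoller(async () => (++checks < 3 ? "pending" : "success"));
poller.pollUntilDone().then((s) => console.log(s)); // success
```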
+
+## Copy a blob from a source within Azure
+
+If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. For an alternative synchronous copy operation, see [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md).
+
+If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
+
+The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
++
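The ETag precondition described above can be simulated in a few lines (illustrative only; `startCopy` and `finishCopy` are hypothetical stand-ins for the service behavior, not SDK methods): the copy captures the source ETag when it starts, and the operation fails if the source's ETag differs when the copy finishes:

```javascript
// Illustrative simulation of the Copy Blob ETag precondition.
function startCopy(sourceBlob) {
  return { etagAtStart: sourceBlob.etag }; // service records the source ETag
}

function finishCopy(copyState, sourceBlob) {
  // If the source changed (new ETag) while the copy was pending, fail.
  if (copyState.etagAtStart !== sourceBlob.etag) {
    return "failed";
  }
  return "success";
}

const blob = { etag: "0x1" };
const copy = startCopy(blob);
blob.etag = "0x2"; // source modified mid-copy
console.log(finishCopy(copy, blob)); // failed
```

Leasing the source blob, as the example above does, prevents this failure mode by blocking writes from other clients for the duration of the copy.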
+> [!NOTE]
+> User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key. To create a user delegation SAS token, the Azure AD security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
+
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
+## Check the status of a copy operation
+
+To check the status of an asynchronous `Copy Blob` operation, you can poll the [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties) method and check the copy status.
+
+The following code example shows how to check the status of a pending copy operation:
++
+## Abort a copy operation
+
+Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
+
+To abort a pending copy operation, call the following operation:
+
+- [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl)
+
+This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
++
+## Resources
+
+To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
+- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js)
+
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
+
+ Title: Copy a blob with asynchronous scheduling using TypeScript
+
+description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the client library for JavaScript and TypeScript.
+++ Last updated : 05/08/2023+++
+ms.devlang: typescript
+++
+# Copy a blob with asynchronous scheduling using TypeScript
+
+This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation.
+
+The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Copy Blob](/rest/api/storageservices/copy-blob#authorization)
+ - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization)
+- The package **@azure/storage-blob** installed to your project directory. If you're using `DefaultAzureCredential` for authorization, you also need **@azure/identity**. To learn more about setting up your project, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).
++
+## Copy a blob with asynchronous scheduling
+
+This section gives an overview of methods provided by the Azure Storage client library for JavaScript and TypeScript to perform a copy operation with asynchronous scheduling.
+
+The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begin an asynchronous copy of data from the source blob:
+
+- [BlobClient.beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl)
+
+The `beginCopyFromURL` method returns a long-running operation poller that allows you to wait until the copy completes.
+
+## Copy a blob from a source within Azure
+
+If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. For an alternative synchronous copy operation, see [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md).
+
+If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
+
+The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
++
+> [!NOTE]
+> User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key. To create a user delegation SAS token, the Azure AD security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization).
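The lease portion of the scenario above can be sketched as follows (shown in plain JavaScript; the helper name is hypothetical). `sourceBlobClient` and `destinationBlobClient` are assumed to behave like `BlobClient` from **@azure/storage-blob**, and `sourceUrlWithSas` is a source blob URL with an appended SAS token:

```javascript
// Hypothetical sketch: lease the source blob for the duration of a copy so
// another client can't modify it while the copy is in flight.
async function copyWithSourceLease(sourceBlobClient, destinationBlobClient, sourceUrlWithSas) {
  // Acquire an infinite lease (duration -1) on the source blob.
  const leaseClient = sourceBlobClient.getBlobLeaseClient();
  await leaseClient.acquireLease(-1);

  try {
    const poller = await destinationBlobClient.beginCopyFromURL(sourceUrlWithSas);
    return await poller.pollUntilDone();
  } finally {
    // Always release the lease, even if the copy fails.
    await leaseClient.releaseLease();
  }
}
```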
+
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
+## Check the status of a copy operation
+
+To check the status of an asynchronous `Copy Blob` operation, you can poll the [getProperties](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-getproperties) method and check the copy status.
+
+The following code example shows how to check the status of a pending copy operation:
++
+## Abort a copy operation
+
+Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
+
+To abort a pending copy operation, call the following operation:
+
+- [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl)
+
+This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
++
+## Resources
+
+To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
+- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/copy-blob.ts)
+
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
description: Learn how to copy a blob in Azure Storage by using the JavaScript c
Previously updated : 11/30/2022 Last updated : 05/08/2023
# Copy a blob with JavaScript
-This article shows how to copy a blob in a storage account using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). It also shows how to abort an asynchronous copy operation.
+This article provides an overview of copy operations using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme).
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create-javascript.md).
+## About copy operations
-## About copying blobs
+Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme).
-A copy operation can perform any of the following actions:
+| REST API operation | When to use | Client library methods | Guidance |
+| | | | |
+| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) | [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md) |
+| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl) | [Copy a blob from a source object URL with JavaScript](storage-blob-copy-url-javascript.md) |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) | [Copy a blob with asynchronous scheduling using JavaScript](storage-blob-copy-async-javascript.md) |
-- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or can be a new blob created by the copy operation.
-- Copy a source blob to a destination blob with the same name, effectively replacing the destination blob. Such a copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.
-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.
-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.
-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation:
-The source blob for a copy operation may be one of the following types:
-- Block blob
-- Append blob
-- Page blob
-- Blob snapshot
-- Blob version
+- [appendBlockFromURL](/javascript/api/@azure/storage-blob/appendblobclient#@azure-storage-blob-appendblobclient-appendblockfromurl)
-If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. The following client library method wraps this operation:
-The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. One way to enforce this requirement is to use a blob lease, as shown in the code example.
+- [uploadPagesFromURL](/javascript/api/@azure/storage-blob/pageblobclient#@azure-storage-blob-pageblobclient-uploadpagesfromurl)
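The page-range approach above can be sketched as follows. The helper name and range size are assumptions; `pageBlobClient` is assumed to behave like `PageBlobClient` from **@azure/storage-blob**:

```javascript
// Hypothetical sketch: fill a page blob by writing 512-byte aligned ranges
// read from a source URL (Put Page From URL), one range at a time.
async function copyPagesFromUrl(pageBlobClient, sourceUrl, totalSize, rangeSize = 4 * 1024 * 1024) {
  // Page blob offsets and counts must be multiples of 512 bytes.
  if (totalSize % 512 !== 0) throw new Error("size must be 512-byte aligned");

  for (let offset = 0; offset < totalSize; offset += rangeSize) {
    const count = Math.min(rangeSize, totalSize - offset);
    // Read [offset, offset + count) from the source and write it to the
    // same range of the destination page blob.
    await pageBlobClient.uploadPagesFromURL(sourceUrl, offset, offset, count);
  }
}
```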
-The entire source blob or file is always copied. Copying a range of bytes or set of blocks isn't supported. When a blob is copied, its system properties are copied to the destination blob with the same values.
+## Client library resources
-## Copy a blob
-
-To copy a blob, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object) then use the [BlobClient.beginCopyFromURL method](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl). The following code example gets a [BlobClient](/javascript/api/@azure/storage-blob/blobclient) representing a previously created blob and copies it to a new blob:
-
-```javascript
-async function copyBlob(
- blobServiceClient,
- sourceBlobContainerName,
- sourceBlobName,
- destinationBlobContainerName,
- destinationBlobName) {
-
- // create container clients
- const sourceContainerClient = blobServiceClient.getContainerClient(sourceBlobContainerName);
- const destinationContainerClient = blobServiceClient.getContainerClient(destinationBlobContainerName);
-
- // create blob clients
- const sourceBlobClient = await sourceContainerClient.getBlobClient(sourceBlobName);
- const destinationBlobClient = await destinationContainerClient.getBlobClient(destinationBlobName);
-
- // start copy
- const copyPoller = await destinationBlobClient.beginCopyFromURL(sourceBlobClient.url);
- console.log('start copy from A to B');
-
- // wait until done
- await copyPoller.pollUntilDone();
-}
-```
-
-## Cancel a copy operation
-
-When you abort a copy operation, the destination blob's property, [copyStatus](/javascript/api/@azure/storage-blob/blobbegincopyfromurlresponse#properties), is set to [aborted](/javascript/api/@azure/storage-blob/copystatustype).
-
-```javascript
-async function copyThenAbortBlob(
- blobServiceClient,
- sourceBlobContainerName,
- sourceBlobName,
- destinationBlobContainerName,
- destinationBlobName) {
-
- // create container clients
- const sourceContainerClient = blobServiceClient.getContainerClient(sourceBlobContainerName);
- const destinationContainerClient = blobServiceClient.getContainerClient(destinationBlobContainerName);
-
- // create blob clients
- const sourceBlobClient = await sourceContainerClient.getBlobClient(sourceBlobName);
- const destinationBlobClient = await destinationContainerClient.getBlobClient(destinationBlobName);
-
- // start copy
- const copyPoller = await destinationBlobClient.beginCopyFromURL(sourceBlobClient.url);
- console.log('start copy from A to C');
-
- // cancel operation after starting it -
- // sample file may be too small to be canceled.
- try {
- await copyPoller.cancelOperation();
- console.log('request to cancel copy from A to C');
-
- // calls to get the result now throw PollerCancelledError
- await copyPoller.getResult();
- } catch (err) {
- if (err.name === 'PollerCancelledError') {
- console.log('The copy was cancelled.');
- }
- }
-}
-```
-
-## Abort a copy operation
-
-Aborting a copy operation, with [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl) results in a destination blob of zero length. However, the metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods. The final blob will be committed when the copy completes.
-
-## Resources
-
-To learn more about copying blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for copying blobs use the following REST API operations:
-- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
-- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API)
-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
-
-### Code samples
-- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js)
-
+- [Client library reference documentation](/javascript/api/@azure/storage-blob)
+- [Client library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)
+- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
description: Learn how to copy a blob with TypeScript in Azure Storage by using
Previously updated : 03/21/2023 Last updated : 05/08/2023
# Copy a blob with TypeScript
-This article shows how to copy a blob in a storage account using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). It also shows how to abort an asynchronous copy operation.
+This article provides an overview of copy operations using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme).
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with TypeScript](storage-blob-container-create-typescript.md).
+## About copy operations
-## About copying blobs
+Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme).
-A copy operation can perform any of the following actions:
+| REST API operation | When to use | Client library methods | Guidance |
+| | | | |
+| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) | [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md) |
+| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl) | [Copy a blob from a source object URL with TypeScript](storage-blob-copy-url-typescript.md) |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) | [Copy a blob with asynchronous scheduling using TypeScript](storage-blob-copy-async-typescript.md) |
-- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or can be a new blob created by the copy operation.
-- Copy a source blob to a destination blob with the same name, effectively replacing the destination blob. Such a copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.
-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.
-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.
-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation:
-The source blob for a copy operation may be one of the following types:
-- Block blob
-- Append blob
-- Page blob
-- Blob snapshot
-- Blob version
+- [appendBlockFromURL](/javascript/api/@azure/storage-blob/appendblobclient#@azure-storage-blob-appendblobclient-appendblockfromurl)
-If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. The following client library method wraps this operation:
-The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. One way to enforce this requirement is to use a blob lease, as shown in the code example.
+- [uploadPagesFromURL](/javascript/api/@azure/storage-blob/pageblobclient#@azure-storage-blob-pageblobclient-uploadpagesfromurl)
-The entire source blob or file is always copied. Copying a range of bytes or set of blocks isn't supported. When a blob is copied, its system properties are copied to the destination blob with the same values.
+## Client library resources
-## Copy a blob
-
-To copy a blob, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then use the [BlobClient.beginCopyFromURL method](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl). The following code example gets a [BlobClient](/javascript/api/@azure/storage-blob/blobclient) representing a previously created blob and copies it to a new blob:
--
-## Cancel a copy operation
-
-When you abort a copy operation, the destination blob's property, [copyStatus](/javascript/api/@azure/storage-blob/blobbegincopyfromurlresponse#properties), is set to [aborted](/javascript/api/@azure/storage-blob/copystatustype).
--
-## Abort a copy operation
-
-Aborting a copy operation, with [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl) results in a destination blob of zero length. However, the metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods. The final blob will be committed when the copy completes.
-
-## Resources
-
-To learn more about copying blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
-
-### REST API operations
-
-The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for copying blobs use the following REST API operations:
-- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
-- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API)
-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
-
-### Code samples
-- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-copy.ts)
-
+- [Client library reference documentation](/javascript/api/@azure/storage-blob)
+- [Client library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)
+- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
+
+ Title: Copy a blob from a source object URL with JavaScript
+
+description: Learn how to copy a blob from a source object URL in Azure Storage by using the JavaScript client library.
+++ Last updated : 05/08/2023+++
+ms.devlang: javascript
+++
+# Copy a blob from a source object URL with JavaScript
+
+This article shows how to copy a blob from a source object URL using the [Azure Storage client library for JavaScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL.
+
+The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using JavaScript](storage-blob-copy-async-javascript.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization)
+ - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization)
+- The package **@azure/storage-blob** installed to your project directory. If you're using `DefaultAzureCredential` for authorization, you also need **@azure/identity**. To learn more about setting up your project, see [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md).
++
+## Copy a blob from a source object URL
+
+This section gives an overview of methods provided by the Azure Storage client library for JavaScript to perform a copy operation from a source object URL.
+
+The following method wraps the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and creates a new block blob where the contents of the blob are read from a given URL:
+
+- [BlockBlobClient.syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl)
+
+This method is preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
+
+For large objects, you may choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
+
+- [BlockBlobClient.stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl)
+
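The block-staging approach above can be sketched as follows. The helper name and block size are assumptions; `blockBlobClient` is assumed to behave like `BlockBlobClient` from **@azure/storage-blob**, and `sourceSize` is the source object's size in bytes:

```javascript
// Hypothetical sketch: copy a large source object by staging fixed-size
// blocks from the source URL (Put Block From URL), then committing the
// staged blocks as the blob's content (Put Block List).
async function copyInBlocks(blockBlobClient, sourceUrl, sourceSize, blockSize = 4 * 1024 * 1024) {
  const blockIds = [];

  for (let offset = 0, i = 0; offset < sourceSize; offset += blockSize, i++) {
    // Block IDs must be base64-encoded and the same length for every block.
    const blockId = Buffer.from(String(i).padStart(6, "0")).toString("base64");
    const count = Math.min(blockSize, sourceSize - offset);

    // Stage one block read from a range of the source object.
    await blockBlobClient.stageBlockFromURL(blockId, sourceUrl, offset, count);
    blockIds.push(blockId);
  }

  await blockBlobClient.commitBlockList(blockIds);
  return blockIds.length;
}
```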
+## Copy a blob from a source within Azure
+
+If you're copying a blob from a source within Azure, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
+
+The following example shows a scenario for copying from a source blob within Azure:
++
+The [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) method can also accept a [BlockBlobSyncUploadFromURLOptions](/javascript/api/@azure/storage-blob/blockblobsyncuploadfromurloptions) parameter to specify further options for the operation.
+
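A minimal sketch of a synchronous copy with an options object follows. The helper name and the specific condition values are assumptions; `destinationBlobClient` is assumed to behave like `BlockBlobClient` from **@azure/storage-blob**:

```javascript
// Hypothetical sketch: synchronous server-side copy with Put Blob From URL.
// When the returned promise resolves, the destination blob is fully
// committed; no polling is required.
async function copyFromSourceUrl(destinationBlobClient, sourceBlobUrl) {
  return destinationBlobClient.syncUploadFromURL(sourceBlobUrl, {
    // Assumed example condition: fail the copy if the source was modified
    // after this point in time.
    sourceConditions: { ifUnmodifiedSince: new Date("2024-01-01T00:00:00Z") },
  });
}
```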
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.js)
+
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
+
+ Title: Copy a blob from a source object URL with TypeScript
+
+description: Learn how to copy a blob from a source object URL in Azure Storage by using the client library for JavaScript and TypeScript.
+++ Last updated : 05/08/2023+++
+ms.devlang: typescript
+++
+# Copy a blob from a source object URL with TypeScript
+
+This article shows how to copy a blob from a source object URL using the [Azure Storage client library for JavaScript and TypeScript](/javascript/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL.
+
+The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using TypeScript](storage-blob-copy-async-typescript.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization)
+ - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization)
+- The package **@azure/storage-blob** installed to your project directory. If you're using `DefaultAzureCredential` for authorization, you also need **@azure/identity**. To learn more about setting up your project, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).
++
+## Copy a blob from a source object URL
+
+This section gives an overview of methods provided by the Azure Storage client library for JavaScript and TypeScript to perform a copy operation from a source object URL.
+
+The following method wraps the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and creates a new block blob where the contents of the blob are read from a given URL:
+
+- [BlockBlobClient.syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl)
+
+These methods are preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
+
+For large objects, you may choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
+
+- [BlockBlobClient.stageBlockFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-stageblockfromurl)
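
A hedged sketch of that block-wise flow: stage fixed-size ranges from the source URL under equal-length base64 block IDs, then commit the list. The client parameter is typed structurally; in practice it would be a `BlockBlobClient`, and the 4 MiB block size is an illustrative choice, not a requirement.

```typescript
// Block IDs must be base64 strings of equal length within one blob.
function makeBlockId(index: number): string {
  return Buffer.from(String(index).padStart(6, "0")).toString("base64");
}

interface BlockStagingClient {
  stageBlockFromURL(blockId: string, sourceUrl: string, offset: number, count: number): Promise<unknown>;
  commitBlockList(blockIds: string[]): Promise<unknown>;
}

// Stage the source in fixed-size ranges, then commit the block list to
// create the destination blob.
async function copyInBlocks(
  client: BlockStagingClient,
  sourceUrl: string,
  totalSize: number,
  blockSize = 4 * 1024 * 1024 // illustrative block size
): Promise<string[]> {
  const blockIds: string[] = [];
  for (let offset = 0, i = 0; offset < totalSize; offset += blockSize, i++) {
    const blockId = makeBlockId(i);
    blockIds.push(blockId);
    await client.stageBlockFromURL(blockId, sourceUrl, offset, Math.min(blockSize, totalSize - offset));
  }
  await client.commitBlockList(blockIds);
  return blockIds;
}
```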
+
+## Copy a blob from a source within Azure
+
+If you're copying a blob from a source within Azure, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
+
+The following example shows a scenario for copying from a source blob within Azure:
++
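When the source blob isn't publicly readable, the source URL passed to `syncUploadFromURL` must itself carry the authorization, for example an appended SAS token. The helper below is a small sketch of that URL construction; the token values in the usage note are placeholders, not real signatures.

```typescript
// Append a SAS token to a blob URL, tolerating a leading "?" on the token
// and an existing query string on the URL.
function withSasToken(blobUrl: string, sasToken: string): string {
  const token = sasToken.startsWith("?") ? sasToken.slice(1) : sasToken;
  const separator = blobUrl.includes("?") ? "&" : "?";
  return `${blobUrl}${separator}${token}`;
}
```

The result of `withSasToken(sourceBlobUrl, sourceSasToken)` is what you'd pass as the source URL; the SAS must grant at least read permission on the source blob.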
+The [syncUploadFromURL](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-syncuploadfromurl) method can also accept a [BlockBlobSyncUploadFromURLOptions](/javascript/api/@azure/storage-blob/blockblobsyncuploadfromurloptions) parameter to specify further options for the operation.
+
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for JavaScript and TypeScript, see the following resources.
+
+### REST API operations
+
+The Azure SDK for JavaScript and TypeScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar language paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/copy-blob-put-from-url.ts)
+
storage Storage Blob Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart-powershell.md
When you're finished, you see that the event data has been sent to the web app.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
## Sign in to Azure
storage Storage Blob Scalable App Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-create-vm.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
storage Storage Blob Static Website How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-how-to.md
You can enable static website hosting by using the Azure PowerShell module.
Get-InstalledModule -Name Az -AllVersions | select Name,Version
```
- If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+ If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
To check which version of the Az.Storage module is installed, run the following
Get-Module -ListAvailable -Name Az.Storage -Refresh
```
-For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-az-ps).
+For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-azure-powershell).
## Sign in to Azure PowerShell with Azure AD
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
You will also need the Storage Blob Data Contributor role to read, write, and de
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This quickstart requires the Azure PowerShell module Az version 0.7 or later. Run `Get-InstalledModule -Name Az -AllVersions | select Name,Version` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+This quickstart requires the Azure PowerShell module Az version 0.7 or later. Run `Get-InstalledModule -Name Az -AllVersions | select Name,Version` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
[!INCLUDE [storage-quickstart-tutorial-intro-include-powershell](../../../includes/storage-quickstart-tutorial-intro-include-powershell.md)]
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
To learn more about these capabilities and evaluate the impact of this upgrade o
1. Open a Windows PowerShell command window.
-2. Make sure that you have the latest Azure PowerShell module. See [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+2. Make sure that you have the latest Azure PowerShell module. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
storage Account Encryption Key Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/account-encryption-key-create.md
To create a storage account that relies on the account encryption key with the A
# [PowerShell](#tab/powershell)
-To use PowerShell to create a storage account that relies on the account encryption key, make sure you have installed the Azure PowerShell module, version 3.4.0 or later. For more information, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+To use PowerShell to create a storage account that relies on the account encryption key, make sure you have installed the Azure PowerShell module, version 3.4.0 or later. For more information, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
Next, create a general-purpose v2 storage account by calling the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command, with the appropriate parameters:
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
To verify that infrastructure encryption is enabled for a storage account with t
# [PowerShell](#tab/powershell)
-To use PowerShell to create a storage account with infrastructure encryption enabled, make sure you have installed the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage), version 2.2.0 or later. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+To use PowerShell to create a storage account with infrastructure encryption enabled, make sure you have installed the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage), version 2.2.0 or later. For more information, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
Next, create a general-purpose v2 or premium block blob storage account by calling the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command. Include the `-RequireInfrastructureEncryption` option to enable infrastructure encryption.
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
Next, update your code to use passwordless connections.
credential);
```
+## [Go](#tab/go)
+
+1. To use `DefaultAzureCredential` in a Go application, install the `azidentity` module:
+
+ ```bash
+ go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```go
+ import (
+     "fmt"
+
+     "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+     "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
+ )
+ ```
+
+1. Identify the locations in your code that create a `Client` instance to connect to Azure Blob Storage. Update your code to match the following example:
+
+ ```go
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ // handle error
+ }
+
+ serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net", storageAccountName)
+ client, err := azblob.NewClient(serviceURL, cred, nil)
+ if err != nil {
+ // handle error
+ }
+ ```
+ ## [Java](#tab/java)
+
+ 1. To use `DefaultAzureCredential` in a Java application, install the `azure-identity` package via one of the following approaches:
Next, update your code to use passwordless connections.
from azure.identity import DefaultAzureCredential
```
-1. Identify the locations in your code that create a `BlobServiceClient` to connect to Azure Blob Storage. Update your code to match the following example:
+1. Identify the locations in your code that create a `BlobServiceClient` object to connect to Azure Blob Storage. Update your code to match the following example:
```python
credential = DefaultAzureCredential()
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
None.
# [PowerShell](#tab/azure-powershell)
-To create an Azure storage account with PowerShell, make sure you have installed the latest [Azure Az PowerShell module](https://www.powershellgallery.com/packages/Az). See [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+To create an Azure storage account with PowerShell, make sure you have installed the latest [Azure Az PowerShell module](https://www.powershellgallery.com/packages/Az). See [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
# [Azure CLI](#tab/azure-cli)
To create an account with Azure DNS zone endpoints (preview), follow these steps
1. Close and reopen the PowerShell console.
-1. Install version [4.4.2-preview](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview) or later of the Az.Storage PowerShell module. You may need to uninstall other versions of the PowerShell module. For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-az-ps).
+1. Install version [4.4.2-preview](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview) or later of the Az.Storage PowerShell module. You may need to uninstall other versions of the PowerShell module. For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-azure-powershell).
```azurepowershell
Install-Module Az.Storage -Repository PsGallery -RequiredVersion 4.4.2-preview -AllowClobber -AllowPrerelease -Force
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
To upgrade a general-purpose v1 or Blob storage account to a general-purpose v2
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-To upgrade a general-purpose v1 account to a general-purpose v2 account using PowerShell, first update PowerShell to use the latest version of the **Az.Storage** module. See [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps) for information about installing PowerShell.
+To upgrade a general-purpose v1 account to a general-purpose v2 account using PowerShell, first update PowerShell to use the latest version of the **Az.Storage** module. See [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell) for information about installing PowerShell.
Next, call the following command to upgrade the account, substituting your resource group name, storage account name, and desired account access tier.
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
To initiate an account failover from the Azure portal, follow these steps:
## [PowerShell](#tab/azure-powershell)
-To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
To initiate an account failover from PowerShell, call the following command:
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You must set the default rule to **deny**, or network rules have no effect. Howe
### [PowerShell](#tab/azure-powershell)
-1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
2. Choose which type of public network access you want to allow:
If you want to enable access to your storage account from a virtual network or s
#### [PowerShell](#tab/azure-powershell)
-1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
2. List virtual network rules:
You can manage IP network rules for storage accounts through the Azure portal, P
#### [PowerShell](#tab/azure-powershell)
-1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
2. List IP network rules:
To learn more about working with storage analytics, see [Use Azure Storage analy
#### [PowerShell](#tab/azure-powershell)
-1. Install [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
2. Display the exceptions for the storage account's network rules:
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
To use Azure Storage in one of the independent clouds, you connect to that cloud
- You determine and use the available regions.
- You use the correct endpoint suffix, which is different from Azure Public.
-The examples require Azure PowerShell module Az version 0.7 or later. In a PowerShell window, run `Get-Module -ListAvailable Az` to find the version. If nothing is listed, or you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+The examples require Azure PowerShell module Az version 0.7 or later. In a PowerShell window, run `Get-Module -ListAvailable Az` to find the version. If nothing is listed, or you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
## Log in to Azure
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
To require secure transfer programmatically, set the *enableHttpsTrafficOnly* pr
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This sample requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+This sample requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
Run `Connect-AzAccount` to create a connection with Azure.
storage Storage Use Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-emulator.md
Some Azure storage client libraries, such as the Xamarin library, only support a
You can also generate a SAS token by using Azure PowerShell. The following example generates a SAS token with full permissions to a blob container:
-1. Install Azure PowerShell if you haven't already (using the latest version of the Azure PowerShell cmdlets is recommended). For installation instructions, see [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+1. Install Azure PowerShell if you haven't already (using the latest version of the Azure PowerShell cmdlets is recommended). For installation instructions, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
2. Open Azure PowerShell and run the following commands, replacing `CONTAINER_NAME` with a name of your choosing:

   ```powershell
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
## Prerequisites

- Have an [Azure Elastic SAN](elastic-san-create.md) with volumes
-- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-az-ps)
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
- Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver

## Limitations
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
This article explains how to deploy and configure an elastic storage area networ
## Prerequisites

-- If you're using Azure PowerShell, install the [latest Azure PowerShell module](/powershell/azure/install-az-ps).
+- If you're using Azure PowerShell, install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell).
- If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli).
    - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
You can manage virtual network rules for volume groups through the Azure portal,
### [PowerShell](#tab/azure-powershell) -- Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
+- Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
- List virtual network rules.
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
We strongly recommend that you read [Planning for an Azure Files deployment](../
> Start-Process -FilePath "ndp48-x86-x64-allos-enu.exe" -ArgumentList "/q /norestart" -Wait
> ```
-7. The Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+7. The Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!NOTE] > The Az.StorageSync module is now installed automatically when you install the Az PowerShell module.
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
This article assumes that:
- You allow domain traffic to the following endpoints, see [Azure service endpoints](../file-sync/file-sync-firewall-and-proxy.md#firewall):

Additionally:
-- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-az-ps).
+- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-azure-powershell).
- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli).

## Create the private endpoints
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
In the following table, we have provided both the size of the namespace as well
### Evaluation cmdlet

Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation cmdlet. This cmdlet checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported operating system version. Its checks cover most but not all of the features mentioned below; we recommend you read through the rest of this section carefully to ensure your deployment goes smoothly.
-The evaluation cmdlet can be installed by installing the Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+The evaluation cmdlet can be installed by installing the Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
#### Usage

You can invoke the evaluation tool in a few different ways: you can perform the system checks, the dataset checks, or both. To perform both the system and dataset checks:
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md
To register a server with a Storage Sync Service, you must first prepare your se
![Server Manager UI with the IE Enhanced Security Configuration highlighted](media/storage-sync-files-server-registration/server-manager-ie-config.png)
-* Ensure that the Azure PowerShell module is installed on your server. If your server is a member of a Failover Cluster, every node in the cluster will require the Az module. More details on how to install the Az module can be found on the [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+* Ensure that the Azure PowerShell module is installed on your server. If your server is a member of a Failover Cluster, every node in the cluster will require the Az module. More details on how to install the Az module can be found on the [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
> [!NOTE] > We recommend using the newest version of the Az PowerShell module to register/unregister a server. If the Az package has been previously installed on this server (and the PowerShell version on this server is 5.* or greater), you can use the `Update-Module` cmdlet to update this package.
storage File Sync Troubleshoot Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-installation.md
To install the Az or AzureRM module on PowerShell 5.1, perform the following ste
1. Type **powershell** from an elevated command prompt and hit enter.
2. Install the latest Az or AzureRM module by following the documentation:
- - [Az module (requires .NET 4.7.2)](/powershell/azure/install-az-ps)
+ - [Az module (requires .NET 4.7.2)](/powershell/azure/install-azure-powershell)
- [AzureRM module](https://go.microsoft.com/fwlink/?linkid=856959)
3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service.
storage Files Troubleshoot Smb Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md
To mitigate this, you have two options: either rotate the service principal pass
#### Option 1: Update the service principal password using PowerShell
-1. Install the latest Az.Storage and AzureAD modules. Use PowerShell 5.1, because currently the AzureAD module doesn't work in PowerShell 7. Azure Cloud Shell won't work in this scenario. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-Az-ps).
+1. Install the latest Az.Storage and AzureAD modules. Use PowerShell 5.1, because currently the AzureAD module doesn't work in PowerShell 7. Azure Cloud Shell won't work in this scenario. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
To install the modules, open PowerShell with elevated privileges and run the following commands:
storage Files Troubleshoot Smb Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-connectivity.md
System error 53 or system error 67 can occur if port 445 outbound communication
To check if your firewall or ISP is blocking port 445, use the [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) tool or `Test-NetConnection` cmdlet.
-To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be installed. See [Install Azure PowerShell module](/powershell/azure/install-Az-ps) for more information. Remember to replace `<your-storage-account-name>` and `<your-resource-group-name>` with the relevant names for your storage account.
+To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be installed. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for more information. Remember to replace `<your-storage-account-name>` and `<your-resource-group-name>` with the relevant names for your storage account.
```azurepowershell
If all SMB clients have closed their open handles on a file/directory and the is
To force a file handle to be closed, use the [Close-AzStorageFileHandle](/powershell/module/az.storage/close-azstoragefilehandle) PowerShell cmdlet.

> [!Note]
-> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
#### Cause 2

A file lease is preventing a file from being modified or deleted. You can check if a file has a file lease with the following PowerShell, replacing `<resource-group>`, `<storage-account>`, `<file-share>`, and `<path-to-file>` with the appropriate values for your environment:
If the SMB clients have closed all open handles and the issue continues to occur
- Use the [Close-AzStorageFileHandle](/powershell/module/az.storage/close-azstoragefilehandle) PowerShell cmdlet to close open handles.

> [!Note]
-> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
To view open handles for a file share, directory or file, use the [`Get-AzStorag
To close open handles for a file share, directory or file, use the [`Close-AzStorageFileHandle`](/powershell/module/az.storage/close-azstoragefilehandle) PowerShell cmdlet. > [!Note]
-> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
<a id="networkerror59"></a> ### ERROR_UNEXP_NET_ERR (59) when doing any operations on a handle
To view open handles for a file share, directory or file, use the [Get-AzStorage
To close open handles for a file share, directory or file, use the [Close-AzStorageFileHandle](/powershell/module/az.storage/close-azstoragefilehandle) PowerShell cmdlet. > [!Note]
-> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+> The `Get-AzStorageFileHandle` and `Close-AzStorageFileHandle` cmdlets are included in Az PowerShell module version 2.4 or later. To install the latest Az PowerShell module, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
The article details the steps to configure a Point-to-Site VPN on Windows (Windo
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Prerequisites-- The most recent version of the Azure PowerShell module. For more information on how to install the Azure PowerShell, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps) and select your operating system. If you prefer to use the Azure CLI on Windows, you may, however the instructions below are presented for Azure PowerShell.
+- The most recent version of the Azure PowerShell module. For more information on how to install Azure PowerShell, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell) and select your operating system. If you prefer to use the Azure CLI on Windows, you may; however, the instructions below are presented for Azure PowerShell.
- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md).
storage Storage Files Enable Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-enable-soft-delete.md
Azure Files offers soft delete for file shares so that you can more easily recov
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Prerequisites-- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-az-ps).
+- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-azure-powershell).
- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli). ## Getting started
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuri
### Prerequisites - If you don't have [.NET Framework 4.7.2 or higher](https://dotnet.microsoft.com/download/dotnet-framework/) installed, install it now. It's required for the AzFilesHybrid module to import successfully.-- Make sure you have [Azure PowerShell](/powershell/azure/install-az-ps) (Az module) and [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage/) installed. You must have at least Az.PowerShell 2.8.0+ and Az.Storage 4.3.0+ to use AzFilesHybrid.
+- Make sure you have [Azure PowerShell](/powershell/azure/install-azure-powershell) (Az module) and [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage/) installed. You must have at least Az.PowerShell 2.8.0+ and Az.Storage 4.3.0+ to use AzFilesHybrid.
- Install the [Active Directory PowerShell](/powershell/module/activedirectory/) module. ### Download AzFilesHybrid module
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
To enable Azure AD DS authentication over SMB with the [Azure portal](https://po
# [PowerShell](#tab/azure-powershell)
-To enable Azure AD DS authentication over SMB with Azure PowerShell, install the latest Az module (2.4 or newer) or the Az.Storage module (1.5 or newer). For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-Az-ps).
+To enable Azure AD DS authentication over SMB with Azure PowerShell, install the latest Az module (2.4 or newer) or the Az.Storage module (1.5 or newer). For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
To create a new storage account, call [New-AzStorageAccount](/powershell/module/az.storage/New-azStorageAccount), and then set the **EnableAzureActiveDirectoryDomainServicesForFile** parameter to **true**. In the following example, remember to replace the placeholder values with your own values. (If you were using the previous preview module, the parameter for enabling the feature is **EnableAzureFilesAadIntegrationForSMB**.)
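A sketch of such a call (all values are placeholders; the SKU and kind are illustrative choices):

```azurepowershell
# A sketch; replace the placeholder values with your own.
New-AzStorageAccount -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -Location "<location>" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -EnableAzureActiveDirectoryDomainServicesForFile $true
```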
storage Storage Files Networking Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md
Before you can setup DNS forwarding to Azure Files, you need to have completed t
- A storage account containing an Azure file share you would like to mount. To learn how to create a storage account and an Azure file share, see [Create an Azure file share](storage-how-to-create-file-share.md). - A private endpoint for the storage account. To learn how to create a private endpoint for Azure Files, see [Create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint).-- The [latest version](/powershell/azure/install-az-ps) of the Azure PowerShell module.
+- The [latest version](/powershell/azure/install-azure-powershell) of the Azure PowerShell module.
> [!Important] > This guide assumes you're using the DNS server within Windows Server in your on-premises environment. All of the steps described in this guide are possible with any DNS server, not just the Windows DNS Server.
storage Storage Files Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-endpoints.md
We recommend reading [Azure Files networking considerations](storage-files-netwo
- This article assumes that you have already created an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - This article assumes that you have already created an Azure file share in a storage account that you would like to connect to from on-premises. To learn how to create an Azure file share, see [Create an Azure file share](storage-how-to-create-file-share.md).-- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-az-ps).
+- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-azure-powershell).
- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli). ## Endpoint configurations
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
For more information on these three choices, see [Planning for an Azure Files de
## Prerequisites - This article assumes that you've already created an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-az-ps).
+- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-azure-powershell).
- If you intend to use Azure CLI, [install the latest version](/cli/azure/install-azure-cli). ## Create a storage account
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you'd like to install and use PowerShell locally, you'll need the Azure PowerShell module Az version 7.0.0 or later. We recommend installing the latest available version. To find out which version of the Azure PowerShell module you're running, execute `Get-InstalledModule Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID, such as `Login-AzAccount -TenantId <TenantId>`.
+If you'd like to install and use PowerShell locally, you'll need the Azure PowerShell module Az version 7.0.0 or later. We recommend installing the latest available version. To find out which version of the Azure PowerShell module you're running, execute `Get-InstalledModule Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID, such as `Login-AzAccount -TenantId <TenantId>`.
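The version check and sign-in mentioned above look like this:

```azurepowershell
# Check the installed Az module version.
Get-InstalledModule Az

# Sign in; supply the tenant ID if you use multi-factor authentication.
Login-AzAccount -TenantId "<TenantId>"
```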
# [Azure CLI](#tab/azure-cli)
storage Passwordless Migrate Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/passwordless-migrate-queues.md
The Azure Identity client library, for each of the following ecosystems, provide
using Azure.Identity; ```
-1. Identify the locations in your code that create a `QueueClient` to connect to Azure Queue Storage. Update your code to match the following example:
+1. Identify the locations in your code that create a `QueueClient` object to connect to Azure Queue Storage. Update your code to match the following example:
```csharp var credential = new DefaultAzureCredential();
The Azure Identity client library, for each of the following ecosystems, provide
new DefaultAzureCredential()); ```
+## [Go](#tab/go)
+
+1. To use `DefaultAzureCredential` in a Go application, install the `azidentity` module:
+
+ ```bash
+ go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+ go get -u github.com/Azure/azure-sdk-for-go/sdk/storage/azqueue
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```go
+ import (
+     "fmt"
+
+     "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+     "github.com/Azure/azure-sdk-for-go/sdk/storage/azqueue"
+ )
+ ```
+
+1. Identify the locations in your code that create a `QueueClient` instance to connect to Azure Queue Storage. Update your code to match the following example:
+
+ ```go
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ // handle error
+ }
+
+ serviceURL := fmt.Sprintf("https://%s.queue.core.windows.net/", storageAccountName)
+ client, err := azqueue.NewQueueClient(serviceURL, cred, nil)
+ if err != nil {
+ // handle error
+ }
+ ```
+ ## [Java](#tab/java) 1. To use `DefaultAzureCredential` in a Java application, install the `azure-identity` package via one of the following approaches:
The Azure Identity client library, for each of the following ecosystems, provide
from azure.identity import DefaultAzureCredential ```
-1. Identify the locations in your code that create a `QueueClient` to connect to Azure Queue Storage. Update your code to match the following example:
+1. Identify the locations in your code that create a `QueueClient` object to connect to Azure Queue Storage. Update your code to match the following example:
```python credential = DefaultAzureCredential()
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
Azure Queue Storage is a service for storing large numbers of messages that can
> - Delete a message > - Delete a queue
-This how-to guide requires the Azure PowerShell (`Az`) module v0.7 or later. Run `Get-Module -ListAvailable Az` to find the currently installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+This how-to guide requires the Azure PowerShell (`Az`) module v0.7 or later. Run `Get-Module -ListAvailable Az` to find the currently installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
There are no PowerShell cmdlets for the data plane for queues. To perform data plane operations such as adding a message, reading a message, and deleting a message, you have to use the .NET storage client library as it is exposed in PowerShell. You create a message object and then you can use commands such as `AddMessage` to perform operations on that message. This article shows you how to do that.
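A hedged sketch of that pattern, assuming the Az.Storage module is loaded (the account, key, queue name, and the `Microsoft.Azure.Storage.Queue` namespace depend on your module version and are illustrative):

```azurepowershell
# A sketch using the .NET storage client library as exposed through Az.Storage.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
$queue = New-AzStorageQueue -Name "<queue-name>" -Context $ctx

# Create a .NET message object and add it to the queue with the AddMessage method.
$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello, World")
$queue.CloudQueue.AddMessage($message)
```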
storage Table Storage How To Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-how-to-use-powershell.md
This how-to article covers common Azure Table storage operations. You learn how
This how-to article shows you how to create a new storage account in a new resource group so you can easily remove it when you're done. You can also use an existing storage account.
-The examples require Az PowerShell modules `Az.Storage (1.1.0 or greater)` and `Az.Resources (1.2.0 or greater)`. In a PowerShell window, run `Get-Module -ListAvailable Az*` to find the version. If nothing is displayed, or you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+The examples require Az PowerShell modules `Az.Storage (1.1.0 or greater)` and `Az.Resources (1.2.0 or greater)`. In a PowerShell window, run `Get-Module -ListAvailable Az*` to find the version. If nothing is displayed, or you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
> [!IMPORTANT]
-> Using this Azure feature from PowerShell requires that you have the `Az` module installed. The current version of `AzTable` is not compatible with the older AzureRM module. Follow the [latest install instructions for installing Az module](/powershell/azure/install-az-ps) if needed.
+> Using this Azure feature from PowerShell requires that you have the `Az` module installed. The current version of `AzTable` is not compatible with the older AzureRM module. Follow the [latest install instructions for installing Az module](/powershell/azure/install-azure-powershell) if needed.
> > For module name compatibility reasons, this module is also published under the previous name `AzureRmStorageTables` in PowerShell Gallery. This document will reference the new name only.
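A sketch of the version check and module installation described above (the install scope is an illustrative choice):

```azurepowershell
# List installed Az modules and their versions.
Get-Module -ListAvailable Az*

# Install the AzTable module if it isn't present.
Install-Module AzTable -Scope CurrentUser
```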
storsimple Storsimple 8000 Automation Azurerm Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-automation-azurerm-scripts.md
This section takes an example script and details the various steps required to r
Before you begin, ensure that you have: * Azure PowerShell installed. To install Azure PowerShell modules:
- * In a Windows environment, follow the steps in [Install and configure Azure PowerShell](/powershell/azure/install-az-ps). You can install Azure PowerShell on your Windows Server host for your StorSimple if using one.
- * In a Linux or MacOS environment, follow the steps in [Install and configure Azure PowerShell on MacOS or Linux](/powershell/azure/install-az-ps).
+ * In a Windows environment, follow the steps in [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). You can install Azure PowerShell on your Windows Server host for your StorSimple if using one.
+ * In a Linux or MacOS environment, follow the steps in [Install and configure Azure PowerShell on MacOS or Linux](/powershell/azure/install-azure-powershell).
For more information about using Azure PowerShell, go to [Get started with using Azure PowerShell](/powershell/azure/get-started-azureps).
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
We highly recommend local development using [VSCode](https://code.visualstudio.c
## Writing the PowerShell script locally
-The best way to develop the script is locally. PowerShell being cross-platform, the script can be written and tested on any OS. On Windows we can use [Windows Terminal](https://www.microsoft.com/p/windows-terminal/9n0dx20hk701) with [PowerShell 7](/powershell/scripting/install/installing-powershell-on-windows), and [Az PowerShell](/powershell/azure/install-az-ps).
+The best way to develop the script is locally. Because PowerShell is cross-platform, the script can be written and tested on any OS. On Windows we can use [Windows Terminal](https://www.microsoft.com/p/windows-terminal/9n0dx20hk701) with [PowerShell 7](/powershell/scripting/install/installing-powershell-on-windows), and [Az PowerShell](/powershell/azure/install-azure-powershell).
The final script that will be used is available for [Functions](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1) (and [Azure Automation](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1)). It's different than the one explained below, having been wired to the hosting environment (Functions or Automation). We'll discuss that aspect later. First, let's step through a version of it that only **runs locally**.
stream-analytics Quick Start Build Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-start-build-application.md
Here are the typical scenarios for processing and analyzing clickstream:
## Prerequisites * Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/). * Install [Git](https://git-scm.com/downloads).
-* Azure PowerShell module. [Visit here to install or upgrade](/powershell/azure/install-Az-ps).
+* Azure PowerShell module. [Visit here to install or upgrade](/powershell/azure/install-azure-powershell).
## Filter clickstream requests
stream-analytics Resource Manager Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/resource-manager-export.md
You're ready to deploy your Azure Stream Analytics job using the Azure Resource
In a PowerShell window, run the following command. Be sure to replace the *ResourceGroupName*, *TemplateFile*, and *TemplateParameterFile* with your actual resource group name, and the complete file paths to the *JobTemplate.json* and *JobTemplate.parameters.json* files in the **Deploy Folder** of your job workspace.
-If you don't have Azure PowerShell configured, follow the steps in [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+If you don't have Azure PowerShell configured, follow the steps in [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
```azurepowershell New-AzResourceGroupDeployment -ResourceGroupName "<your resource group>" -TemplateFile "<path to JobTemplate.json>" -TemplateParameterFile "<path to JobTemplate.parameters.json>"
stream-analytics Stream Analytics Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-powershell.md
The example job reads streaming data from an IoT Hub device. The input data is g
* If you don't have an Azure subscription, create a [free account.](https://azure.microsoft.com/free/)
-* This quickstart requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version that is installed on your local machine. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps).
+* This quickstart requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version that is installed on your local machine. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
* Some IoT Hub actions are not supported by Azure PowerShell and must be completed using Azure CLI version 2.0.70 or later and the IoT extension for Azure CLI. [Install the Azure CLI](/cli/azure/install-azure-cli) and use `az extension add --name azure-iot` to install the IoT extension.
The following Azure PowerShell code block uses commands to create blob storage t
## Create a Stream Analytics job
-Create a Stream Analytics job with [New-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/new-azstreamanalyticsjob) cmdlet. This cmdlet takes the job name, resource group name, and job definition as parameters. The job name can be any friendly name that identifies your job. It can have alphanumeric characters, hyphens, and underscores only and it must be between 3 and 63 characters long. The job definition is a JSON file that contains the properties required to create a job. On your local machine, create a file named `JobDefinition.json` and add the following JSON data to it:
+Create a Stream Analytics job with the [New-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/new-azstreamanalyticsjob) cmdlet. This cmdlet takes the job name, resource group name, location, and SKU name as parameters. The job name can be any friendly name that identifies your job. It can have alphanumeric characters, hyphens, and underscores only and it must be between 3 and 63 characters long.
-```json
-{
- "location":"WestUS2",
- "properties":{
- "sku":{
- "name":"standard"
- },
- "eventsOutOfOrderPolicy":"adjust",
- "eventsOutOfOrderMaxDelayInSeconds":10,
- "compatibilityLevel": 1.1
- }
-}
-```
-
-Next, run the `New-AzStreamAnalyticsJob` cmdlet. Replace the value of `jobDefinitionFile` variable with the path where you've stored the job definition JSON file.
+Run the `New-AzStreamAnalyticsJob` cmdlet.
```powershell $jobName = "MyStreamingJob"
-$jobDefinitionFile = "C:\JobDefinition.json"
+$resourceGroup = "MyResourceGroup"
New-AzStreamAnalyticsJob ` -ResourceGroupName $resourceGroup `
- -File $jobDefinitionFile `
-Name $jobName `
- -Force
+ -Location centralus `
+ -SkuName Standard
``` ## Configure input to the job
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-powershell.md
If you choose to use Cloud Shell, see [Overview of Azure Cloud Shell](../cloud-s
### Install the Azure PowerShell module locally
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
For more information about authentication with Azure PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Some general system constraints might affect your workload:
| Maximum identifier length in characters | 128. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects).| | Maximum query duration | 30 minutes. | | Maximum size of the result set | Up to 400 GB shared between concurrent queries. |
-| Maximum concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data, so in thatcase consider decreasing concurrency and execute queries over a longer period of time if possible.|
+| Maximum concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data, so in that case consider decreasing concurrency and execute queries over a longer period of time if possible.|
| Maximum size of External Table name | 100 characters. | ### Can't create a database in serverless SQL pool
traffic-manager Quickstart Create Traffic Manager Profile Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a Resource Group Create a resource group using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
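A sketch of that resource group creation (the name and location are placeholders):

```azurepowershell
New-AzResourceGroup -Name "<resource-group>" -Location "EastUS"
```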
traffic-manager Traffic Manager Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-diagnostic-logs.md
Azure Traffic Manager resource logs can provide insight into the behavior of the
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account.
-If you run PowerShell from your computer, you need the Azure PowerShell module, 1.0.0 or later. You can run `Get-Module -ListAvailable Az` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to sign in to Azure.
+If you run PowerShell from your computer, you need the Azure PowerShell module, 1.0.0 or later. You can run `Get-Module -ListAvailable Az` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Login-AzAccount` to sign in to Azure.
1. **Retrieve the Traffic Manager profile:**
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
There are two types of routing profiles that support subnet overrides:
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a Traffic Manager subnet override
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
This article shows you how to generate a registration key using the Azure portal
## Prerequisites
-Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required. In addition, you'll need:
+Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required, such as operating systems, virtual networks, and identity providers. In addition, you'll need:
- An existing host pool. -- If you're joining session hosts to Azure Active Directory (Azure AD), you need an account that can join computers to your tenant. To learn more about joining session hosts to Azure AD, see [Azure AD-joined session hosts](azure-ad-joined-session-hosts.md).--- If you're joining session hosts to Active Directory domain using Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS), you need a domain account that can join computers to your domain. For Azure AD DS, you would need to be a member of the [*AAD DC Administrators* group](../active-directory-domain-services/tutorial-create-instance-advanced.md#configure-an-administrative-group).
+- If you have existing session hosts in the host pool, make a note of the virtual machine size, the image, and name prefix that was used. All session hosts in a host pool should be the same configuration, including the same identity provider. For example, a host pool shouldn't contain some session hosts joined to Azure AD and some session hosts joined to an Active Directory domain.
-- A virtual network and subnet in the same Azure region you want to create session hosts. You don't need a public IP address or open inbound ports for your session hosts.
+- The Azure account you use must have the following built-in role-based access control (RBAC) roles as a minimum on the resource group:
-- If you have existing session hosts in the host pool, make a note of the virtual machine size, the image, and name prefix that was used. All session hosts in a host pool should be the same configuration, including the same identity provider. For example, a host pool shouldn't contain some session hosts joined to Azure AD and some session hosts joined to an Active Directory domain.
+ | Action | RBAC role(s) |
+ |--|--|
+ | Generate a host pool registration key | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) |
+ | Create and add session hosts using the Azure portal | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor)<br />[Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
-- If you're creating virtual machines outside of the Azure Virtual Desktop service, make sure you're using a [supported operating system](prerequisites.md#operating-systems-and-licenses) (OS). Remember to use a multi-session OS for a pooled host pool.
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role.
-- A minimum of *Contributor* built-in [role-based access control](../role-based-access-control/built-in-roles.md) (RBAC) role on the resource group.
+- Don't disable [Windows Remote Management](/windows/win32/winrm/about-windows-remote-management) (WinRM) when creating and adding session hosts using the Azure portal, as it's required by [PowerShell DSC](/powershell/dsc/overview).
- If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
Here's how to create session hosts and register them to a host pool using the Az
| Network security group | Select whether you want to use a network security group (NSG).<br /><br />- **Basic** will create a new NSG for the VM NIC.<br /><br />- **Advanced** enables you to select an existing NSG. |
| Public inbound ports | We recommend you select **No**. |
| **Domain to join** |  |
- | Select which directory you would like to join | Select from **Azure Active Directory** or **Active Directory** and complete the relevant parameters for the option you select. |
+ | Select which directory you would like to join | Select from **Azure Active Directory** or **Active Directory** and complete the relevant parameters for the option you select.<br /><br />To learn more about joining session hosts to Azure AD, see [Azure AD-joined session hosts](azure-ad-joined-session-hosts.md). |
| **Virtual Machine Administrator account** |  |
| Username | Enter a name to use as the local administrator account for the new session host VMs. |
| Password | Enter a password for the local administrator account. |
Select the relevant tab for your scenario and follow the steps.
1. Follow the prompts and complete the installation.
-1. The virtual machines should now appear as a session host in the host pool. Finally, restart the virtual machines.
+1. After a short time, the virtual machines should be listed as session hosts in the host pool. The status of the session hosts may initially show as **Unavailable**. If there's a newer agent version available, the session hosts upgrade automatically.
+
+1. Once the status of the session hosts is **Available**, restart the virtual machines.
# [Command line](#tab/cmd)
Using `msiexec` enables you to install the agent and boot loader from the comman
msiexec /i Microsoft.RDInfra.RDAgentBootLoader.Installer-x64.msi /quiet
```
-1. The virtual machines should now appear as a session host in the host pool. Finally, restart the virtual machines.
+1. After a short time, the virtual machines should be listed as session hosts in the host pool. The status of the session hosts may initially show as **Unavailable**. If there's a newer agent version available, the session hosts upgrade automatically.
+
+1. Once the status of the session hosts is **Available**, restart the virtual machines.
Now that you've expanded your existing host pool, you can sign in to an Azure Vi
- [Connect with the web client](./users/connect-web.md)
- [Connect with the Android client](./users/connect-android-chrome-os.md)
- [Connect with the macOS client](./users/connect-macos.md)
-- [Connect with the iOS client](./users/connect-ios-ipados.md)
+- [Connect with the iOS client](./users/connect-ios-ipados.md)
virtual-desktop Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/cli-powershell.md
To learn how to install Azure CLI and Azure PowerShell across all supported plat
- Azure CLI: [How to install the Azure CLI](/cli/azure/install-azure-cli)
-- Azure PowerShell: [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps)
+- Azure PowerShell: [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell)
## Example commands
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
This list refers to the list of regions where the *metadata* for the host pool w
## Prerequisites
-Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required. In addition, you'll need:
+Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required, such as operating systems, virtual networks, and identity providers. Select the relevant tab for your scenario.
-- An Azure account with an active subscription.
+# [Portal](#tab/portal)
+
+In addition, you'll need:
+
+- The Azure account you use must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
+
+ | Resource type | RBAC role(s) |
+ |--|--|
+ | Host pool | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor)<br />[Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
+ | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
+ | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
+
+- Don't disable [Windows Remote Management](/windows/win32/winrm/about-windows-remote-management) (WinRM) when creating session hosts using the Azure portal, as it's required by [PowerShell DSC](/powershell/dsc/overview).
+
+# [Azure CLI](#tab/cli)
+
+In addition, you'll need:
+
+- The account must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
+
+ | Resource type | RBAC role |
+ |--|--|
+ | Host pool | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) |
+ | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
+ | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
-- The account must have the following built-in role-based access control (RBAC) roles on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
+
+- If you want to use Azure CLI locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+
+> [!IMPORTANT]
+> If you want to create Azure Active Directory-joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service.
+
+# [Azure PowerShell](#tab/powershell)
+
+In addition, you'll need:
+
+- The account must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
| Resource type | RBAC role |
|--|--|
Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a gen
Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
-- If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+
+> [!IMPORTANT]
+> If you want to create Azure Active Directory-joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service.
+
+
## Create a host pool
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
To access virtual desktops and remote apps from your session hosts, your users n
### Session hosts
-You need to join session hosts that provide virtual desktops and remote apps to an AD DS domain, Azure AD DS domain, or the same Azure AD tenant as your users.
+You need to join session hosts that provide virtual desktops and remote apps to the same Azure AD tenant as your users, or an Active Directory domain (either AD DS or Azure AD DS).
-- If you're joining session hosts to an AD DS domain and you want to manage them using [Intune](/mem/intune/fundamentals/what-is-intune), you'll need to configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to enable [hybrid Azure AD join](../active-directory/devices/hybrid-azuread-join-plan.md).
-- If you're joining session hosts to an Azure AD DS domain, you can't manage them using [Intune](/mem/intune/fundamentals/what-is-intune).
+To join session hosts to Azure AD or an Active Directory domain, you need the following permissions:
+
+- For Azure Active Directory (Azure AD), you need an account that can join computers to your tenant. For more information, see [Manage device identities](../active-directory/devices/device-management-azure-portal.md#configure-device-settings). To learn more about joining session hosts to Azure AD, see [Azure AD-joined session hosts](azure-ad-joined-session-hosts.md).
+
+- For an Active Directory domain, you need a domain account that can join computers to your domain. For Azure AD DS, you would need to be a member of the [*AAD DC Administrators* group](../active-directory-domain-services/tutorial-create-instance-advanced.md#configure-an-administrative-group).
### Users
-Your users need accounts that are in Azure AD. If you're also using AD DS or Azure AD DS in your deployment of Azure Virtual Desktop, these accounts will need to be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user account is synchronized. You'll need to keep the following things in mind based on which account you use:
+Your users need accounts that are in Azure AD. If you're also using AD DS or Azure AD DS in your deployment of Azure Virtual Desktop, these accounts will need to be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user accounts are synchronized. You'll need to keep the following things in mind based on which identity provider you use:
- If you're using Azure AD with AD DS, you'll need to configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to synchronize user identity data between AD DS and Azure AD.
- If you're using Azure AD with Azure AD DS, user accounts are synchronized one way from Azure AD to Azure AD DS. This synchronization process is automatic.
You'll need to enter the following identity parameters when deploying session ho
> [!IMPORTANT]
> The account you use for joining a domain can't have multi-factor authentication (MFA) enabled.
->
-> When joining an Azure AD DS domain, the account you use must be part of the *AAD DC administrators* group.
## Operating systems and licenses
Consider the following when managing session hosts:
- Don't enable any policies or configurations that disable *Windows Installer*. If you disable Windows Installer, the service won't be able to install agent updates on your session hosts, and your session hosts won't function properly.
+- If you're joining session hosts to an AD DS domain and you want to manage them using [Intune](/mem/intune/fundamentals/what-is-intune), you'll need to configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to enable [hybrid Azure AD join](../active-directory/devices/hybrid-azuread-join-plan.md).
+
+- If you're joining session hosts to an Azure AD DS domain, you can't manage them using [Intune](/mem/intune/fundamentals/what-is-intune).
+
- If you're using Azure AD-join with Windows Server for your session hosts, you can't enroll them in Intune as Windows Server is not supported with Intune. You'll need to use hybrid Azure AD-join and Group Policy from an Active Directory domain, or local Group Policy on each session host.

## Remote Desktop clients
virtual-desktop Tutorial Create Connect Personal Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-create-connect-personal-desktop.md
To create a personal host pool, workspace, application group, and session host V
1. Select **Create**. A host pool, workspace, application group, and session host will be created. Once your deployment is complete, select **Go to resource**. This goes to the host pool overview.
+1. Finally, from the host pool overview select **Session hosts** and verify the status of the session hosts is **Available**.
+
## Assign users to the application group

Once your host pool, workspace, application group, and session host VM(s) have been deployed, you need to assign users to the application group that was automatically created. After users are assigned to the application group, they'll automatically be assigned to an available session host VM because *Assignment type* was set to **Automatic** when the host pool was created.
virtual-desktop Create Service Principal Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-service-principal-role-powershell.md
In this tutorial, learn how to:
Before you can create service principals and role assignments, you need to do the following:
-1. Follow the steps to [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Follow the steps to [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
2. [Download and import the Azure Virtual Desktop PowerShell module](/powershell/windows-virtual-desktop/overview/).
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
You need to create an Azure Active Directory App Registration and a Log Analytic
You also need to install these two PowerShell modules before you get started:
-- [Azure PowerShell module](/powershell/azure/install-az-ps)
+- [Azure PowerShell module](/powershell/azure/install-azure-powershell)
- [Azure AD module](/powershell/azure/active-directory/install-adv2)

Make sure you have your Subscription ID ready for when you sign in.
virtual-machines Capture Image Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capture-image-resource.md
This command returns JSON that describes the VM image. Save this output for late
Creating an image directly from the VM ensures that the image includes all of the disks associated with the VM, including the OS disk and any data disks. This example shows how to create a managed image from a VM that uses managed disks.
-Before you begin, make sure that you have the latest version of the Azure PowerShell module. To find the version, run `Get-Module -ListAvailable Az` in PowerShell. If you need to upgrade, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-az-ps). If you are running PowerShell locally, run `Connect-AzAccount` to create a connection with Azure.
+Before you begin, make sure that you have the latest version of the Azure PowerShell module. To find the version, run `Get-Module -ListAvailable Az` in PowerShell. If you need to upgrade, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, run `Connect-AzAccount` to create a connection with Azure.
> [!NOTE]
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
To use the Azure portal, sign in to the portal and follow these steps.
# [PowerShell](#tab/azure-powershell)
-To use Azure PowerShell, install the latest Az module or the Az.Storage module. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-Az-ps).
+To use Azure PowerShell, install the latest Az module or the Az.Storage module. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
[!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Premium SSD v2 support a 4k physical sector size by default, but can be configur
## Prerequisites

-- Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-az-ps).
+- Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-azure-powershell).
## Determine region availability programmatically
virtual-machines Disks Enable Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-performance.md
Once enabled, the IOPS and throughput limits for an eligible disk increase to th
## Prerequisites
-Either use the Azure Cloud Shell to run your commands or install a version of the [Azure PowerShell module](/powershell/azure/install-az-ps) 9.5 or newer, or a version of the [Azure CLI](/cli/azure/install-azure-cli) that is 2.44.0 or newer.
+Either use the Azure Cloud Shell to run your commands or install a version of the [Azure PowerShell module](/powershell/azure/install-azure-powershell) 9.5 or newer, or a version of the [Azure CLI](/cli/azure/install-azure-cli) that is 2.44.0 or newer.
## Enable performance plus
virtual-machines Disks Performance Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers.md
Install the latest [Azure CLI](/cli/azure/install-az-cli2) and sign in to an Azure account with [az login](/cli/azure/reference-index).

# [PowerShell](#tab/azure-powershell)
-Install the latest [Azure PowerShell version](/powershell/azure/install-az-ps), and sign in to an Azure account in with `Connect-AzAccount`.
+Install the latest [Azure PowerShell version](/powershell/azure/install-azure-powershell), and sign in to an Azure account with `Connect-AzAccount`.
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Previously updated : 02/21/2023 Last updated : 05/02/2023
## What is Azure Hybrid Benefit?
-Azure Hybrid Benefit (AHB) for Linux virtual machines enables you to take advantage of discounted reserved instance rates for your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) VMs. Enabling AHB saves money by applying the licensing costs for RHEL and SLES on top of the discounted rate of reserved instances. This article explains the two Azure Hybrid Benefit licensing models and the process of converting to and between them.
+Azure Hybrid Benefit (AHB) for Linux lets you easily switch the software subscription model for your VM. You can remove licensing costs by bringing your Red Hat and SUSE Linux subscriptions directly to Azure, or use a model where you pay for subscriptions as you use them. This article defines the 'BYOS' and 'PAYG' licensing models, compares the benefits of each, and shows how you can use Azure Hybrid Benefit to switch between the two at any point. This process applies to Virtual Machine Scale Sets, Spot Virtual Machines, and custom images, and allows for seamless bi-directional conversion between the two models.
-Customers may see savings estimated to up to 76% with Azure Hybrid Benefit for Linux.
+Customers may see savings estimated at up to 76% with Azure Hybrid Benefit for Linux. Savings estimates are based on one standard D2s v3 Azure VM with a RHEL or SLES subscription in the East US region running at a pay-as-you-go rate versus a reduced rate for a 3-year reserved instance plan, based on Azure pricing as of October 2022. Prices are subject to change. Actual savings may vary based on location, instance type, or usage.
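The quoted savings figure is a straightforward percentage comparison of two hourly rates. A minimal sketch, with hypothetical placeholder rates (not actual Azure prices; consult the Azure pricing calculator for real figures):

```python
def percent_savings(payg_hourly: float, reserved_hourly: float) -> float:
    """Percentage saved by the reduced rate relative to the pay-as-you-go rate."""
    return (payg_hourly - reserved_hourly) / payg_hourly * 100

# Hypothetical hourly rates chosen purely for illustration.
payg_rate, reserved_rate = 0.25, 0.06
print(f"{percent_savings(payg_rate, reserved_rate):.0f}% savings")  # prints "76% savings"
```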
## Defining Pay-as-you-go (PAYG) and Bring-your-own-subscription (BYOS)

In Azure, there are two main licensing pricing options: 'pay-as-you-go' (PAYG) and 'bring-your-own-subscription' (BYOS). 'PAYG' is a pricing option where you pay for the resources you use on an hourly or monthly basis. You only pay for what you use and can scale up or down as needed. On the other hand, 'BYOS' is a licensing option where you can use your existing licenses for certain software, in this case RHEL and SLES, on Azure virtual machines. You can use your existing licenses and don't have to purchase new ones for use in Azure.
-Virtual machines deployed from pay-as-you-go images on Azure without Azure Hybrid Benefit incur *both* an infrastructure fee and a software fee. You can either convert these VMs to standard BYOS, Azure Hybrid Benefit BYOS, or Azure Hybrid Benefit PAYG.
+> [!NOTE]
+> Virtual machines deployed from PAYG images or VMs converted from BYOS models incur *both* an infrastructure fee and a software fee. If you have your own license, use Azure Hybrid Benefit to convert from a PAYG to a BYOS model.
-After you apply Azure Hybrid Benefit to your RHEL or SLES virtual machine, you're no longer charged a software fee. Your virtual machine is charged a BYOS fee instead. You can use Azure Hybrid Benefit to switch back to pay-as-you-go billing at any time.
+You can use Azure Hybrid Benefit to switch back to pay-as-you-go billing at any time.
-## Which Linux virtual machines qualify for Azure Hybrid Benefit?
-Azure Hybrid Benefit BYOS and PAYG capability are available to all RHEL and SLES virtual machines. The VM can be created using a custom image or one taken from the Azure Marketplace
+## Which Linux virtual machines qualify for Azure Hybrid Benefit?
Azure dedicated host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines.
-## Getting started with PAYG Azure Hybrid Benefit
+## Getting started with Azure Hybrid Benefit
-# [Red Hat (RHEL) PAYG Azure Hybrid Benefit](#tab/rhelpayg)
+# [Red Hat (RHEL) PAYG to BYOS conversion](#tab/rhelpayg)
-Azure Hybrid Benefit for pay-as-you-go virtual machines for RHEL is available to Red Hat customers who meet the following criteria:
+Azure Hybrid Benefit for converting PAYG virtual machines to BYOS for RHEL is available to Red Hat customers who meet the following criteria:
- Have active or unused RHEL subscriptions that are eligible for use in Azure
- Have correctly enabled one or more of their subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
-To start using Azure Hybrid Benefit for Red Hat:
+To bring your own Red Hat subscription:
1. Enable one or more of your eligible RHEL subscriptions for use in Azure using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud). The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process then have access to Azure Hybrid Benefit.
To start using Azure Hybrid Benefit for Red Hat:
1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) to configure update sources for your RHEL virtual machines and for RHEL subscription compliance guidelines.
-# [SUSE (SLES) PAYG Azure Hybrid Benefit](#tab/slespayg)
+# [SUSE (SLES) PAYG to BYOS conversion](#tab/slespayg)
Azure Hybrid Benefit for pay-as-you-go virtual machines for SUSE is available to customers who have:
To start using Azure Hybrid Benefit for SUSE:
-### Enable PAYG Azure Hybrid Benefit in the Azure portal
+### Enable AHB in the Azure portal
In the Azure portal, you can enable Azure Hybrid Benefit on existing virtual machines or on new virtual machines at the time that you create them.
-#### Enable PAYG Azure Hybrid Benefit on an existing virtual machine in the Azure portal
+#### Convert to BYOS on an existing virtual machine in the Azure portal
To enable Azure Hybrid Benefit on an existing virtual machine:
To enable Azure Hybrid Benefit on an existing virtual machine:
![Screenshot of the Azure portal that shows the Licensing section of the configuration page for Azure Hybrid Benefit.](./media/azure-hybrid-benefit/create-configuration-blade.png)
-#### Enable PAYG Azure Hybrid Benefit when creating a new virtual machine in the Azure portal
+#### Convert to BYOS in the Azure portal
To enable Azure Hybrid Benefit when you create a virtual machine, use the following procedure. (The SUSE workflow is the same as the RHEL example shown here.)
To enable Azure Hybrid Benefit when you create a virtual machine, use the follow
![Screenshot of the Azure Hybrid Benefit configuration pane after you create a virtual machine.](./media/azure-hybrid-benefit/create-configuration-blade.png)
-### Enable and disable PAYG Azure Hybrid Benefit using the Azure CLI
+### Convert to BYOS using the Azure CLI
You can use the `az vm update` command to update existing virtual machines.

* For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`.
-* For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
-
-#### Enable PAYG Azure Hybrid Benefit using the Azure CLI
- ```azurecli
-# This will enable Azure Hybrid Benefit on a RHEL virtual machine
+# This will enable BYOS on a RHEL virtual machine using Azure Hybrid Benefit
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
+```
-# This will enable Azure Hybrid Benefit on a SLES virtual machine
+* For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
+```azurecli
+# This will enable BYOS on a SLES virtual machine
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
```
-#### Disable PAYG Azure Hybrid Benefit using the Azure CLI
+#### Convert to PAYG using the Azure CLI
-To disable Azure Hybrid Benefit, use a `--license-type` value of `None`:
+To return a VM to a PAYG model, use a `--license-type` value of `None`:
```azurecli
-# This will disable Azure Hybrid Benefit on a virtual machine
+# This will enable PAYG on a virtual machine using Azure Hybrid Benefit
az vm update -g myResourceGroup -n myVmName --license-type None
```
-#### Enable PAYG Azure Hybrid Benefit on a large number of virtual machines using the Azure CLI
+#### Convert multiple VM license models simultaneously using the Azure CLI
-To enable Azure Hybrid Benefit on a large number of virtual machines, you can use the `--ids` parameter in the Azure CLI:
+To switch the licensing model on a large number of virtual machines, you can use the `--ids` parameter in the Azure CLI:
```azurecli
-# This will enable Azure Hybrid Benefit on a RHEL virtual machine. In this example, ids.txt is an
+# This will enable BYOS on a RHEL virtual machine. In this example, ids.txt is an
# existing text file that contains a delimited list of resource IDs corresponding
# to the virtual machines using Azure Hybrid Benefit. When --ids is supplied,
# omit the -g and -n arguments, which conflict with it.
az vm update --license-type RHEL_BYOS --ids $(cat ids.txt)
$(az vm list -g MyResourceGroup --query "[].id" -o tsv)
az vm list -o json | jq '.[] | {"Virtual MachineName": .name, "ResourceID": .id}'
```
-### Apply PAYG when creating a new VM
+### Use AHB when creating a new VM
In addition to applying Azure Hybrid Benefit to existing pay-as-you-go virtual machines, you can invoke it at the time of virtual machine creation. Benefits of doing so are threefold:

-- You can provision both pay-as-you-go and BYOS virtual machines by using the same image and process.
+- You can provision both PAYG and BYOS virtual machines by using the same image and process.
- It enables future licensing mode changes. These changes aren't available with a BYOS-only image or if you bring your own virtual machine.
- The virtual machine is connected to Red Hat Update Infrastructure (RHUI) by default, to help keep it up to date and secure. You can change the update mechanism after deployment at any time.
-### Check the PAYG Azure Hybrid Benefit status of a virtual machine
+### Check the licensing model of an AHB enabled VM
You can view the Azure Hybrid Benefit status of a virtual machine by using the Azure CLI or by using Azure Instance Metadata Service.
-### Check PAYG Azure Hybrid Benefit status using the Azure CLI
+### Check licensing model using the Azure CLI
You can use the `az vm get-instance-view` command to check the status. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is `RHEL_BYOS` or `SLES_BYOS`, your virtual machine has Azure Hybrid Benefit enabled.
```azurecli
az vm get-instance-view -g MyResourceGroup -n MyVm
```
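As a local illustration of what to look for in the response, this sketch extracts `licenseType` from a saved instance view. The sample JSON is hypothetical; a real response comes from the `az vm get-instance-view` command:

```shell
# Hypothetical fragment of an instance-view response (assumption: field shape as documented)
cat > instance-view.json <<'EOF'
{ "name": "MyVm", "licenseType": "RHEL_BYOS" }
EOF
# Extract licenseType; an empty result means no Azure Hybrid Benefit license type is set
lt=$(sed -n 's/.*"licenseType": *"\([^"]*\)".*/\1/p' instance-view.json)
echo "${lt:-None}"   # prints RHEL_BYOS
```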
-### Check PAYG status using Azure Instance Metadata Service
+### Check the licensing model of an AHB-enabled VM using Azure Instance Metadata Service
From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's `licenseType` value. A `licenseType` value of `RHEL_BYOS` or `SLES_BYOS` indicates that your virtual machine has Azure Hybrid Benefit enabled. [Learn more about attested metadata](./instance-metadata-service.md#attested-data).
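The in-VM check can be sketched as follows. The sample string stands in for the decoded attested-data payload; the `curl` call in the comment is the Azure Instance Metadata Service attested-data endpoint (treat the exact `api-version` as an assumption):

```shell
# Inside the VM, the raw attested document comes from IMDS, for example:
#   curl -s -H Metadata:true "http://169.254.169.254/metadata/attested/document?api-version=2020-09-01"
# The sample below is a hypothetical stand-in for the decoded payload.
sample='{"licenseType":"SLES_BYOS","vmId":"00000000-0000-0000-0000-000000000000"}'
lt=$(printf '%s' "$sample" | sed -n 's/.*"licenseType":"\([^"]*\)".*/\1/p')
case "$lt" in
  RHEL_BYOS|SLES_BYOS) echo "Azure Hybrid Benefit: enabled ($lt)" ;;
  *)                   echo "Azure Hybrid Benefit: not enabled" ;;
esac
```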
-### PAYG for reserved instance VMs
+### AHB for reserved instance VMs
[Azure reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md) (Azure Reserved Virtual Machine Instances) help you save money by committing to one-year or three-year plans for multiple products. Azure Hybrid Benefit for pay-as-you-go virtual machines is available for reserved instances.
If you've purchased compute costs at a discounted rate by using reserved instanc
>[!NOTE]
>If you've already purchased reservations for RHEL or SUSE pay-as-you-go software on Azure Marketplace, please wait for the reservation tenure to finish before using Azure Hybrid Benefit for pay-as-you-go virtual machines.
-## Getting started with BYOS Azure Hybrid Benefit
+## Red Hat (RHEL) VMs with Azure Hybrid Benefit
-# [Red Hat (RHEL) BYOS Azure Hybrid Benefit](#tab/rhelbyos)
+# [Red Hat (RHEL) BYOS to PAYG](#tab/rhelbyos)
To start using Azure Hybrid Benefit for Red Hat:
> [!Note]
> In the unlikely event that the extension can't install repositories or there are any other issues, switch the license type back to empty and reach out to Microsoft support. This ensures that you don't get billed for software updates.
-# [SUSE (SLES) BYOS Azure Hybrid Benefit](#tab/slesbyos)
+# [SUSE (SLES) BYOS to PAYG](#tab/slesbyos)
To start using Azure Hybrid Benefit for SLES virtual machines:
1. You should now be connected to the SUSE public cloud update infrastructure on Azure. The relevant repositories are installed on your machine.
-1. If you want to switch back to the bring-your-own-subscription model, just change the license type to `None` and run the extension. This action removes all repositories from your virtual machine and stop the billing.
+1. If you want to switch back to the bring-your-own-subscription model, just change the license type to `None` and run the extension. This action removes all repositories from your virtual machine and stops the billing.
After you successfully install the `AHBForRHEL` extension, you can use the `az vm update` command to update the existing license type on your running virtual machines. For RHEL virtual machines, run the command and set the `--license-type` parameter to one of the following license types: `RHEL_BASE`, `RHEL_EUS`, `RHEL_SAPHA`, `RHEL_SAPAPPS`, `RHEL_BASESAPAPPS`, or `RHEL_BASESAPHA`.
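A minimal dry-run sketch of the update call, with placeholder names; nothing is sent to Azure because the command is only printed for review:

```shell
# Placeholder names; substitute your own resource group, VM, and license type
rg="myResourceGroup"
vm="myVmName"
license="RHEL_BASE"   # or RHEL_EUS, RHEL_SAPHA, RHEL_SAPAPPS, RHEL_BASESAPAPPS, RHEL_BASESAPHA
cmd="az vm update -g $rg -n $vm --license-type $license"
echo "$cmd"           # prints: az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASE
```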
-### Enable BYOS Azure Hybrid Benefit using the Azure CLI
+### Converting a VM license model using the Azure CLI
# [RHEL](#tab/rhelEnablebyos)
-## Enable and disable BYOS Azure Hybrid Benefit for SLES
+## SUSE (SLES) VMs with Azure Hybrid Benefit
After you successfully install the `AHBForSLES` extension, you can use the `az vm update` command to update the existing license type on your running virtual machines. For SLES virtual machines, run the command and set the `--license-type` parameter to one of the following license types: `SLES_STANDARD`, `SLES_SAP`, or `SLES_HPC`.
-### Disable BYOS Azure Hybrid Benefit using the Azure CLI
+### Enable PAYG using the Azure CLI
1. Ensure that the Azure Hybrid Benefit extension is installed on your virtual machine.
1. To disable Azure Hybrid Benefit, use the following command:

   ```azurecli
- # This will disable Azure Hybrid Benefit on a virtual machine
+ # This will enable PAYG on a virtual machine
   az vm update -g myResourceGroup -n myVmName --license-type None
   ```
-### Check the BYOS Azure Hybrid Benefit status of a virtual machine
+### Check licensing model of a virtual machine
1. Ensure that the Azure Hybrid Benefit extension is installed.
1. In the Azure CLI or Azure Instance Metadata Service, run the following command:
If you use Azure Hybrid Benefit BYOS to PAYG capability for SLES and want more i
- **Q: Can I use a license type of RHEL_BYOS with a SLES image, or vice versa?**
- - A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your virtual machine will not update any billing metadata. But if you accidentally enter the wrong license type, updating your virtual machine again to the correct license type will still enable Azure Hybrid Benefit.
+ - A: No, you can't. Trying to enter a license type that incorrectly matches the distribution running on your virtual machine won't update any billing metadata. But if you accidentally enter the wrong license type, updating your virtual machine again to the correct license type still enables Azure Hybrid Benefit.
- **Q: I've registered with Red Hat Cloud Access but still can't enable Azure Hybrid Benefit on my RHEL virtual machines. What should I do?**
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
For more information, see [Get started with Azure CLI 2.0](/cli/azure/get-starte
# [Azure PowerShell](#tab/powershellazure)
-The [Azure PowerShell az module](/powershell/azure/new-azureps-module-az) provides a set of cmdlets that uses the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources. You can use it in your browser with [Azure Cloud Shell](../../cloud-shell/overview.md), or you can install it on your local machine using the instructions in [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+The [Azure PowerShell az module](/powershell/azure/new-azureps-module-az) provides a set of cmdlets that uses the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources. You can use it in your browser with [Azure Cloud Shell](../../cloud-shell/overview.md), or you can install it on your local machine using the instructions in [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
If you already have it installed locally, make sure you use the latest version of Azure PowerShell SDK version to configure Azure Disk Encryption. Download the latest version of [Azure PowerShell release](https://github.com/Azure/azure-powershell/releases).
virtual-machines Change Availability Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/change-availability-set.md
The following steps describe how to change the availability set of a VM using Azure PowerShell. A VM can only be added to an availability set when it is created. To change the availability set, you need to delete and then recreate the virtual machine.
-This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://shell.azure.com/powershell) and the [Az PowerShell module](/powershell/azure/install-az-ps) version 1.2.0.
+This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://shell.azure.com/powershell) and the [Az PowerShell module](/powershell/azure/install-azure-powershell) version 1.2.0.
> [!WARNING]
> This is just an example and in some cases it will need to be updated for your specific deployment.
virtual-machines Disks Enable Double Encryption At Rest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-double-encryption-at-rest-powershell.md
Double encryption at rest isn't currently supported with either Ultra Disks or P
## Prerequisites
-Install the latest [Azure PowerShell version](/powershell/azure/install-az-ps), and sign in to an Azure account using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
+Install the latest [Azure PowerShell version](/powershell/azure/install-azure-powershell), and sign in to an Azure account using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
## Getting started
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
For guidance on how to copy a managed disk from one region to another, see [Copy
### Prerequisites

-- [Install the Azure PowerShell module](/powershell/azure/install-Az-ps).
+- [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
- A VHD [has been prepared for Azure](prepare-for-upload-vhd-image.md), stored locally.
  - On Windows: You don't need to convert your VHD to VHDx, convert it to a fixed size, or resize it to include the 512-byte offset. `Add-AzVHD` performs these functions for you.
  - [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) must be enabled for `Add-AzVHD` to perform these functions.
Add-AzVhd -LocalFilePath $path -ResourceGroupName $resourceGroup -Location $loca
### Prerequisites

- Download the latest [version of AzCopy v10](../../storage/common/storage-use-azcopy-v10.md#download-and-install-azcopy).
-- [Install the Azure PowerShell module](/powershell/azure/install-Az-ps).
+- [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
- A fixed size VHD that [has been prepared for Azure](prepare-for-upload-vhd-image.md), stored locally.

To upload your VHD to Azure, you'll need to create an empty managed disk that is configured for this upload process. Before you create one, there's some additional information you should know about these disks.
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Builder PowerShell module.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. If you choose to use PowerShell locally, this article requires that you install the Azure PowerShell
-module and connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+module and connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
Some of the steps require cmdlets from the [Az.ImageBuilder](https://www.powershellgallery.com/packages/Az.ImageBuilder) module. Install separately by using the following command.
virtual-machines Ps Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/ps-template.md
New-AzResourceGroupDeployment `
```
-If you choose to install and use the PowerShell locally instead of from the Azure Cloud shell, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally instead of from the Azure Cloud Shell, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
In the previous example, you specified a template stored in GitHub. You can also download or create a template and specify the local path with the `-TemplateFile` parameter.
virtual-machines Configure Azure Oci Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-azure-oci-networking.md
Title: Connect Azure ExpressRoute with Oracle Cloud Infrastructure | Microsoft Docs
-description: Connect Azure ExpressRoute with Oracle Cloud Infrastructure (OCI) FastConnect to enable cross-cloud Oracle application solutions
+description: Connect Azure ExpressRoute with Oracle Cloud Infrastructure (OCI) FastConnect to enable cross-cloud Oracle application solutions.
Previously updated : 03/16/2020
Last updated : 04/16/2023

# Set up a direct interconnection between Azure and Oracle Cloud Infrastructure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
-To create an [integrated multi-cloud experience](oracle-oci-overview.md), Microsoft and Oracle offer direct interconnection between Azure and Oracle Cloud Infrastructure (OCI) through [ExpressRoute](../../../expressroute/expressroute-introduction.md) and [FastConnect](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectoverview.htm). Through the ExpressRoute and FastConnect interconnection, customers can experience low latency, high throughput, private direct connectivity between the two clouds.
+To create an [integrated multicloud experience](oracle-oci-overview.md), Microsoft and Oracle offer direct interconnection between Azure and Oracle Cloud Infrastructure (OCI) through [ExpressRoute](../../../expressroute/expressroute-introduction.md) and [FastConnect](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectoverview.htm). Through the ExpressRoute and FastConnect interconnection, you can experience low-latency, high-throughput, private direct connectivity between the two clouds.
> [!IMPORTANT]
> Oracle has certified these applications to run in Azure when using the Azure / Oracle Cloud interconnect solution:
-> * E-Business Suite
-> * JD Edwards EnterpriseOne
-> * PeopleSoft
-> * Oracle Retail applications
-> * Oracle Hyperion Financial Management
+>
+> - E-Business Suite
+> - JD Edwards EnterpriseOne
+> - PeopleSoft
+> - Oracle Retail applications
+> - Oracle Hyperion Financial Management
The following image shows a high-level overview of the interconnection:
-![Cross-cloud network connection](https://user-images.githubusercontent.com/37556655/115093592-bced0180-9ecf-11eb-976d-9d4c7a1be2a8.png)
> [!NOTE]
-> The ExpressRoute connection seen in the diagram is a regular [ExpressRoute circuit](../../../expressroute/expressroute-introduction.md) and supports all fuctionalities such as Global Reach.
->
+> The ExpressRoute connection seen in the diagram is a regular [ExpressRoute circuit](../../../expressroute/expressroute-introduction.md) and supports all functionality, such as Global Reach.
+>
## Prerequisites
-* To establish connectivity between Azure and OCI, you must have an active Azure subscription and an active OCI tenancy.
+- To establish connectivity between Azure and OCI, you must have an active Azure subscription and an active OCI tenancy.
-* Connectivity is only possible where an Azure ExpressRoute peering location is in proximity to or in the same peering location as the OCI FastConnect. See [Region Availability](oracle-oci-overview.md#region-availability).
+- Connectivity is only possible where an Azure ExpressRoute peering location is in proximity to or in the same peering location as the OCI FastConnect. See [Region Availability](oracle-oci-overview.md#region-availability).
## Configure direct connectivity between ExpressRoute and FastConnect
-1. Create a standard ExpressRoute circuit on your Azure subscription under a resource group.
- * While creating the ExpressRoute, choose **Oracle Cloud FastConnect** as the service provider. To create an ExpressRoute circuit, see the [step-by-step guide](../../../expressroute/expressroute-howto-circuit-portal-resource-manager.md).
- * An Azure ExpressRoute circuit provides granular bandwidth options, whereas FastConnect supports 1, 2, 5, or 10 Gbps. Therefore, it is recommended to choose one of these matching bandwidth options under ExpressRoute.
+Create a standard ExpressRoute circuit on your Azure subscription under a resource group. For more information, see [Create and modify an ExpressRoute circuit](../../../expressroute/expressroute-howto-circuit-portal-resource-manager.md).
- ![Create ExpressRoute circuit](media/configure-azure-oci-networking/exr-create-new.png)
-1. Note down your ExpressRoute **Service key**. You need to provide the key while configuring your FastConnect circuit.
+1. In the Azure portal, enter *ExpressRoute* in the search bar, and then select **ExpressRoute circuits**.
+1. Under **Express Route circuits**, select **Create**.
+1. Select your subscription, enter or create a resource group, and enter a name for your ExpressRoute. Select **Next: Configuration** to continue.
+1. Select **Oracle Cloud FastConnect** as the service provider and select your peering location.
+1. An Azure ExpressRoute circuit provides granular bandwidth options. FastConnect supports 1, 2, 5, or 10 Gbps. For **Bandwidth**, choose one of these matching bandwidth options.
- ![ExpressRoute Service key](media/configure-azure-oci-networking/exr-service-key.png)
+ :::image type="content" source="media/configure-azure-oci-networking/exr-create-new.png" alt-text="Screenshot shows the Create ExpressRoute circuit dialog." lightbox ="media/configure-azure-oci-networking/exr-create-new.png":::
- > [!IMPORTANT]
- > You will be billed for ExpressRoute charges as soon as the ExpressRoute circuit is provisioned (even if the **Provider Status** is **Not Provisioned**).
+1. Select **Review + create** to create your ExpressRoute.
-1. Carve out two private IP address spaces of /30 each that do not overlap with your Azure virtual network or OCI virtual cloud network IP Address space. We will refer to the first IP address space as primary address space and the second IP address space as secondary address space. Note down the addresses, which you need when configuring your FastConnect circuit.
-1. Create a Dynamic Routing Gateway (DRG). You will need this when creating your FastConnect circuit. For more information, see the [Dynamic Routing Gateway](https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/managingDRGs.htm) documentation.
-1. Create a FastConnect circuit under your Oracle tenant. For more information, see the [Oracle documentation](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/azure.htm).
+After you create your ExpressRoute, configure direct connectivity between ExpressRoute and FastConnect.
+
+1. Go to your new ExpressRoute and find the **Service key**. You need to provide the key while configuring your FastConnect circuit.
+
+ :::image type="content" source="media/configure-azure-oci-networking/exr-service-key.png" alt-text="Screenshot shows the Oracle ExpressRoute circuit with Service key." lightbox="media/configure-azure-oci-networking/exr-service-key.png":::
+
+ > [!IMPORTANT]
+ > You are billed for ExpressRoute charges as soon as the ExpressRoute circuit is provisioned, even if **Provider Status** is **Not Provisioned**.
+
+1. Carve out two private IP address spaces of `/30` each. Be sure that the spaces don't overlap with your Azure virtual network or OCI virtual cloud network IP Address space. The first IP address space is the *primary address space* and the second IP address space is the *secondary address space*. You need these addresses when you configure your FastConnect circuit.
+1. Create a Dynamic Routing Gateway (DRG). You need this gateway when you create your FastConnect circuit. For more information, see [Dynamic Routing Gateway](https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/managingDRGs.htm).
+1. Create a FastConnect circuit under your Oracle tenant. For more information, see [Access to Microsoft Azure](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/azure.htm).
- * Under FastConnect configuration, select **Microsoft Azure: ExpressRoute** as the provider.
- * Select the Dynamic Routing Gateway that you provisioned in the previous step.
- * Select the bandwidth to be provisioned. For optimal performance, the bandwidth must match the bandwidth selected when creating the ExpressRoute circuit.
- * In **Provider Service Key**, paste the ExpressRoute service key.
- * Use the first /30 private IP address space carved out in a previous step for the **Primary BGP IP Address** and the second /30 private IP address space for the **Secondary BGP IP** Address.
- * Assign the first useable address of the two ranges for the Oracle BGP IP Address (Primary and Secondary) and the second address to the customer BGP IP Address (from a FastConnect perspective). The first useable IP address is the second IP address in the /30 address space (the first IP address is reserved by Microsoft).
- * Click **Create**.
-1. Complete linking the FastConnect to virtual cloud network under your Oracle tenant via Dynamic Routing Gateway, using Route Table.
-1. Navigate to Azure and ensure that the **Provider Status** for your ExpressRoute circuit has changed to **Provisioned** and that a peering of type **Azure private** has been provisioned. This is a pre-requisite for the following steps.
+ 1. Under FastConnect configuration, select **Microsoft Azure: ExpressRoute** as the provider.
+ 1. Select the Dynamic Routing Gateway that you provisioned in the previous step.
+ 1. Select the bandwidth to be provisioned. For optimal performance, the bandwidth must match the bandwidth selected when creating the ExpressRoute circuit.
+ 1. In **Provider Service Key**, paste the ExpressRoute service key.
+ 1. Use the first `/30` private IP address space carved out in a previous step for the **Primary BGP IP Address** and the second `/30` private IP address space for the **Secondary BGP IP Address**.
+ 1. Assign the first useable address of the two ranges for the Oracle BGP IP Address (primary and secondary) and the second address to the customer BGP IP Address from a FastConnect perspective. The first useable IP address is the second IP address in the `/30` address space. Microsoft reserves the first IP address.
+ 1. Select **Create**.
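The `/30` BGP addressing rule above can be sketched with an illustrative range (assuming `10.90.0.0/30` is free in both your Azure virtual network and OCI virtual cloud network):

```shell
# /30 network 10.90.0.0: .0 is the network address, .3 is the broadcast address
base="10.90.0"
oracle_bgp="$base.1"    # first usable = second address in the /30 (Microsoft reserves the first)
customer_bgp="$base.2"  # second usable address, the customer BGP IP from the FastConnect perspective
echo "Oracle BGP: $oracle_bgp, Customer BGP: $customer_bgp"
```

Repeat the same assignment for the secondary `/30` address space.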
- ![ExpressRoute provider status](media/configure-azure-oci-networking/exr-provider-status.png)
-1. Click on the **Azure private** peering. You will see the peering details have automatically been configured based on the information you entered when setting up your FastConnect circuit.
+1. Complete linking the FastConnect to the virtual cloud network under your Oracle tenant via the Dynamic Routing Gateway, using the Route Table.
+1. Navigate to Azure and ensure that the **Provider Status** for your ExpressRoute circuit has changed to **Provisioned** and that a peering of type **Azure private** has been provisioned. This status is a prerequisite for the following step.
- ![Private peering settings](media/configure-azure-oci-networking/exr-private-peering.png)
+ :::image type="content" source="media/configure-azure-oci-networking/exr-provider-status.png" alt-text="Screenshot shows the Oracle ExpressRoute circuit with the ExpressRoute provider status highlighted." lightbox="media/configure-azure-oci-networking/exr-provider-status.png":::
+
+1. Select the **Azure private** peering. You see the peering details have automatically been configured based on the information you entered when setting up your FastConnect circuit.
+
+ :::image type="content" source="media/configure-azure-oci-networking/exr-private-peering.png" alt-text="Screenshot shows private peering settings." lightbox="media/configure-azure-oci-networking/exr-private-peering.png":::
## Connect virtual network to ExpressRoute
-1. Create a virtual network and virtual network gateway, if you haven't already. For details, see the [step-by-step guide](../../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md).
-1. Set up the connection between the virtual network gateway and your ExpressRoute circuit by executing the [Terraform script](https://github.com/microsoft/azure-oracle/tree/master/InterConnect-2) or by executing the PowerShell command to [Configure ExpressRoute FastPath](../../../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
+Create a virtual network and virtual network gateway, if you haven't already. For more information, see [Configure a virtual network gateway for ExpressRoute using the Azure portal](../../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md).
+
+Set up the connection between the virtual network gateway and your ExpressRoute circuit by using the [Terraform script](https://github.com/microsoft/azure-oracle/tree/master/InterConnect-2) or by using the PowerShell command to [Configure ExpressRoute FastPath](../../../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
-Once you have completed the network configuration, you can verify the validity of your configuration by clicking on **Get ARP Records** and **Get route table** under the ExpressRoute Private peering blade in the Azure portal.
+Once you have completed the network configuration, you can verify your configuration by selecting **Get ARP Records** and **Get route table** under the ExpressRoute Private peering page in the Azure portal.
## Automation
-Microsoft has created Terraform scripts to enable automated deployment of the network interconnect. The Terraform scripts need to authenticate with Azure before execution, because they require adequate permissions on the Azure subscription. Authentication can be performed using an [Azure Active Directory service principal](../../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or using the Azure CLI. For more information, see the [Terraform documentation](https://www.terraform.io/cli/auth).
+Microsoft has created Terraform scripts to enable automated deployment of the network interconnect. The Terraform scripts need to authenticate with Azure before they run, because they require adequate permissions on the Azure subscription. Authentication can be performed using an [Azure Active Directory service principal](../../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or using the Azure CLI. For more information, see [CLI Authentication](https://www.terraform.io/cli/auth).
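As a sketch of the service-principal option, the `ARM_*` environment variables read by Terraform's `azurerm` provider can be exported before running the scripts. The values below are placeholders:

```shell
# Placeholder service-principal credentials (assumption: azurerm provider reads ARM_* variables)
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"       # appId of the service principal
export ARM_CLIENT_SECRET="placeholder-secret"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
# Then, from the checked-out script directory:
#   terraform init && terraform plan
echo "ARM_CLIENT_ID set: ${ARM_CLIENT_ID:+yes}"
```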
-The Terraform scripts and related documentation to deploy the inter-connect can be found in this [GitHub repository](https://aka.ms/azureociinterconnecttf).
+For the Terraform scripts and related documentation to deploy the inter-connect, see [Azure-OCI Cloud Inter-Connect](https://aka.ms/azureociinterconnecttf).
## Monitoring
-Installing agents on both the clouds, you can leverage Azure [Network Performance Monitor (NPM)](../../../expressroute/how-to-npm.md) to monitor the performance of the end-to-end network. NPM helps you to readily identify network issues, and helps eliminate them.
+By installing agents in both clouds, you can use Azure [Network Performance Monitor](../../../expressroute/how-to-npm.md) to monitor the performance of the end-to-end network. Network Performance Monitor helps you readily identify and eliminate network issues.
## Delete the interconnect link
-To delete the interconnect, the following steps must be followed, in the specific order given. Failure to do so will result in a "failed state" ExpressRoute circuit.
+To delete the interconnect, perform these steps in the order given. Failure to do so results in a *failed state* ExpressRoute circuit.
-1. Delete the ExpressRoute connection. Delete the connection by clicking the **Delete** icon on the page for your connection. For more information, see the [ExpressRoute documentation](../../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md#clean-up-resources).
+1. Delete the ExpressRoute connection. Delete the connection by selecting the **Delete** icon on the page for your connection. For more information, see [Clean up resources](../../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md#clean-up-resources).
1. Delete the Oracle FastConnect from the Oracle Cloud Console.
1. Once the Oracle FastConnect circuit has been deleted, you can delete the Azure ExpressRoute circuit.
-At this point, the delete and deprovisioning process is complete.
+The delete and deprovisioning process is complete.
## Next steps
-* For more information about the cross-cloud connection between OCI and Azure, see the [Oracle documentation](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/azure.htm).
-* Use [Terraform scripts](https://aka.ms/azureociinterconnecttf) to deploy infrastructure for targeted Oracle applications over Azure, and configure the network interconnect.
+- For more information about the cross-cloud connection between OCI and Azure, see [Access to Microsoft Azure](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/azure.htm).
+- Use Terraform scripts to deploy infrastructure for targeted Oracle applications over Azure, and configure the network interconnect. For more information, see [Azure-OCI Cloud Inter-Connect](https://aka.ms/azureociinterconnecttf).
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
Last updated 03/22/2023

# Define dynamic network group membership in Azure Virtual Network Manager with Azure Policy
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
This tutorial peers virtual networks in the same region. You can also peer virtu
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
# [**Azure CLI**](#tab/create-peering-cli)
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
To use Azure CLI to create a Linux or Windows VM with Accelerated Networking ena
- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

-- [Azure PowerShell](/powershell/azure/install-az-ps) 1.0.0 or later installed. To find your currently installed version, run `Get-Module -ListAvailable Az`. If you need to install or upgrade, install the latest version of the Az module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az).
+- [Azure PowerShell](/powershell/azure/install-azure-powershell) 1.0.0 or later installed. To find your currently installed version, run `Get-Module -ListAvailable Az`. If you need to install or upgrade, install the latest version of the Az module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az).
- In PowerShell, sign in to your Azure account by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
Though effective routes were viewed through the VM in the previous steps, you ca
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
+You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
Get the effective routes for a network interface with [Get-AzEffectiveRouteTable](/powershell/module/az.network/get-azeffectiveroutetable). The following example gets the effective routes for a network interface named *myVMNic1*, that is in a resource group named *myResourceGroup*:
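A minimal sketch of that call, using the resource names from the surrounding text (*myVMNic1*, *myResourceGroup*):

```powershell
# Get the effective routes for the network interface myVMNic1
# in the resource group myResourceGroup.
Get-AzEffectiveRouteTable -NetworkInterfaceName myVMNic1 -ResourceGroupName myResourceGroup |
    Format-Table
```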
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
Though effective security rules were viewed through the VM, you can also view ef
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions)].
+You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
Get the effective security rules for a network interface with [Get-AzEffectiveNetworkSecurityGroup](/powershell/module/az.network/get-azeffectivenetworksecuritygroup). The following example gets the effective security rules for a network interface named *myVMVMNic*, that is in a resource group named *myResourceGroup*:
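A minimal sketch of that call, using the resource names from the surrounding text (*myVMVMNic*, *myResourceGroup*):

```powershell
# Get the effective security rules for the network interface myVMVMNic
# in the resource group myResourceGroup.
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName myVMVMNic -ResourceGroupName myResourceGroup
```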
virtual-network Add Dual Stack Ipv6 Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-powershell.md
In this article, you'll add IPv6 support to an existing virtual network. You'll
- Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
- An existing virtual network, public IP address and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address and a virtual machine, see [Quickstart: Create a Linux virtual machine in Azure with PowerShell](../../virtual-machines/linux/quick-create-powershell.md).
virtual-network Associate Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/associate-public-ip-address-vm.md
Install the [Azure CLI](/cli/azure/install-azure-cli?toc=%2fazure%2fvirtual-netw
## Azure PowerShell
-Install [Azure PowerShell](/powershell/azure/install-az-ps) on your machine, or use Cloud Shell. Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It includes Azure PowerShell preinstalled and configured to use with your Azure account. Select the **Open Cloudshell** button in the Azure PowerShell code examples that follow. When you select **Open Cloudshell**, Cloud Shell loads in your browser and prompts you to sign into your Azure account.
+Install [Azure PowerShell](/powershell/azure/install-azure-powershell) on your machine, or use Cloud Shell. Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It includes Azure PowerShell preinstalled and configured to use with your Azure account. Select the **Open Cloudshell** button in the Azure PowerShell code examples that follow. When you select **Open Cloudshell**, Cloud Shell loads in your browser and prompts you to sign into your Azure account.
1. If you're using Azure PowerShell locally, sign in to Azure with `Connect-AzAccount`.
virtual-network Configure Routing Preference Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-powershell.md
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
The steps in this article detail the process to:
- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary. - A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
> [!NOTE] > For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
The steps in this article detail the process to:
- A customer owned IPv4 range to provision in Azure. - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
> [!NOTE] > For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
virtual-network Create Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-powershell.md
In this quickstart, you'll learn how to create an Azure public IP address. Publi
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group An Azure resource group is a logical container into which Azure resources are deployed and managed.
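Creating the resource group is a single cmdlet call; the name and location below are illustrative:

```powershell
# Create a resource group to contain the resources in this quickstart.
New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus2'
```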
virtual-network Create Public Ip Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-powershell.md
When you create a public IP address resource, you can assign a static public IP
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Create Vm Dual Stack Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-powershell.md
In this article, you'll create a virtual machine in Azure with PowerShell. The v
- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). - Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
* A **static** basic SKU public IP address in your subscription. For more information, see [Create a basic public IP address using PowerShell](./create-public-ip-powershell.md?tabs=create-public-ip-basic%2Ccreate-public-ip-non-zonal%2Crouting-preference#create-public-ip). * Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Upgrade public IP address
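One common pattern for this upgrade is to retrieve the public IP resource, change its SKU, and write it back; the resource names below are illustrative, and the exact steps are in the linked article:

```powershell
# Retrieve the static Basic SKU public IP address.
$ip = Get-AzPublicIpAddress -Name 'myBasicPublicIP' -ResourceGroupName 'myResourceGroup'

# Change the SKU to Standard and apply the change.
$ip.Sku.Name = 'Standard'
Set-AzPublicIpAddress -PublicIpAddress $ip
```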
virtual-network Remove Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/remove-public-ip-address-vm.md
az network nic ip-config update \
## PowerShell
-Install [PowerShell](/powershell/azure/install-az-ps), or use the [Azure Cloud Shell](../../cloud-shell/overview.md). The Azure Cloud Shell is a free shell that you can run directly within the Azure portal. It has PowerShell preinstalled and configured to use with your account.
+Install [PowerShell](/powershell/azure/install-azure-powershell), or use the [Azure Cloud Shell](../../cloud-shell/overview.md). The Azure Cloud Shell is a free shell that you can run directly within the Azure portal. It has PowerShell preinstalled and configured to use with your account.
- If using PowerShell locally, sign in to Azure with `Connect-AzAccount`.
virtual-network Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-powershell.md
By default, the routing preference for public IP address is set to the Microsoft
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Virtual Network Deploy Static Pip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-ps.md
Public IP addresses have a [nominal charge](https://azure.microsoft.com/pricing/
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Virtual Network Multiple Ip Addresses Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md
This article explains how to add multiple IP addresses to a virtual machine usin
- PowerShell environment in [Azure Cloud Shell](https://shell.azure.com/powershell) or Azure PowerShell installed locally. To learn more about using PowerShell in Azure Cloud Shell, see [Azure Cloud Shell Quickstart](/azure/cloud-shell/quickstart?tabs=powershell).
- - If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` if necessary.
+ - If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` if necessary.
- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
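Signing in and selecting the subscription can be sketched as (the subscription name is illustrative):

```powershell
# Sign in to Azure and select the subscription to use for this feature.
Connect-AzAccount
Set-AzContext -Subscription 'My Subscription'
```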
virtual-network Virtual Network Network Interface Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-network-interface-addresses.md
If you don't have an Azure account with an active subscription, [create one for
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. Select the Cloud Shell icon from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure.
- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. Select the Cloud Shell icon from the top navigation bar of the Azure portal and then select **Bash** from the drop-down list.
virtual-network Virtual Networks Static Private Ip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md
A virtual machine (VM) is automatically assigned a private IP address from a ran
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
If you don't have an Azure account with an active subscription, [create one for
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure.
- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
If you don't have one, set up an Azure account with an active subscription. [Cre
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then choose **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Also run `Connect-AzAccount` to create a connection with Azure.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Also run `Connect-AzAccount` to create a connection with Azure.
- **Azure CLI users**: Run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash) or the Azure CLI running locally. Use Azure CLI version 2.0.31 or later if you're running the Azure CLI locally. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Also run `az login` to create a connection with Azure.
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md
Subnet delegation gives explicit permissions to the service to create service-sp
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network` if necessary.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create the virtual network
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
If you don't have an Azure account with an active subscription, [create one for
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure.
- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
virtual-network Migrate Classic Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/migrate-classic-vnet-powershell.md
When you migrate the virtual network from the classic to Resource Manager model,
## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- The steps and examples in this article use the Azure PowerShell Az module. To install the Az modules locally on your computer, see [Install Azure PowerShell](/powershell/azure/install-az-ps). To learn more about the new Az module, see [Introducing the new Azure PowerShell Az module](/powershell/azure/new-azureps-module-az). PowerShell cmdlets are updated frequently. If you aren't running the latest version, the values specified in the instructions may fail. To find the installed versions of PowerShell on your system, use the `Get-Module -ListAvailable Az` cmdlet.
+- The steps and examples in this article use the Azure PowerShell Az module. To install the Az modules locally on your computer, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). To learn more about the new Az module, see [Introducing the new Azure PowerShell Az module](/powershell/azure/new-azureps-module-az). PowerShell cmdlets are updated frequently. If you aren't running the latest version, the values specified in the instructions may fail. To find the installed versions of PowerShell on your system, use the `Get-Module -ListAvailable Az` cmdlet.
- To migrate a virtual network with an application gateway, remove the gateway before you run a prepare operation to move the network. After you complete the migration, reconnect the gateway in Azure Resource Manager. - Verify that you've installed both the classic and Az Azure PowerShell modules locally on your computer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/). - Azure ExpressRoute gateways that connect to ExpressRoute circuits in another subscription can't be migrated automatically. In these cases, remove the ExpressRoute gateway, migrate the virtual network, and re-create the gateway.
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
A virtual network is the fundamental building block for private networks in Azur
# [PowerShell](#tab/azure-powershell)
- 1. [Install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the cmdlets. You need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+ 1. [Install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. You need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
1. Run `Connect-AzAccount` to connect to Azure.
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
A virtual network is the fundamental building block for private networks in Azur
The steps in this quickstart run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
- You can also [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the cmdlets. The steps in this article require Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. The steps in this article require Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
If you run PowerShell locally, run `Connect-AzAccount` to connect to Azure.
virtual-network Virtual Network Powershell Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-filter-network-traffic.md
This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS, while outbound traffic to the internet from the back-end subnet isn't permitted. After running the script, you have one virtual machine with two NICs. Each NIC is connected to a different subnet.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-InstalledModule -Name Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-InstalledModule -Name Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md
This article outlines the necessary steps to create a dual stack virtual network
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in A
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Sample script
virtual-network Virtual Network Powershell Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-multi-tier-application.md
This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you'll have two virtual machines, one in each subnet, that you can deploy web server and MySQL software to.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
virtual-network Virtual Network Powershell Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md
This script sample creates and connects two virtual networks in the same region through the Azure network. After running the script, you'll have a peering between the two virtual networks.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
virtual-network Virtual Network Powershell Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-route-traffic-through-nva.md
This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script, you can deploy network software, such as a firewall application, to the VM.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create virtual networks
virtual-network Tutorial Create Route Table Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a route table
virtual-network Tutorial Filter Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a network security group
virtual-network Tutorial Restrict Network Access To Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a virtual network
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
If you don't have an Azure account with an active subscription, [create one for
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
virtual-network Virtual Network Manage Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-subnet.md
You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/ov
- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. -- If you [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+- If you [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
Also make sure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use `Get-InstalledModule -Name Az.Network`. To update, use the command `Update-Module -Name Az.Network`.
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface-vm.md
If you don't have one, set up an Azure account with an active subscription. [Cre
- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure.
- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/ov
- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. -- If you [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+- If you [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
Also make sure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use `Get-InstalledModule -Name "Az.Network"`. To update, use the command `Update-Module -Name Az.Network`.
virtual-network Virtual Network Nsg Manage Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-nsg-manage-log.md
You can use the [Azure portal](#azure-portal), [Azure PowerShell](#azure-powersh
You can run the commands in this section in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account.
-If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you run PowerShell locally, you also need to run the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
+If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you run PowerShell locally, you also need to run the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, create one by using the [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) cmdlet.
virtual-network Virtual Network Service Endpoint Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a virtual network
virtual-wan Virtual Wan Route Table Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md
Verify that you have met the following criteria:
## <a name="signin"></a>1. Sign in
-Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps). This is important because earlier versions of the cmdlets do not contain the current values that you need for this exercise.
+Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). This is important because earlier versions of the cmdlets do not contain the current values that you need for this exercise.
1. Open your PowerShell console with elevated privileges, and sign in to your Azure account. This cmdlet prompts you for the sign-in credentials. After signing in, it downloads your account settings so that they are available to Azure PowerShell.
web-application-firewall Configure Waf Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/configure-waf-custom-rules.md
If you want to run the Azure PowerShell in this article in one continuous script th
If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later.
-1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
2. To create a connection with Azure, run `Connect-AzAccount`. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
web-application-firewall Per Site Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/per-site-policies.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
web-application-firewall Tutorial Restrict Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
web-application-firewall Upgrade Ag Waf Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/upgrade-ag-waf-policy.md
See [Upgrade Web Application Firewall policies using Azure PowerShell](migrate-p
## Upgrade Application Gateway v1 to WAF v2 with WAF policy
+> [!IMPORTANT]
+> We announced the deprecation of the Application Gateway V1 SKU (Standard and WAF) on April 28, 2023; this SKU retires on April 28, 2026. For more information, see [Migrate your Application Gateways from V1 SKU to V2 SKU by April 28, 2026](../../application-gateway/v1-retirement.md).
+ Application Gateway v1 doesn't support WAF policy. Upgrading to WAF policy is a two-step process: - Upgrade Application Gateway v1 to v2.
Application Gateway v1 doesn't support WAF policy. Upgrading to WAF policy is a
## Next steps
-For more information about WAF on Application Gateway policy, see [Azure Web Application Firewall (WAF) policy overview](policy-overview.md).
+- For more information about WAF on Application Gateway policy, see [Azure Web Application Firewall (WAF) policy overview](policy-overview.md).
+- [Migrate your Application Gateways from V1 SKU to V2 SKU by April 28, 2026](../../application-gateway/v1-retirement.md)
web-application-firewall Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/scripts/waf-custom-rules-powershell.md
This script creates an Application Gateway Web Application Firewall that uses cu
If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later.
-1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
2. To create a connection with Azure, run `Connect-AzAccount`.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]