Updates from: 03/10/2022 02:10:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
A computer that's running either of the following:
# [Visual Studio](#tab/visual-studio)
-* [Visual Studio 2019 16.8 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019), with the ASP.NET and web development workload
-* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
+* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019), with the ASP.NET and web development workload
+* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)
# [Visual Studio Code](#tab/visual-studio-code) * [Visual Studio Code](https://code.visualstudio.com/download) * [C# for Visual Studio Code (latest version)](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
-* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
+* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)
To create the web app registration, do the following:
1. Select **App registrations**, and then select **New registration**. 1. Under **Name**, enter a name for the application (for example, *webapp1*). 1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
-1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:5001/signin-oidc`.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:44316/signin-oidc`.
1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox. 1. Select **Register**. 1. Select **Overview**.
Your final configuration file should look like the following JSON:
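The digest omits the JSON itself. For orientation only, a minimal hypothetical *appsettings.json* for an Azure AD B2C ASP.NET web app might look like the following; the tenant name, policy name, and client ID are placeholders, not values from the sample:

```json
{
  "AzureAdB2C": {
    "Instance": "https://contoso.b2clogin.com",
    "Domain": "contoso.onmicrosoft.com",
    "ClientId": "00000000-0000-0000-0000-000000000000",
    "SignUpSignInPolicyId": "B2C_1_susi",
    "SignedOutCallbackPath": "/signout/B2C_1_susi"
  }
}
```

Verify the exact keys against the full article before use.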
## Step 5: Run the sample web app 1. Build and run the project.
-1. Go to `https://localhost:5001`.
+1. Go to `https://localhost:44316`.
1. Select **Sign Up/In**. ![Screenshot of the "Sign Up/In" button on the project Welcome page.](./media/configure-authentication-sample-web-app/web-app-sign-in.png)
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The default token validation code is configured to use an Azure AD token and req
After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *TokenController.cs* in **SCIMReferenceCode** > **Microsoft.SCIM.WebHostSample** > **Controllers**.
> [!NOTE] > You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
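The Postman collection described above exercises CRUD operations against the SCIM endpoint. As a sketch of the kind of request body those tests send, the following builds a minimal SCIM 2.0 user resource; the attribute set is assumed from RFC 7643, not taken from the reference code:

```python
import json

def build_scim_user(user_name: str, given_name: str, family_name: str) -> dict:
    """Build a minimal SCIM 2.0 'create user' payload (RFC 7643 core schema)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given_name, "familyName": family_name},
        "active": True,
    }

# POST this body to {host}/scim/Users to create a user.
payload = build_scim_user("alice@contoso.com", "Alice", "Smith")
print(json.dumps(payload, indent=2))
```

A PATCH or DELETE against `{host}/scim/Users/{id}` follows the same pattern with the appropriate RFC 7644 request shape.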
To develop a SCIM-compliant user and group endpoint with interoperability for a
> [!div class="nextstepaction"] > [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
+> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 02/03/2022 Last updated : 03/09/2022 -+ # Azure Active Directory application provisioning: What's new Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## February 2022
+
+### Updated articles
+
+- [Reference for writing expressions for attribute mappings in Azure Active Directory](functions-for-customizing-application-data.md)
+ ## January 2022 ### Updated articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 02/03/2022 Last updated : 03/09/2022
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## February 2022
+
+### Updated articles
+
+- [Plan an Azure AD Application Proxy deployment](application-proxy-deployment-plan.md)
+- [Secure access to on-premises APIs with Azure Active Directory Application Proxy](application-proxy-secure-api-access.md)
+ ## January 2022 ### Updated articles
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html | | Feitian | ![y] | ![y]| ![y]| ![y]| ![y] | https://shop.ftsafe.us/pages/microsoft | | Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ |
+| Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication |
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | | HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Previously updated : 02/24/2022 Last updated : 03/09/2022
This article describes how to onboard an Amazon Web Services (AWS) account on Cl
- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod). - To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
+## View a training video on configuring and onboarding an AWS account
+
+To view a video on how to configure and onboard AWS accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+ ## Onboard an AWS account 1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Previously updated : 02/24/2022 Last updated : 03/09/2022
To add CloudKnox to your Azure AD tenant:
- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod). - To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
-## View a training video on enabling CloudKnox
+## View a training video on enabling CloudKnox in your Azure AD tenant
-To view a video on how to enable CloudKnox in your Azure AD tenant, select
-[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
## How to onboard an Azure subscription
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 02/24/2022 Last updated : 03/09/2022
To enable CloudKnox in your organization:
## View a training video on enabling CloudKnox
-To view a video on how to enable CloudKnox in your Azure AD tenant, select
-[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+- To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+- To view a video on how to configure and onboard AWS accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
## How to enable CloudKnox on your Azure AD tenant
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Previously updated : 02/24/2022 Last updated : 03/09/2022
To view step-by-step training videos on how to use CloudKnox Permissions Managem
To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+## Configure and onboard Amazon Web Services (AWS) accounts
+
+To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in CloudKnox Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
++ <!## Privilege on demand (POD) work flows - View a step-by-step video on the [privilege on demand (POD) work flow from the Just Enough Permissions (JEP) Controller](https://vimeo.com/461508166/3d88107f41).
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
Previously updated : 10/19/2021 Last updated : 03/04/2022
Run the [IdFix tool](/office365/enterprise/prepare-directory-attributes-for-sync
- If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service. - If your firewall or proxy allows you to specify safe suffixes, add connections to \*.msappproxy.net and \*.servicebus.windows.net. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+ - If you are installing against the **US government** cloud, and your firewall or proxy allows you to specify safe suffixes, add connections to:
+ - *.microsoftonline.us
+ - *.microsoft.us
+ - *.msappproxy.us
+ - *.windowsazure.us
+
- Your agents need access to login.windows.net and login.microsoftonline.com for initial registration. Open your firewall for those URLs as well. - For certificate validation, unblock the following URLs: mscrl.microsoft.com:80, crl.microsoft.com:80, ocsp.msocsp.com:80, and www\.microsoft.com:80. These URLs are used for certificate validation with other Microsoft products, so you might already have these URLs unblocked.
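The suffix-based allow rules described above can be sketched as a small helper (hypothetical, for illustration only) that checks whether a hostname is covered by wildcard suffixes such as `*.msappproxy.net`:

```python
from fnmatch import fnmatch

def is_allowed(host: str, allowlist: list[str]) -> bool:
    """Return True if the host matches any wildcard pattern in the allowlist."""
    return any(fnmatch(host, pattern) for pattern in allowlist)

allowlist = ["*.msappproxy.net", "*.servicebus.windows.net"]
print(is_allowed("contoso.msappproxy.net", allowlist))  # covered by a suffix rule
print(is_allowed("login.windows.net", allowlist))       # needs its own explicit rule
```

Note that registration endpoints like `login.windows.net` don't match either suffix, which is why the text calls them out separately.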
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-troubleshoot.md
ms.technology: identity-adfs
# Cloud sync troubleshooting
-Cloud sync touches many different things and has many different dependencies. This broad scope can give rise to various problems. This article helps you troubleshoot these problems. It introduces the typical areas for you to focus on, how to gather additional information, and the various techniques you can use to track down problems.
--
-## Common troubleshooting areas
-
-|Name|Description|
-|--|--|
-|[Agent problems](#agent-problems)|Verify that the agent was installed correctly and that it communicates with Azure Active Directory (Azure AD).|
-|[Object synchronization problems](#object-synchronization-problems)|Use provisioning logs to troubleshoot object synchronization problems.|
-|[Provisioning quarantined problems](#provisioning-quarantined-problems)|Understand provisioning quarantine problems and how to fix them.|
-|[Password writeback](#password-writeback)|Understand common password writeback issues and how to fix them.|
-
+Cloud sync has many different dependencies and interactions, which can give rise to various problems. This article helps you troubleshoot these problems. It introduces the typical areas for you to focus on, how to gather additional information, and the various techniques you can use to track down problems.
## Agent problems
-Some of the first things that you want to verify with the agent are:
-- Is it installed?-- Is the agent running locally?-- Is the agent in the portal?-- Is the agent marked as healthy?
+When you troubleshoot agent problems, you verify that the agent was installed correctly, and that it communicates with Azure Active Directory (Azure AD). In particular, some of the first things that you want to verify with the agent are:
+
+- Is it installed?
+- Is the agent running locally?
+- Is the agent in the portal?
+- Is the agent marked as healthy?
-These items can be verified in the Azure portal and on the local server that's running the agent.
+You can verify these items in the Azure portal and on the local server that's running the agent.
### Azure portal agent verification
-To verify that the agent is seen by Azure and is healthy, follow these steps.
+To verify that Azure detects the agent, and that the agent is healthy, follow these steps:
1. Sign in to the Azure portal. 1. On the left, select **Azure Active Directory** > **Azure AD Connect**. In the center, select **Manage sync**. 1. On the **Azure AD Connect cloud sync** screen, select **Review all agents**.
- ![Review all agents](media/how-to-install/install-7.png)</br>
-
-1. On the **On-premises provisioning agents** screen, you see the agents you've installed. Verify that the agent in question is there and is marked *Healthy*.
+ ![Screenshot that shows the option to review all agents.](media/how-to-install/install-7.png)
- ![On-premises provisioning agents screen](media/how-to-install/install-8.png)</br>
+1. On the **On-premises provisioning agents** screen, you see the agents you've installed. Verify that the agent in question is there. If all is well, you will see the *active* (green) status for the agent.
-### Verify the required open ports
+ ![Screenshot that shows the installed agent, and its status.](media/how-to-install/install-8.png)
-Verify that Azure AD Connect Provisioning agent is able to communicate successfully with Azure data centers. If there's a firewall in the path, make sure that the following ports to outbound traffic are open.
+### Verify the required open ports
-Open the following ports to **outbound** traffic.
+Verify that the Azure AD Connect provisioning agent is able to communicate successfully with Azure datacenters. If there's a firewall in the path, make sure that the following ports to outbound traffic are open:
| Port number | How it's used | | -- | |
-| 80 | Downloading certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
-| 443 | All outbound communication with the Application Proxy service |
+| 80 | Downloading certificate revocation lists (CRLs), while validating the TLS/SSL certificate. |
+| 443 | Handling all outbound communication with the Application Proxy service. |
-If your firewall enforces traffic according to originating users, also open ports 80 and 443 for traffic from Windows services that run as a Network Service.
+If your firewall enforces traffic according to originating users, also open ports 80 and 443 for traffic from Windows services that run as a network service.
### Allow access to URLs
Allow access to the following URLs:
| URL | Port | How it's used | | | | |
-| `*.msappproxy.net` <br> `*.servicebus.windows.net` | 443/HTTPS | Communication between the connector and the Application Proxy cloud service |
+| `*.msappproxy.net` <br> `*.servicebus.windows.net` | 443/HTTPS | Communication between the connector and the Application Proxy cloud service. |
| `crl3.digicert.com` <br> `crl4.digicert.com` <br> `ocsp.digicert.com` <br> `crl.microsoft.com` <br> `oneocsp.microsoft.com` <br> `ocsp.msocsp.com`<br> | 80/HTTP | The connector uses these URLs to verify certificates. | | `login.windows.net` <br> `secure.aadcdn.microsoftonline-p.com` <br> `*.microsoftonline.com` <br> `*.microsoftonline-p.com` <br> `*.msauth.net` <br> `*.msauthimages.net` <br> `*.msecnd.net` <br> `*.msftauth.net` <br> `*.msftauthimages.net` <br> `*.phonefactor.net` <br> `enterpriseregistration.windows.net` <br> `management.azure.com` <br> `policykeyservice.dc.ad.msft.net` <br> `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 443/HTTPS | The connector uses these URLs during the registration process. | | `ctldl.windowsupdate.com` | 80/HTTP | The connector uses this URL during the registration process. |
-You can allow connections to `*.msappproxy.net`, `*.servicebus.windows.net`, and other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
+You can allow connections to `*.msappproxy.net`, `*.servicebus.windows.net`, and the other preceding URLs, if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and service tags - public cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
> [!IMPORTANT]
-> Avoid all forms of inline inspection and termination on outbound TLS communications between Azure AD Application Proxy connectors and Azure AD Application Proxy Cloud services.
+> Avoid all forms of inline inspection and termination on outbound TLS communications between Azure AD Application Proxy connectors and Azure AD Application Proxy cloud services.
### DNS name resolution for Azure AD Application Proxy endpoints
-Public DNS records for Azure AD Application Proxy endpoints are chained CNAME records pointing to an A record. This ensures fault tolerance and flexibility. It's guaranteed that the Azure AD Application Proxy Connector always accesses host names with the domain suffixes `*.msappproxy.net` or `*.servicebus.windows.net`. However, during the name resolution the CNAME records might contain DNS records with different host names and suffixes. Due to this, you must ensure that the device (depending on your setup - connector server, firewall, outbound proxy) can resolve all the records in the chain and allows connection to the resolved IP addresses. Since the DNS records in the chain might be changed from time to time, we cannot provide you with any list DNS records.
+Public DNS records for Azure AD Application Proxy endpoints are chained CNAME records, pointing to an A record. This ensures fault tolerance and flexibility. It's guaranteed that the Azure AD Application Proxy connector always accesses host names with the domain suffixes `*.msappproxy.net` or `*.servicebus.windows.net`.
+
+However, during the name resolution, the CNAME records might contain DNS records with different host names and suffixes. Due to this, you must ensure that the device can resolve all the records in the chain, and allow connections to the resolved IP addresses. Because the DNS records in the chain might be changed from time to time, we can't provide you with any list of DNS records.
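The chain-walking requirement above can be illustrated with a toy resolver. The DNS view below is entirely invented (real chains change over time, which is why no fixed list exists); the point is that every name in the chain, not just the first, must be resolvable and allowed:

```python
# Mock DNS view: name -> (record type, target). Purely illustrative data.
mock_dns = {
    "contoso.msappproxy.net": ("CNAME", "g001.cloudapp.example.net"),
    "g001.cloudapp.example.net": ("CNAME", "edge.example.net"),
    "edge.example.net": ("A", "203.0.113.10"),
}

def resolve_chain(name: str, dns: dict) -> list[str]:
    """Collect every hostname in a CNAME chain, ending at the A record."""
    chain = [name]
    while dns.get(name, ("A",))[0] == "CNAME":
        name = dns[name][1]
        chain.append(name)
    return chain

# Each of these names needs to be resolvable and permitted by the firewall/proxy.
print(resolve_chain("contoso.msappproxy.net", mock_dns))
```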
### On the local server
-To verify that the agent is running, follow these steps.
+To verify that the agent is running, follow these steps:
-1. On the server with the agent installed, open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
-1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are there and their status is *Running*.
+1. On the server with the agent installed, open **Services**. Do this by going to **Start** > **Run** > **Services.msc**.
+1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are there. Also confirm that their status is *Running*.
- ![Services screen](media/how-to-troubleshoot/troubleshoot-1.png)
+ ![Screenshot of local services and their status.](media/how-to-troubleshoot/troubleshoot-1.png)
### Common agent installation problems
-The following sections describe some common agent installation problems and typical resolutions.
+The following sections describe some common agent installation problems, and typical resolutions of those problems.
#### Agent failed to start You might receive an error message that states:
-**Service 'Microsoft Azure AD Connect Provisioning Agent' failed to start. Verify that you have sufficient privileges to start the system services.**
+*Service 'Microsoft Azure AD Connect Provisioning Agent' failed to start. Verify that you have sufficient privileges to start the system services.*
-This problem is typically caused by a group policy that prevented permissions from being applied to the local NT Service log-on account created by the installer (NT SERVICE\AADConnectProvisioningAgent). These permissions are required to start the service.
+This problem is typically caused by a group policy. The policy prevented permissions from being applied to the local NT Service sign-in account created by the installer (`NT SERVICE\AADConnectProvisioningAgent`). These permissions are required to start the service.
-To resolve this problem, follow these steps.
+To resolve this problem, follow these steps:
1. Sign in to the server with an administrator account.
-1. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
+1. Open **Services** by going to **Start** > **Run** > **Services.msc**.
1. Under **Services**, double-click **Microsoft Azure AD Connect Provisioning Agent**. 1. On the **Log On** tab, change **This account** to a domain admin. Then restart the service.
- ![Log On tab](media/how-to-troubleshoot/troubleshoot-3.png)
+ ![Screenshot that shows options available from the log on tab.](media/how-to-troubleshoot/troubleshoot-3.png)
-#### Agent times out or certificate is invalid
+#### Agent times out or certificate isn't valid
You might get the following error message when you attempt to register the agent.
-![Time-out error message](media/how-to-troubleshoot/troubleshoot-4.png)
+![Screenshot that shows a time-out error message.](media/how-to-troubleshoot/troubleshoot-4.png)
+
+This problem is usually caused by the agent being unable to connect to the hybrid identity service. To resolve this problem, configure an outbound proxy.
-This problem is usually caused by the agent being unable to connect to the Hybrid Identity Service and requires you to configure an HTTP proxy. To resolve this problem, configure an outbound proxy.
+The provisioning agent supports the use of an outbound proxy. You can configure it by editing the following agent .config file: *C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config*.
-The provisioning agent supports use of an outbound proxy. You can configure it by editing the agent config file *C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config*.
-Add the following lines into it, toward the end of the file just before the closing `</configuration>` tag.
-Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server name and port values.
+Add the following lines into it, toward the end of the file, just before the closing `</configuration>` tag. Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server name and port values.
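The digest truncates the XML at this point. The `defaultProxy` section typically has roughly the following shape; treat this as a sketch and verify it against the full article before editing the agent's .config file:

```xml
<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
    <proxy usesystemdefault="true"
           proxyaddress="http://[proxy-server]:[proxy-port]"
           bypassonlocal="true" />
  </defaultProxy>
</system.net>
```

Restart the provisioning agent service after saving the file so the new proxy settings take effect.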
```xml <system.net>
Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server
#### Agent registration fails with security error
-You might get an error message when you install the cloud provisioning agent.
+You might get an error message when you install the cloud provisioning agent. This problem is typically caused by the agent being unable to run the PowerShell registration scripts, due to local PowerShell execution policies.
-This problem is typically caused by the agent being unable to execute the PowerShell registration scripts due to local PowerShell execution policies.
-
-To resolve this problem, change the PowerShell execution policies on the server. You need to have Machine and User policies set as *Undefined* or *RemoteSigned*. If they're set as *Unrestricted*, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies).
+To resolve this problem, change the PowerShell execution policies on the server. You need to have machine and user policies set as `Undefined` or `RemoteSigned`. If they're set as `Unrestricted`, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies).
### Log files
-By default, the agent emits minimal error messages and stack trace information. You can find these trace logs in the folder **C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace**.
+By default, the agent emits minimal error messages and stack trace information. You can find these trace logs in the following folder: *C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace*.
To gather additional details for troubleshooting agent-related problems, follow these steps.
-1. Install the AADCloudSyncTools PowerShell module as described [here](reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
-2. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. You can use the following switches to fine tune your data collection.
- - SkipVerboseTrace to only export current logs without capturing verbose logs (default = false)
- - TracingDurationMins to specify a different capture duration (default = 3 mins)
- OutputPath to specify a different output path (default = User's Documents)
-
+1. [Install the AADCloudSyncTools PowerShell module](reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
+1. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. You can use the following options to fine-tune your data collection.
+ - `SkipVerboseTrace` to only export current logs without capturing verbose logs (default = false).
+ - `TracingDurationMins` to specify a different capture duration (default = 3 minutes).
+ - `OutputPath` to specify a different output path (default = user's Documents folder).
## Object synchronization problems
-The following section contains information on troubleshooting object synchronization.
-
-### Provisioning logs
+In the Azure portal, you can use provisioning logs to help track down and troubleshoot object synchronization problems. To view the logs, select **Logs**.
-In the Azure portal, provisioning logs can be used to help track down and troubleshoot object synchronization problems. To view the logs, select **Logs**.
-
-![Logs button](media/how-to-troubleshoot/log-1.png)
+![Screenshot that shows the logs button.](media/how-to-troubleshoot/log-1.png)
Provisioning logs provide a wealth of information on the state of the objects being synchronized between your on-premises Active Directory environment and Azure.
-![Provisioning Logs screen](media/how-to-troubleshoot/log-2.png)
+![Screenshot that shows information about provisioning logs.](media/how-to-troubleshoot/log-2.png)
-You can use the drop-down boxes at the top of the page to filter the view to zero in on specific problems, such as dates. Double-click an individual event to see additional information.
+You can filter the view to focus on specific problems, such as dates. Double-click an individual event to see additional information.
-![Provisioning Logs drop-down box information](media/how-to-troubleshoot/log-3.png)
+![Screenshot that shows the provisioning logs dropdown list information.](media/how-to-troubleshoot/log-3.png)
This information provides detailed steps and where the synchronization problem is occurring. In this way, you can pinpoint the exact spot of the problem. - ## Provisioning quarantined problems
-Cloud sync monitors the health of your configuration and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error, for example, invalid admin credentials, the sync job is marked as in quarantine.
+Cloud sync monitors the health of your configuration, and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error (for example, invalid admin credentials), the sync job is marked as in quarantine.
-![Quarantine status](media/how-to-troubleshoot/quarantine-1.png)
+![Screenshot that shows the quarantine status.](media/how-to-troubleshoot/quarantine-1.png)
By selecting the status, you can see additional information about the quarantine. You can also obtain the error code and message. ![Screenshot that shows additional information about the quarantine.](media/how-to-troubleshoot/quarantine-2.png)
-Right-clicking on the status will bring up additional options:
+Right-clicking on the status will bring up additional options to:
-- view provisioning logs-- view agent-- clear quarantine
+- View the provisioning logs.
+- View the agents.
+- Clear the quarantine.
![Screenshot that shows the right-click menu options.](media/how-to-troubleshoot/quarantine-4.png) ### Resolve a quarantine
-There are two different ways to resolve a quarantine. They are:
+There are two different ways to resolve a quarantine. You can clear the quarantine, or you can restart the provisioning job.
-- clear quarantine - clears the watermark and runs a delta sync-- restart the provisioning job - clears the watermark and runs an initial sync
+#### Clear the quarantine
-#### Clear quarantine
-
-To clear the watermark and run a delta sync on the provisioning job once you have verified it, simply right-click on the status and select **clear quarantine**.
+To clear the watermark and run a delta sync on the provisioning job after you have verified it, simply right-click on the status and select **Clear quarantine**.
You should see a notice that the quarantine is clearing.
Then you should see the status on your agent as healthy.
-![Quarantine status information](media/how-to-troubleshoot/quarantine-6.png)
+![Screenshot that shows the agent status is healthy.](media/how-to-troubleshoot/quarantine-6.png)
#### Restart the provisioning job
-Use the Azure portal to restart the provisioning job. On the agent configuration page, select **Restart provisioning**.
+Use the Azure portal to restart the provisioning job. On the agent configuration page, select **Restart sync**.
- ![Restart provisioning](media/how-to-troubleshoot/quarantine-3.png)
+ ![Screenshot that shows options on the agent configuration page.](media/how-to-troubleshoot/quarantine-3.png)
-- Use Microsoft Graph to [restart the provisioning job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). You'll have full control over what you restart. You can choose to clear:
+Alternatively, you can use Microsoft Graph to [restart the provisioning job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). You have full control over what you restart. You can choose to clear:
- - Escrows, to restart the escrow counter that accrues toward quarantine status.
- - Quarantine, to remove the application from quarantine.
- - Watermarks.
+- Escrows, to restart the escrow counter that accrues toward quarantine status.
+- Quarantine, to remove the application from quarantine.
+- Watermarks.
- Use the following request:
+Use the following request:
`POST /servicePrincipals/{id}/synchronization/jobs/{jobId}/restart`
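As a sketch of issuing that Graph request, the snippet below builds the call and only prints it, so nothing is sent; the service principal ID, job ID, and `resetScope` body are illustrative placeholders:

```shell
# Placeholder IDs -- substitute your own service principal and sync job IDs.
SP_ID="00000000-0000-0000-0000-000000000000"
JOB_ID="AD2AADProvisioning.00000000000000000000000000000000"

# resetScope controls what the restart clears (for example Escrows,
# Watermark, QuarantineState); the combination here is illustrative.
URL="https://graph.microsoft.com/beta/servicePrincipals/${SP_ID}/synchronization/jobs/${JOB_ID}/restart"
BODY='{"criteria": {"resetScope": "Watermark, Escrows, QuarantineState"}}'

# Dry run: print the request instead of sending it. To send it for real,
# add a valid Graph bearer token, for example:
#   curl -X POST "$URL" -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/json" -d "$BODY"
echo "POST $URL"
echo "$BODY"
```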
-## Repairing the the Cloud Sync service account
+## Repair the cloud sync service account
-If you need to repair the cloud sync service account you can use the `Repair-AADCloudSyncToolsAccount`.
+If you need to repair the cloud sync service account, you can use the `Repair-AADCloudSyncToolsAccount` command.
- 1. Use the installation steps outlined [here](reference-powershell.md#install-the-aadcloudsynctools-powershell-module) to begin and then continue with the remaining steps.
+ 1. [Install the AADCloudSyncTools PowerShell module](reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
- 2. From a PowerShell session with administrative privileges, type or copy and paste the following:
+ 1. From a PowerShell session with administrative privileges, type, or copy and paste, the following:
```powershell Connect-AADCloudSyncTools ```
- 3. Enter your Azure AD global admin credentials.
+ 1. Enter your Azure AD global admin credentials.
- 4. Type or copy and paste the following:
+ 1. Type, or copy and paste, the following:
```powershell Repair-AADCloudSyncToolsAccount ```
- 5. Once this completes it should say that the account was repaired successfully.
+ 1. After this completes, it should say that the account was repaired successfully.
## Password writeback
-The following information is important to keep in mind with regard to enabling and using password writeback with cloud sync.
--- If you need to update the [gMSA permissions](how-to-gmsa-cmdlets.md#using-set-aadcloudsyncpermissions), it may take up to an hour or more for these permissions to replicate to all the objects in your directory. If you don't assign these permissions, writeback may appear to be configured correctly, but users may encounter errors when they update their on-premises passwords from the cloud. Permissions must be applied to ΓÇ£This object and all descendant objectsΓÇ¥ for **Unexpire Password** to appear. -- If passwords for some user accounts aren't written back to the on-premises directory, make sure that inheritance isn't disabled for the account in the on-prem AD DS environment. Write permissions for passwords must be applied to descendant objects for the feature to work correctly. -- Password policies in the on-premises AD DS environment may prevent password resets from being correctly processed. If you are testing this feature and want to reset passwords for users more than once per day, the group policy for Minimum password age must be set to 0. This setting can be found under **Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies** within **gpmc.msc**.
- - If you update the group policy, wait for the updated policy to replicate, or use the gpupdate /force command.
- - For passwords to be changed immediately, Minimum password age must be set to 0. However, if users adhere to the on-premises policies, and the Minimum password age is set to a value greater than zero, password writeback will not work after the on-premises policies are evaluated.
--
+To enable and use password writeback with cloud sync, keep the following in mind:
+- If you need to update the [gMSA permissions](how-to-gmsa-cmdlets.md#using-set-aadcloudsyncpermissions), it might take an hour or more for these permissions to replicate to all the objects in your directory. If you don't assign these permissions, writeback can appear to be configured correctly, but users might encounter errors when they update their on-premises passwords from the cloud. Permissions must be applied to **This object and all descendant objects** for **Unexpire Password** to appear.
+- If passwords for some user accounts aren't written back to the on-premises directory, make sure that inheritance isn't disabled for the account in the on-premises Active Directory Domain Services (AD DS) environment. Write permissions for passwords must be applied to descendant objects for the feature to work correctly.
+- Password policies in the on-premises AD DS environment might prevent password resets from being correctly processed. If you're testing this feature and want to reset passwords for users more than once per day, the group policy for the minimum password age must be set to 0. You can find this setting in the following location: **Computer Configuration** > **Policies** > **Windows Settings** > **Security Settings** > **Account Policies**, within **gpmc.msc**.
+ - If you update the group policy, wait for the updated policy to replicate, or use the `gpupdate /force` command.
+ - For passwords to be changed immediately, the minimum password age must be set to 0. However, if users adhere to the on-premises policies, and the minimum password age is set to a value greater than 0, password writeback doesn't work after the on-premises policies are evaluated.
## Next steps
active-directory Msal Net Web Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-web-browsers.md
IPublicClientApplication pca = PublicClientApplicationBuilder
> [!Note] > If you configure `http://localhost`, internally MSAL.NET will find a random open port and use it.
-### Linux and MAC
+### Linux and macOS
-On Linux, MSAL.NET will open the default OS browser using the xdg-open tool. To troubleshoot, run the tool from a terminal, for example, `xdg-open "https://www.bing.com"`. On Mac, the browser is opened by invoking `open <url>`.
+On Linux, MSAL.NET opens the default OS browser with a tool like [xdg-open](http://manpages.ubuntu.com/manpages/focal/man1/xdg-open.1.html). Opening the browser with `sudo` is unsupported by MSAL and will cause MSAL to throw an exception.
-### Customizing the experience
+On macOS, the browser is opened by invoking `open <url>`.
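A minimal shell sketch of that per-OS selection (it prints the launch command rather than executing it, so it's safe in a headless environment):

```shell
# Pick the platform's URL opener the same way MSAL.NET does:
# xdg-open on Linux (never under sudo), open on macOS.
url="https://www.bing.com"

case "$(uname -s)" in
  Linux)  opener="xdg-open" ;;
  Darwin) opener="open" ;;
  *)      opener="" ;;
esac

# Print instead of launching; replace echo with "$opener" "$url" to launch.
echo "${opener:-unsupported} $url"
```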
-> [!NOTE]
-> Customization is available in MSAL.NET 4.1.0 or later.
+### Customizing the experience
-MSAL.NET is able to respond with an HTTP message when a token is received or in case of error. You can display an HTML message or redirect to an URL of your choice:
+MSAL.NET can respond with an HTTP message or HTTP redirect when a token is received or an error occurs.
```csharp var options = new SystemWebViewOptions()
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
First, follow these steps to create an example scope named `Employees.Read.All`:
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-expose-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration. 1. Select **Azure Active Directory** > **App registrations**, and then select your API's app registration.
-1. Select **Expose an API** > **Add a scope**.
+1. Select **Expose an API**.
+1. Select **Set** next to **Application ID URI** if you haven't yet configured one.
+
+ You can use the default value of `api://<application-client-id>` or another [supported App ID URI pattern](reference-app-manifest.md#identifieruris-attribute). The App ID URI acts as the prefix for the scopes you'll reference in your API's code, and it must be globally unique.
+1. Select **Add a scope**:
:::image type="content" source="media/quickstart-configure-app-expose-web-apis/portal-02-expose-api.png" alt-text="An app registration's Expose an API pane in the Azure portal":::
-1. You're prompted to set an **Application ID URI** if you haven't yet configured one.
- The App ID URI acts as the prefix for the scopes you'll reference in your API's code, and it must be globally unique. You can use the default value provided, which is in the form `api://<application-client-id>`, or specify a more readable URI like `https://contoso.com/api`.
-
- More information on valid app ID URI patterns is available in the [Azure AD app manifest reference](reference-app-manifest.md).
1. Next, specify the scope's attributes in the **Add a scope** pane. For this walk-through, you can use the example values or specify your own.
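To make the prefix/scope relationship concrete, the fully qualified value a client app requests is the App ID URI plus the scope name; both values below are placeholders:

```shell
# Placeholder App ID URI (the default api://<application-client-id> form)
# and the example scope created in this walk-through.
APP_ID_URI="api://11111111-1111-1111-1111-111111111111"
SCOPE_NAME="Employees.Read.All"

# The full scope string a client includes when acquiring an access token:
echo "${APP_ID_URI}/${SCOPE_NAME}"
```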
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md
The data retention policy isn't configurable. Once the data is permanently delet
## Next steps
-* [Enterprise State Roaming overview](enterprise-state-roaming-overview.md)
* [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml) * [Group Policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md) * [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 02/03/2022 Last updated : 03/09/2022
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## February 2022
+
+### Updated articles
+
+- [Add Google as an identity provider for B2B guest users](google-federation.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
+- [B2B collaboration overview](what-is-b2b.md)
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
+- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
+- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
+- [Azure Active Directory B2B collaboration FAQs](faq.yml)
+- [Email one-time passcode authentication](one-time-passcode.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
+ ## January 2022 ### Updated articles
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
A multi-stage review allows the administrator to define two or three sets of reviewers to complete a review one after another. In a single-stage review, all reviewers make a decision within the same period and the last reviewer to make a decision "wins". In a multi-stage review, two or three independent sets of reviewers make a decision within their own stage, and the next stage doesn't happen until a decision is made in the previous stage. Multi-stage reviews can be used to reduce the burden on later-stage reviewers, allow for escalation of reviewers, or have independent groups of reviewers agree on decisions. > [!WARNING]
-> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series.
+> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series. For general information about GDPR and protecting user data, see the [GDPR section of the Microsoft Trust Center](https://www.microsoft.com/trust-center/privacy/gdpr-overview) and the [GDPR section of the Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
1. After you have selected the resource and scope of your review, move on to the **Reviews** tab.
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-multiple-domains.md
na Previously updated : 01/21/2022 Last updated : 03/09/2022
By using the PowerShell command `Get-MsolDomainFederationSettings -DomainName <y
![Screenshot that shows the federation settings updated on the original domain.](./media/how-to-connect-install-multiple-domains/MsolDomainFederationSettings.png)
-And the IssuerUri on the new domain has been set to `https://bmfabrikam.com/adfs/services/trust`
+And the IssuerUri on the new domain has been set to `https://bmcontoso.com/adfs/services/trust`
![Get-MsolDomainFederationSettings](./media/how-to-connect-install-multiple-domains/settings2.png)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 02/03/2022 Last updated : 03/09/2022
reviewer: napuri
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## February 2022
+
+### New articles
+
+- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
+- [Tutorial: Govern and monitor applications](tutorial-govern-monitor.md)
+- [Properties of an enterprise application](application-properties.md)
+
+### Updated articles
+
+- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
+- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)
+- [Home Realm Discovery for an application](home-realm-discovery-policy.md)
+- [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md)
+- [Configure enterprise application properties](add-application-portal-configure.md)
+- [What is application management in Azure Active Directory?](what-is-application-management.md)
+- [Overview of enterprise application ownership in Azure Active Directory](overview-assign-app-owners.md)
+- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)
+- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)
+- [Tutorial: Configure F5 BIG-IP's Access Policy Manager for header-based SSO](f5-big-ip-header-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
+ ## January 2022 ### New articles
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-send-request.md
Once you have this information, you can make requests to all the backend systems
```xml <send-request mode="new" response-variable-name="revenuedata" timeout="20" ignore-error="true">
- <set-url>@($"https://accounting.acme.com/salesdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+ <set-url>@($"https://accounting.acme.com/salesdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
<set-method>GET</set-method> </send-request> <send-request mode="new" response-variable-name="materialdata" timeout="20" ignore-error="true">
- <set-url>@($"https://inventory.acme.com/materiallevels?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+ <set-url>@($"https://inventory.acme.com/materiallevels?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
<set-method>GET</set-method> </send-request> <send-request mode="new" response-variable-name="throughputdata" timeout="20" ignore-error="true">
-<set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+<set-url>@($"https://production.acme.com/throughput?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
<set-method>GET</set-method> </send-request> <send-request mode="new" response-variable-name="accidentdata" timeout="20" ignore-error="true">
-<set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")"</set-url>
+<set-url>@($"https://production.acme.com/accidentdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["fromDate"]}")</set-url>
<set-method>GET</set-method> </send-request> ```
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
For configurations specific to the *external* mode, where the service endpoints
1. Continue configuring VNet settings for the remaining locations of your API Management instance. 1. In the top navigation bar, select **Save**, then select **Apply network configuration**.
- It can take 15 to 45 minutes to update the API Management instance.
+ It can take 15 to 45 minutes to update the API Management instance. The Developer tier has downtime during the process. The Basic and higher SKUs don't have downtime during the process.
After successful deployment, you should see your API Management service's **private** virtual IP address and **public** virtual IP address on the **Overview** blade. For more information about the IP addresses, see [Routing](#routing) in this article.
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
For configurations specific to the *internal* mode, where the endpoints are acce
7. In the top navigation bar, select **Save**, then select **Apply network configuration**.
- It can take 15 to 45 minutes to update the API Management instance.
+It can take 15 to 45 minutes to update the API Management instance. The Developer tier has downtime during the process. The Basic and higher SKUs don't have downtime during the process.
### Enable connectivity using a Resource Manager template (`stv2` compute platform)
The API Management service depends on several Azure services. When API Managemen
* For guidance on custom DNS setup, including forwarding for Azure-provided hostnames, see [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). * Outbound network access on port `53` is required for communication with DNS servers. For more settings, see [Virtual network configuration reference](virtual-network-reference.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you plan to use a custom DNS server(s) for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates). + ## Routing + A load-balanced public IP address (VIP) is reserved to provide access to all service endpoints and resources outside the VNet.
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
+
+ Title: Upcoming Breaking Changes in Azure API Management | Microsoft Docs
+description: A list of all the upcoming breaking changes for Azure API Management
+
+documentationcenter: ''
++++ Last updated : 02/07/2022+++
+# Upcoming breaking changes
+
+The following table lists all the upcoming breaking changes and feature retirements for Azure API Management.
+
+| Change Title | Effective Date |
+|:-|:-|
+| [Resource Provider Source IP Address Update][bc1] | March 31, 2023 |
+
+<!-- Links -->
+[bc1]: ./rp-source-ip-address-change-mar2023.md
api-management Rp Source Ip Address Change Mar2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar2023.md
+
+ Title: Azure API Management IP address change (March 2023) | Microsoft Docs
+description: Azure API Management is updating the source IP address of the resource provider in certain regions. If your service is hosted in a Microsoft Azure Virtual Network, you may need to update network settings to continue managing your service.
+
+documentationcenter: ''
+++ Last updated : 02/07/2022+++
+# Resource Provider source IP address updates (March 2023)
+
+On 31 March 2023, as part of our continuing work to increase the resiliency of API Management services, we're making the resource providers for Azure API Management zone redundant in each region. The IP address that the resource provider uses to communicate with your service will change in seven regions:
+
+| Region | Old IP Address | New IP Address |
+|:-|:--:|:--:|
+| Canada Central | 52.139.20.34 | 20.48.201.76 |
+| Brazil South | 191.233.24.179 | 191.238.73.14 |
+| Germany West Central | 51.116.96.0 | 20.52.94.112 |
+| South Africa North | 102.133.0.79 | 102.37.166.220 |
+| Korea Central | 40.82.157.167 | 20.194.74.240 |
+| Central India | 13.71.49.1 | 20.192.45.112 |
+| South Central US | 20.188.77.119 | 20.97.32.190 |
+
+This change will have NO effect on the availability of your API Management service. However, you **may** have to take the steps described below to configure your API Management service beyond 31 March 2023.
+
+## Is my service affected by this change?
+
+Your service is affected by this change if:
+
+* The API Management service is in one of the seven regions listed in the table above.
+* The API Management service is running inside an Azure virtual network.
+* The Network Security Group (NSG) or User-defined Routes (UDRs) for the virtual network are configured with explicit source IP addresses.
+
+## What is the deadline for the change?
+
+The source IP addresses for the affected regions will be changed on 31 March 2023. Complete all required networking changes before then.
+
+After 31 March 2023, if you prefer not to make changes to your IP addresses, your services will continue to run, but you won't be able to add or remove APIs, change API policies, or otherwise configure your API Management service.
+
+## Can I avoid this sort of change in the future?
+
+Yes, you can.
+
+API Management publishes a _service tag_ that you can use to configure the NSG for your virtual network. The service tag includes information about the source IP addresses that API Management uses to manage your service. For more information on this topic, read [Configure NSG Rules] in the API Management documentation.
+
+## What do I need to do?
+
+Update the NSG security rules that allow the API Management resource provider to communicate with your API Management instance. For detailed instructions on how to manage an NSG, review [Create, change, or delete a network security group] in the Azure Virtual Network documentation.
+
+1. Go to the [Azure portal](https://portal.azure.com) to view your NSGs. Search for and select **Network security groups**.
+2. Select the name of the NSG associated with the virtual network hosting your API Management service.
+3. In the menu bar, choose **Inbound security rules**.
+4. The inbound security rules should already have an entry that mentions a Source address matching the _Old IP Address_ from the table above. If it doesn't, you're not using explicit source IP address filtering, and can skip this update.
+5. Select **Add**.
+6. Fill in the form with the following information:
+
+ 1. Source: **Service Tag**
+ 2. Source Service Tag: **ApiManagement**
+ 3. Source port ranges: __*__
+ 4. Destination: **VirtualNetwork**
+ 5. Destination port ranges: **3443**
+ 6. Protocol: **TCP**
+ 7. Action: **Allow**
+ 8. Priority: Pick a suitable priority to place the new rule next to the existing rule.
+
+ The Name and Description fields can be set to anything you wish. All other fields should be left blank.
+
+7. Select **OK**.
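+If you manage the NSG through the Azure CLI instead of the portal, the same rule can be sketched as follows; the resource group and NSG names are hypothetical, and the command is printed for review rather than executed:

```shell
# Hypothetical names -- substitute your own resource group and NSG.
RG="my-resource-group"
NSG="apim-subnet-nsg"

# Inbound rule mirroring the portal steps above: ApiManagement service tag
# to VirtualNetwork on destination port 3443 over TCP.
CMD="az network nsg rule create \
  --resource-group $RG --nsg-name $NSG \
  --name AllowApiManagementControlPlane \
  --priority 110 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes ApiManagement --source-port-ranges '*' \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges 3443"

# Remove the echo to run the command for real.
echo "$CMD"
```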
+
+In addition, you may have to adjust the network routing for the virtual network to accommodate the new control plane IP addresses. If you've configured a default route (`0.0.0.0/0`) forcing all traffic from the API Management subnet to flow through a firewall instead of directly to the Internet, then additional configuration is required.
+
+If you configured user-defined routes (UDRs) for control plane IP addresses, the new IP addresses must be routed the same way. For more details on the changes necessary to handle network routing of management requests, review [Force tunneling traffic] documentation.
+
+Finally, check for any other systems that may impact the communication from the API Management resource provider to your API Management service subnet. For more information about virtual network configuration, review the [Virtual Network] documentation.
+
+## More Information
+
+* [Virtual Network](/azure/virtual-network)
+* [API Management VNET Reference](../virtual-network-reference.md)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
+
+<!-- Links -->
+[Configure NSG Rules]: ../api-management-using-with-internal-vnet.md#configure-nsg-rules
+[Virtual Network]: /azure/virtual-network
+[Force tunneling traffic]: ../virtual-network-reference.md#force-tunneling-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance
+[Create, change, or delete a network security group]: /azure/virtual-network/manage-network-security-group
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md
Title: Import a Logic App as an API with the Azure portal | Microsoft Docs
-description: This article shows you how to use API Management (APIM) to import Logic App as an API.
+description: This article shows you how to use API Management to import a Logic App (Consumption) resource as an API.
documentationcenter: ''
In this article, you learn how to:
> - Import a Logic App as an API > - Test the API in the Azure portal
+> [!NOTE]
+> API Management supports automated import of a Logic App (Consumption) resource, which runs in the multi-tenant Logic Apps environment. Learn more about [single-tenant versus multi-tenant Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+ ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)-- Make sure there is a Logic App in your subscription that exposes an HTTP endpoint. For more information, [Trigger workflows with HTTP endpoints](../logic-apps/logic-apps-http-endpoint.md)
+- Make sure there is a Consumption plan-based Logic App resource in your subscription that exposes an HTTP endpoint. For more information, see [Trigger workflows with HTTP endpoints](../logic-apps/logic-apps-http-endpoint.md)
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
Title: Set up private endpoint for Azure API Management
+ Title: Set up private endpoint for Azure API Management Preview
description: Learn how to restrict access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link.
To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the
* Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint. * Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md). * Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md).
-* Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
+* Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
| **Storage accounts** | Azure Storage account. It must contain an Azure Files share. | | **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside the Windows container to mount to Azure Storage. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`).|
+ | **Mount path** | Directory inside your file/blob storage that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`).|
::: zone-end ::: zone pivot="container-linux" | Setting | Description |
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. Previously updated : 03/08/2022 Last updated : 03/09/2022 # App Service, Functions, and Logic Apps on Azure Arc (Preview)
If your extension was in the stable version and auto-upgrade-minor-version is se
```azurecli-interactive az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.0 ```+ ### Application services extension v 0.12.1 (March 2022) - Resolved issue with outbound proxy support to enable logging to Log Analytics Workspace
If your extension was in the stable version and auto-upgrade-minor-version is se
az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.1 ```
+### Application services extension v 0.12.2 (March 2022)
+
+- Update to resolve upgrade failures when upgrading from v 0.12.0 when extension name length is over 35 characters
+
+If your extension was in the stable version and auto-upgrade-minor-version is set to true, the extension upgrades automatically. To manually upgrade the extension to the latest version, you can run the command:
+
+```azurecli-interactive
+ az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.2
+```
+ ## Next steps [Create an App Service Kubernetes environment (Preview)](manage-create-arc-environment.md)
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
If the virtual network is in a different subscription than the app, you must ens
### Routes
-There are two types of routing to consider when you configure regional virtual network integration. Application routing defines what traffic is routed from your application and into the virtual network. Network routing is the ability to control how traffic is routed from your virtual network and out.
+There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault references. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic is routed from your virtual network and out.
#### Application routing
-When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound traffic into your virtual network, make sure that **Route All** is enabled.
+Application routing affects all the traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
> [!NOTE]
-> * When **Route All** is enabled, all traffic is subject to the NSGs and UDRs that are applied to your integration subnet. When all traffic routing is enabled, outbound traffic is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
-> * Windows containers don't support routing App Service Key Vault references or pulling custom container images over virtual network integration.
+> * When **Route All** is enabled, all app traffic is subject to the NSGs and UDRs that are applied to your integration subnet. When **Route All** is enabled, outbound traffic is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
> * Regional virtual network integration can't use port 25. Learn [how to configure application routing](./configure-vnet-integration-routing.md).
-We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing WEBSITE_VNET_ROUTE_ALL app setting can still be used, and you can enable all traffic routing with either setting.
+We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting.
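The **Route All** configuration setting can also be toggled from the command line. A minimal sketch using the Azure CLI, assuming an existing app (substitute your own resource names):

```shell
# Enable routing of all outbound traffic into the integration subnet
# (equivalent to turning on the Route All setting in the portal).
az webapp config set --resource-group <resource-group> --name <app-name> \
    --vnet-route-all-enabled true
```

The legacy `WEBSITE_VNET_ROUTE_ALL` app setting has the same effect, but the configuration setting is the one audited by the built-in policy.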
#### Configuration routing
-When you are using virtual network integration, you can configure how parts of the configuration traffic is managed. By default, the mentioned configurations will go directly to the internet unless you actively configure it to be routed through the virtual network integration.
+When you are using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but you can actively configure the individual components to be routed through the virtual network integration.
+
+> [!NOTE]
+> * Windows containers don't support routing App Service Key Vault references or pulling custom container images over virtual network integration.
+> * Backup/restore to private storage accounts is currently not supported.
+> * Configuring SSL/TLS certificates from private Key Vaults is currently not supported.
+> * Sending diagnostic logs to private storage accounts is currently not supported.
##### Content storage
To route content storage traffic through the virtual network integration, you ne
When using custom containers for Linux, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must add an app setting named `WEBSITE_PULL_IMAGE_OVER_VNET` with the value `true`.
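As a sketch, the app setting mentioned above can be added with the Azure CLI (hypothetical resource names):

```shell
# Route custom container image pulls through the virtual network integration.
az webapp config appsettings set --resource-group <resource-group> --name <app-name> \
    --settings WEBSITE_PULL_IMAGE_OVER_VNET=true
```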
+##### App settings using Key Vault references
+
+App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration.
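For context, a Key Vault reference is an app setting whose value points at a secret rather than containing it. A minimal sketch with the Azure CLI (hypothetical app, vault, and secret names):

```shell
# The app setting value is a Key Vault reference; App Service resolves the secret
# at runtime, trying the virtual network integration if the vault blocks public traffic.
az webapp config appsettings set --resource-group <resource-group> --name <app-name> \
    --settings MySecret="@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
```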
+ #### Network routing You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. When **Route All** is disabled in [application routing](#application-routing), only private traffic (RFC1918) is affected by your route tables. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Title: Deploy a Node.js web app using MongoDB to Azure
-description: This article shows you have to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Cosmos DB.
+description: This article shows you how to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Cosmos DB.
Previously updated : 01/31/2022 Last updated : 03/07/2022 ms.role: developer ms.devlang: javascript-+ # Deploy a Node.js + MongoDB web app to Azure
-In this tutorial, you'll deploy a sample **Express.js** app using a **MongoDB** database to Azure. The Express.js app will be hosted in Azure App Service which supports hosting Node.js apps in both Linux (Node versions 12, 14, and 16) and Windows (versions 12 and 14) server environments. The MongoDB database will be hosted in Azure Cosmos DB, a cloud native database offering a [100% MongoDB compatible API](../cosmos-db/mongodb/mongodb-introduction.md).
+In this tutorial, you'll deploy a sample **Express.js** app using a **MongoDB** database to Azure. The Express.js app will be hosted in Azure App Service, which supports hosting Node.js apps in both Linux (Node versions 12, 14, and 16) and Windows (versions 12 and 14) server environments. The MongoDB database will be hosted in Azure Cosmos DB, a cloud native database offering a [100% MongoDB compatible API](../cosmos-db/mongodb/mongodb-introduction.md).
:::image type="content" source="./media/tutorial-nodejs-mongodb-app/app-diagram.png" alt-text="A diagram showing how the Express.js app will be deployed to Azure App Service and the MongoDB data will be hosted inside of Azure Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/app-diagram-large.png":::
-This article assumes you are already familiar with [Node.js development](/learn/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you do not have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/).
+This article assumes you're already familiar with [Node.js development](/learn/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/).
## Sample application
git clone https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-ap
Follow these steps to run the application locally:
-* Install the package dependencies by running `npm install`
-* Copy the `.env.sample` file to `.env` and populate the DATABASE_URL value with your MongoDB URL (for example *mongodb://localhost:27017/*)
-* Start the application using `npm start`
-* To view the app, browse to `http://localhost:3000`
+* Install the package dependencies by running `npm install`.
+* Copy the `.env.sample` file to `.env` and populate the DATABASE_URL value with your MongoDB URL (for example *mongodb://localhost:27017/*).
+* Start the application using `npm start`.
+* To view the app, browse to `http://localhost:3000`.
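The steps above can be sketched as a short shell session (assuming you're in the cloned repository root and MongoDB is running locally):

```shell
npm install            # install package dependencies
cp .env.sample .env    # then edit DATABASE_URL, for example mongodb://localhost:27017/
npm start              # app is served at http://localhost:3000
```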
## 1 - Create the Azure App Service
-Azure App Service is used to host the Express.js web app. When setting up the App Service for the application, you will specify:
+Azure App Service is used to host the Express.js web app. When setting up the App Service for the application, you'll specify:
-* The **Name** for the web app. This name is used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
-* The **Runtime** for the app. This is where you select the version of Node to use for your app.
+* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
+* The **Runtime** for the app. It's where you select the version of Node to use for your app.
* The **App Service plan** which defines the compute resources (CPU, memory) available for the application.
-* The **Resource Group** for the app. A resource group lets you group all of the Azure resources needed for the application together in a logical container.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
-Azure resources can be created using the [Azure portal](https://portal.azure.com/), VS Code using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
+Create Azure resources using the [Azure portal](https://portal.azure.com/), VS Code with the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
### [Azure portal](#tab/azure-portal)
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
| [!INCLUDE [Create app service step 1](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1.png"::: | | [!INCLUDE [Create app service step 2](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot showing the create button on the App Services page used to create a new web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2.png"::: | | [!INCLUDE [Create app service step 3](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of the Spec Picker dialog that allows you to select the App Service plan to use for your web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of the Spec Picker dialog that lets you select the App Service plan to use for your web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.png"::: |
| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the main web app create page showing the button to select on to create your web app in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5.png"::: | ### [VS Code](#tab/vscode-aztools)
To create Azure resources in VS Code, you must have the [Azure Tools extension p
## 2 - Create an Azure Cosmos DB in MongoDB compatibility mode
-Azure Cosmos DB is a fully managed NoSQL database for modern app development. Among its features is a 100% MongoDB compatible API allowing you to use your existing MongoDB tools, packages, and applications with Cosmos DB.
+Azure Cosmos DB is a fully managed NoSQL database for modern app development. Among its features is a 100% MongoDB compatible API allowing you to use your existing MongoDB tools, packages, and applications with Cosmos DB.
### [Azure portal](#tab/azure-portal)
-You must be signed in to the [Azure portal](https://portal.azure.com/) to complete these steps to create a Cosmos DB.
+You must sign in to the [Azure portal](https://portal.azure.com/) to complete these steps and create a Cosmos DB.
| Instructions | Screenshot | |:-|--:|
You must be signed in to the [Azure portal](https://portal.azure.com/) to comple
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Create Cosmos DB step 1](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1-240px.png" alt-text="A screenshot showing the databases component of the Azure Tools VS Code extension and the location of the button to create a new database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.png"::: |
+| [!INCLUDE [Create Cosmos DB step 1](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1-240px.png" alt-text="A screenshot showing the database component of the Azure Tools VS Code extension and the location of the button to create a new database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.png"::: |
| [!INCLUDE [Create Cosmos DB step 2](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2-240px.png" alt-text="A screenshot showing the dialog box used to select the subscription for the new database in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2.png"::: | | [!INCLUDE [Create Cosmos DB step 3](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3-240px.png" alt-text="A screenshot showing the dialog box used to select the type of database you want to create in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3.png"::: | | [!INCLUDE [Create Cosmos DB step 4](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4-240px.png" alt-text="A screenshot of dialog box used to enter the name of the new database in Visual Studio Code." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4.png"::: |
You must be signed in to the [Azure portal](https://portal.azure.com/) to comple
## 3 - Connect your App Service to your Cosmos DB
-To connect to your Cosmos DB database, you need to provide the connection string for the database to your application. This is done in the sample application by reading the `DATABASE_URL` environment variable. When running locally, the sample application uses the [dotenv package](https://www.npmjs.com/package/dotenv) to read the connection string value from the `.env` file.
+To connect to your Cosmos DB database, you need to provide the connection string for the database to your application. The sample application does this by reading the `DATABASE_URL` environment variable. When running locally, the sample application uses the [dotenv package](https://www.npmjs.com/package/dotenv) to read the connection string value from the `.env` file.
-When running in Azure, configuration values like connection strings can be stored in the *application settings* of the App Service hosting the web app. These values are then made available to your application as environment variables during runtime. In this way, the application accesses the connection string from `process.env` the same way whether being run locally or in Azure. Further, this eliminates the need to manage and deploy environment specific config files with your application.
+When running in Azure, configuration values like connection strings can be stored in the *application settings* of the App Service hosting the web app. These values are then made available to your application as environment variables during runtime. In this way, the application uses the connection string from `process.env` the same way whether it runs locally or in Azure. Further, it eliminates the need to manage and deploy environment-specific config files with your application.
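As a sketch, the connection string can also be stored as an application setting with the Azure CLI (hypothetical names):

```shell
# Store the Cosmos DB connection string where the app reads it from process.env.
az webapp config appsettings set --resource-group <resource-group> --name <app-name> \
    --settings DATABASE_URL="<cosmos-db-connection-string>"
```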
### [Azure portal](#tab/azure-portal) | Instructions | Screenshot | |:-|--:| | [!INCLUDE [Connection string step 1](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1-240px.png" alt-text="A screenshot showing the location of the Cosmos DB connection string on the Cosmos DB quick start page." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1.png"::: |
-| [!INCLUDE [Connection string step 2](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2-240px.png" alt-text="A screenshot showing how to search for and navigate to the App Service where the connection string needs to store the connection string." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.png"::: |
-| [!INCLUDE [Connection string step 3](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3-240px.png" alt-text="A screenshot showing how to access the Application settings within an App Service." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.png"::: |
+| [!INCLUDE [Connection string step 2](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2-240px.png" alt-text="A screenshot showing how to search for and go to the App Service where the connection string needs to be stored." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.png"::: |
+| [!INCLUDE [Connection string step 3](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3-240px.png" alt-text="A screenshot showing how to use the Application settings within an App Service." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.png"::: |
| [!INCLUDE [Connection string step 4](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4-240px.png" alt-text="A screenshot showing the dialog used to set an application setting in Azure App Service." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4.png"::: | ### [VS Code](#tab/vscode-aztools)
Use the form elements in the application to add and complete tasks.
## 6 - Configure and view application logs
-Azure App Service captures all messages logged to the console to assist you in diagnosing issues with your application. The sample app outputs console log messages in each of its endpoints to demonstrate this capability. For example, the `get` endpoint outputs a message about the number of tasks retrieved from the database and an error message if something goes wrong.
+Azure App Service captures all messages logged to the console to assist you in diagnosing issues with your application. The sample app outputs console log messages in each of its endpoints to demonstrate this capability. For example, the `get` endpoint outputs a message about the number of tasks retrieved from the database, and an error message if something goes wrong.
:::code language="javascript" source="~/msdocs-nodejs-mongodb-azure-sample-app/routes/index.js" range="7-21" highlight="8,12":::
The contents of the App Service diagnostic logs can be reviewed in the Azure por
## 7 - Inspect deployed files using Kudu
-Azure App Service provides a web-based diagnostics console named [Kudu](./resources-kudu.md) that allows you to examine the server hosting environment for your web app. Using Kudu, you can view the files deployed to Azure, review the deployment history of the application, and even open an SSH session into the hosting environment.
+Azure App Service provides a web-based diagnostics console named [Kudu](./resources-kudu.md) that lets you examine the server hosting environment for your web app. Using Kudu, you can view the files deployed to Azure, review the deployment history of the application, and even open an SSH session into the hosting environment.
-To access Kudu, navigate to one of the following URLs. You will need to sign into the Kudu site with your Azure credentials.
+To access Kudu, go to one of the following URLs. You'll need to sign in to the Kudu site with your Azure credentials.
-* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`
-* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`
+* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`.
+* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`.
From the main page in Kudu, you can access information about the application hosting environment, app settings, deployments, and browse the files in the wwwroot directory.
Selecting the *Deployments* link under the REST API header will show you a histo
![A screenshot of the deployments JSON in the Kudu SCM app showing the history of deployments to this web app.](./media/tutorial-nodejs-mongodb-app/kudu-deployments-list.png)
-Selecting the *Site wwwroot* link under the Browse Directory heading allows you to browse and view the files on the web server.
+Selecting the *Site wwwroot* link under the Browse Directory heading lets you browse and view the files on the web server.
-![A screenshot of files in the wwwroot directory showing how Kudu allows you to see what has been deployed to Azure.](./media/tutorial-nodejs-mongodb-app/kudu-wwwroot-files.png)
+![A screenshot of files in the wwwroot directory showing how Kudu lets you see what has been deployed to Azure.](./media/tutorial-nodejs-mongodb-app/kudu-wwwroot-files.png)
## Clean up resources
-When you are finished, you can delete all of the resources from Azure by deleting the resource group for the application.
+When you're finished, you can delete all the resources from Azure by deleting the resource group for the application.
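If you prefer the command line over the portal or VS Code steps, a single Azure CLI command deletes the resource group and everything in it (a sketch; substitute your resource group name):

```shell
# --yes skips the confirmation prompt; --no-wait returns without blocking on the deletion.
az group delete --name <resource-group> --yes --no-wait
```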
### [Azure portal](#tab/azure-portal)
-Follow these steps while signed-in to the Azure portal to delete a resource group.
+Follow these steps while you're signed in to the Azure portal to delete a resource group.
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 1](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot showing how to search for and go to a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.png"::: |
| [!INCLUDE [Remove resource group Azure portal 2](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2.png"::: | | [!INCLUDE [Remove resource group Azure portal 3](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3.png"::: |
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
description: Learn how to get a PHP app working in Azure, with connection to a M
ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Previously updated : 06/15/2020- Last updated : 03/04/2022+ zone_pivot_groups: app-service-platform-windows-linux
In this tutorial, you learn how to:
> * Stream diagnostic logs from Azure > * Manage the app in the Azure portal ## Prerequisites
mysql -u root -p
If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
-If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
+If your command runs successfully, then your MySQL server is running. If not, ensure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
### Create a database locally
In the terminal window, `cd` to a working directory.
cd laravel-tasks ```
-1. Make sure the default branch is `main`.
+1. Ensure the default branch is `main`.
```bash
git branch -m main
```

> [!TIP]
- > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
+ > The branch name change isn't required by App Service. But since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
1. Install the required packages.
For information on how Laravel uses the _.env_ file, see [Laravel Environment Co
php artisan serve ```
-1. Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
+1. Go to `http://localhost:8000` in a browser. Add a few tasks in the page.
![PHP connects successfully to MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-1. To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, enter `Ctrl + C` in the terminal.
## Create MySQL in Azure
-In this step, you create a MySQL database in [Azure Database for MySQL](../mysql/index.yml). Later, you configure the PHP application to connect to this database.
+In this step, you create a MySQL database in [Azure Database for MySQL](../mysql/index.yml). Later, you set up the PHP application to connect to this database.
### Create a resource group
When the MySQL server is created, the Azure CLI shows information similar to the
GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser'; ```
-1. Exit the server connection by typing `quit`.
+1. Exit the server connection by entering `quit`.
```sql quit
MYSQL_SSL=true
Save the changes.

> [!TIP]
-> To secure your MySQL connection information, this file is already excluded from the Git repository (See _.gitignore_ in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
+> To secure your MySQL connection information, this file is already excluded from the Git repository (see _.gitignore_ in the repository root). Later, you learn how to set up the environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
> ### Configure TLS/SSL certificate
The certificate `BaltimoreCyberTrustRoot.crt.pem` is provided in the repository
php artisan serve --env=production ```
-1. Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+1. Go to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
1. Add a few tasks in the page. ![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-1. To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, enter `Ctrl + C` in the terminal.
### Commit your changes
By default, Azure App Service points the root virtual application path (_/_) to
::: zone pivot="platform-linux"
-[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
+[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. But you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
For more information, see [Change site root](configure-language-php.md#change-site-root).
Congratulations, you're running a data-driven PHP app in Azure App Service.
In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
-For the tasks scenario, you modify the application so that you can mark a task as complete.
+For the tasks scenario, you change the application so that you can mark a task as complete.
### Add a column
-1. In the local terminal window, navigate to the root of the Git repository.
+1. In the local terminal window, go to the root of the Git repository.
1. Generate a new database migration for the `tasks` table:
For the tasks scenario, you modify the application so that you can mark a task a
php artisan serve ```
-1. To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
+1. To see the task status change, go to `http://localhost:8000` and select the checkbox.
![Added check box to task](./media/tutorial-php-mysql-app/complete-checkbox.png)
-1. To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, enter `Ctrl + C` in the terminal.
### Publish changes to Azure
For the tasks scenario, you modify the application so that you can mark a task a
git push azure main ```
-1. Once the `git push` is complete, navigate to the Azure app and test the new functionality.
+1. Once the `git push` is complete, go to the Azure app and test the new functionality.
![Model and database changes published to Azure](media/tutorial-php-mysql-app/complete-checkbox-published.png)
-If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+If you added any tasks, they're retained in the database. Updates to the data schema leave existing data intact.
## Stream diagnostic logs
az webapp log tail --name <app_name> --resource-group myResourceGroup
Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds.
-To stop log streaming at any time, type `Ctrl`+`C`.
+To stop log streaming at any time, enter `Ctrl`+`C`.
::: zone-end
To stop log streaming at any time, type `Ctrl`+`C`.
1. Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
-1. From the left menu, click **App Services**, and then click the name of your Azure app.
+1. From the left menu, select **App Services**, and then select the name of your Azure app.
![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
- You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart, browse, and delete.
+ You see your app's Overview page. On this page, you can do basic management tasks like stop, start, restart, browse, and delete.
The left menu provides pages for configuring your app.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django app with Postgres'
description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework and the app is hosted on Azure App Service on Linux. ms.devlang: python Previously updated : 11/30/2021- Last updated : 02/22/2022+ zone_pivot_groups: postgres-server-options # Tutorial: Deploy a Django web app with PostgreSQL in Azure App Service ::: zone pivot="postgres-single-server"
-This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an Azure Database for Postgres database. You can also try the PostgresSQL Flexible Server by selecting the option above. Flexible Server provides a simpler deployment mechanism and lower ongoing costs.
+This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an Azure Database for Postgres database. You can also select the option above to try the PostgreSQL Flexible Server. Flexible Server provides a simpler deployment mechanism and lower ongoing costs.
In this tutorial, you use the Azure CLI to complete the following tasks: > [!div class="checklist"]
-> * Set up your initial environment with Python and the Azure CLI
-> * Create an Azure Database for PostgreSQL database
-> * Deploy code to Azure App Service and connect to PostgreSQL
-> * Update your code and redeploy
-> * View diagnostic logs
-> * Manage the web app in the Azure portal
+> * Set up your initial environment with Python and the Azure CLI.
+> * Create an Azure Database for PostgreSQL database.
+> * Deploy code to Azure App Service and connect to PostgreSQL.
+> * Update your code and redeploy.
+> * View diagnostic logs.
+> * Manage the web app in the Azure portal.
You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal?pivots=postgres-single-server).
You can also use the [Azure portal version of this tutorial](/azure/developer/py
::: zone pivot="postgres-flexible-server"
-This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible Server](../postgresql/flexible-server/index.yml) database. If you cannot use PostgreSQL Flexible Server, then select the Single Server option above.
+This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible Server](../postgresql/flexible-server/index.yml) database. If you can't use PostgreSQL Flexible Server, then select the Single Server option above.
In this tutorial, you use the Azure CLI to complete the following tasks: > [!div class="checklist"]
-> * Set up your initial environment with Python and the Azure CLI
-> * Create an Azure Database for PostgreSQL Flexible Server database
-> * Deploy code to Azure App Service and connect to PostgreSQL Flexible Server
-> * Update your code and redeploy
-> * View diagnostic logs
-> * Manage the web app in the Azure portal
+> * Set up your initial environment with Python and the Azure CLI.
+> * Create an Azure Database for PostgreSQL Flexible Server database.
+> * Deploy code to Azure App Service and connect to PostgreSQL Flexible Server.
+> * Update your code and redeploy.
+> * View diagnostic logs.
+> * Manage the web app in the Azure portal.
You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal?pivots=postgres-flexible-server).
You can also use the [Azure portal version of this tutorial](/azure/developer/py
1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). 1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>.
-1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.18.0 or higher, with which you run commands in any shell to provision and configure Azure resources.
+1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.18.0 or higher, with which you run commands in any shell to set up and configure Azure resources.
-Open a terminal window and check your Python version is 3.6 or higher:
+Open a terminal window and check that your Python version is 3.6 or higher:
# [Bash](#tab/bash)
az --version
If you need to upgrade, try the `az upgrade` command (requires version 2.11+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>.
-Then sign in to Azure through the CLI:
+Then sign in to Azure using the CLI:
```azurecli az login
Clone the sample repository:
git clone https://github.com/Azure-Samples/djangoapp ```
-Then navigate into that folder:
+Then go into that folder:
```terminal cd djangoapp
cd djangoapp
::: zone pivot="postgres-flexible-server"
-For Flexible Server, use the flexible-server branch of the sample, which contains a few necessary changes, such as how the database server URL is set and adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible Server.
+For Flexible Server, use the flexible-server branch of the sample that contains a few necessary changes such as:
+- How the database server URL is set.
+- Adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible Server.
```terminal git checkout flexible-server
git checkout flexible-server
# [Download](#tab/download)
-Visit [https://github.com/Azure-Samples/djangoapp](https://github.com/Azure-Samples/djangoapp).
+Go to [https://github.com/Azure-Samples/djangoapp](https://github.com/Azure-Samples/djangoapp).
::: zone pivot="postgres-flexible-server"

For Flexible Server, select the branches control that says "master" and select the flexible-server branch instead.
Install the `db-up` extension for the Azure CLI:
az extension add --name db-up ```
-If the `az` command is not recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
+If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
Then create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az_postgres_up) command:
az postgres up --resource-group DjangoPostgres-tutorial-rg --location centralus
```
- **Replace** *\<postgres-server-name>* with a name that's **unique across all Azure** (the server endpoint becomes `https://<postgres-server-name>.postgres.database.azure.com`). A good pattern is to use a combination of your company name and another unique value.
-- For *\<admin-username>* and *\<admin-password>*, specify credentials to create an administrator user for this Postgres server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, #, %). The password cannot contain username.
-- Do not use the `$` character in the username or password. Later you create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps.
+- For *\<admin-username>* and *\<admin-password>*, specify credentials to create an admin user for this Postgres server.
+- The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*.
+- The password must contain 8 to 128 characters from at least three of the following categories:
+ - English uppercase letters.
+ - English lowercase letters.
+ - Numbers (0 through 9).
+ - Non-alphanumeric characters (for example, !, #, %).
+- The password can't contain the username.
+- Don't use the `$` character in the username or password. Later you create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps.
- The B_Gen5_1 (Basic, Gen5, 1 core) [pricing tier](../postgresql/concepts-pricing-tiers.md) used here is the least expensive. For production databases, omit the `--sku-name` argument to use the GP_Gen5_2 (General Purpose, Gen 5, 2 cores) tier instead.
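The password categories above can be checked mechanically. The following is a hypothetical sketch (not part of the tutorial; `is_valid_admin_password` is an illustrative name, not an Azure API):

```python
import string

def is_valid_admin_password(password: str, username: str) -> bool:
    """Hypothetical helper mirroring the documented rules; not an official validator."""
    if not 8 <= len(password) <= 128:
        return False
    if username and username in password:
        return False
    if "$" in password:  # the tutorial advises against '$' in credentials
        return False
    # Count how many of the four character categories the password covers.
    categories = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(categories) >= 3  # must cover at least three categories

print(is_valid_admin_password("P@ssw0rd123", "phpappuser"))  # → True
```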
-This command performs the following actions, which may take a few minutes:
+This command does the following actions, which may take a few minutes:
- Create a [resource group](../azure-resource-manager/management/overview.md#terminology) called `DjangoPostgres-tutorial-rg`, if it doesn't already exist.
- Create a Postgres server named by the `--server-name` argument.
-- Create an administrator account using the `--admin-user` and `--admin-password` arguments. You can omit these arguments to allow the command to generate unique credentials for you.
+- Create an admin account using the `--admin-user` and `--admin-password` arguments. You can omit these arguments to allow the command to generate unique credentials for you.
- Create a `pollsdb` database as named by the `--database-name` argument.
-- Enable access from your local IP address.
-- Enable access from Azure services.
+- Allow access from your local IP address.
+- Allow access from Azure services.
- Create a database user with access to the `pollsdb` database.

You can do all the steps separately with other `az postgres` and `psql` commands, but `az postgres up` does all the steps together.
When the command completes, it outputs a JSON object that contains different con
::: zone pivot="postgres-flexible-server"
-1. Enable parameters caching with the Azure CLI so you don't need to provide those parameters with every command. (Cached values are saved in the *.azure* folder.)
+1. Turn on parameter caching with the Azure CLI so you don't need to provide those parameters with every command. (Cached values are saved in the *.azure* folder.)
```azurecli az config param-persist on
When the command completes, it outputs a JSON object that contains different con
az postgres flexible-server create --sku-name Standard_B1ms --public-access all ```
- If the `az` command is not recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
+ If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#1-set-up-your-initial-environment).
- The [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create) command performs the following actions, which take a few minutes:
+ The [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create) command does the following actions, which take a few minutes:
    - Create a default resource group if there's not a cached name already.
    - Create a PostgreSQL Flexible Server:
        - By default, the command uses a generated name like `server383813186`. You can specify your own name with the `--name` parameter. The name must be unique across all of Azure.
        - The command uses the lowest-cost `Standard_B1ms` pricing tier. Omit the `--sku-name` argument to use the default `Standard_D2s_v3` tier.
        - The command uses the resource group and location cached from the previous `az group create` command, which in this example is the resource group `Python-Django-PGFlex-rg` in the `centralus` region.
- - Create an administrator account with a username and password. You can specify these values directly with the `--admin-user` and `--admin-password` parameters.
+ - Create an admin account with a username and password. You can specify these values directly with the `--admin-user` and `--admin-password` parameters.
- Create a database named `flexibleserverdb` by default. You can specify a database name with the `--database-name` parameter.
- - Enables complete public access, which you can control using the `--public-access` parameter.
+ - Allow complete public access, which you can control using the `--public-access` parameter.
-1. When the command completes, **copy the command's JSON output to a file** as you need values from the output later in this tutorial, specifically the host, username, and password, along with the database name.
+1. When the command completes, **copy the command's JSON output to a file** because you need values from the output later in this tutorial, specifically the host, username, and password, along with the database name.
::: zone-end
In this section, you create the app host in App Service, connect this app to the
::: zone pivot="postgres-single-server"
-In the terminal, make sure you're in the *djangoapp* repository folder that contains the app code.
+In the terminal, ensure you're in the *djangoapp* repository folder that contains the app code.
Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command:
az webapp up --resource-group DjangoPostgres-tutorial-rg --location centralus --
- For the `--location` argument, use the same location as you did for the database in the previous section.
- **Replace** *\<app-name>* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.
-This command performs the following actions, which may take a few minutes:
+This command does the following actions, which may take a few minutes:
<!-- No it doesn't. az webapp up doesn't respect --resource-group -->
- Create the [resource group](../azure-resource-manager/management/overview.md#terminology) if it doesn't already exist. (In this command you use the same resource group in which you created the database earlier.)
- Create the [App Service plan](overview-hosting-plans.md) *DjangoPostgres-tutorial-plan* in the Basic pricing tier (B1), if it doesn't exist. `--plan` and `--sku` are optional.
- Create the App Service app if it doesn't exist.
-- Enable default logging for the app, if not already enabled.
+- Allow default logging for the app, if not already enabled.
- Upload the repository using ZIP deployment with build automation enabled.
-- Cache common parameters, such as the name of the resource group and App Service plan, into the file *.azure/config*. As a result, you don't need to specify all the same parameter with later commands. For example, to redeploy the app after making changes, you can just run `az webapp up` again without any parameters. Commands that come from CLI extensions, such as `az postgres up`, however, do not at present use the cache, which is why you needed to specify the resource group and location here with the initial use of `az webapp up`.
+- Cache common parameters, such as the name of the resource group and App Service plan, into the file *.azure/config*. As a result, you don't need to specify all the same parameters again with later commands. For example, to redeploy the app after making changes, you can just run `az webapp up` again without any parameters. Commands that come from CLI extensions, such as `az postgres up`, don't currently use the cache, which is why you must specify the resource group and location here with the initial use of `az webapp up`.
::: zone-end

::: zone pivot="postgres-flexible-server"
-1. In the terminal, make sure you're in the *djangoapp* repository folder that contains the app code.
+1. In the terminal, ensure you're in the *djangoapp* repository folder that contains the app code.
1. Switch to the sample app's `flexible-server` branch. This branch contains specific configuration needed for PostgreSQL Flexible Server:
This command performs the following actions, which may take a few minutes:
``` <!-- without --sku creates PremiumV2 plan -->
- This command performs the following actions, which may take a few minutes, using resource group and location cached from the previous `az group create` command (the group `Python-Django-PGFlex-rg` in the `centralus` region in this example).
+ This command does the following actions, which may take a few minutes, using resource group and location cached from the previous `az group create` command (the group `Python-Django-PGFlex-rg` in the `centralus` region in this example).
<!-- No it doesn't. az webapp up doesn't respect --resource-group -->
    - Create an [App Service plan](overview-hosting-plans.md) in the Basic pricing tier (B1). You can omit `--sku` to use default values.
    - Create the App Service app.
- - Enable default logging for the app.
+ - Allow default logging for the app.
    - Upload the repository using ZIP deployment with build automation enabled.

::: zone-end
-Upon successful deployment, the command generates JSON output like the following example:
+When deployed successfully, the command generates JSON output like the following example:
![Example az webapp up command output](./media/tutorial-python-postgresql-app/az-webapp-up-output.png)
Having issues? Refer first to the [Troubleshooting guide](configure-language-pyt
With the code now deployed to App Service, the next step is to connect the app to the Postgres database in Azure.
-The app code expects to find database information in four environment variables named `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`.
+The app code expects to find database information in four environment variables named `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`.
To set environment variables in App Service, create "app settings" with the following [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
az webapp config appsettings set --settings DBHOST="<postgres-server-name>" DBUS
```
- Replace *\<postgres-server-name>* with the name you used earlier with the `az postgres up` command. The code in *azuresite/production.py* automatically appends `.postgres.database.azure.com` to create the full Postgres server URL.
-- Replace *\<username>* and *\<password>* with the administrator credentials that you used with the earlier `az postgres up` command, or those that `az postgres up` generated for you. The code in *azuresite/production.py* automatically constructs the full Postgres username from `DBUSER` and `DBHOST`, so don't include the `@server` portion. (Also, as noted earlier, you should not use the `$` character in either value as it has a special meaning for Linux environment variables.)
+- Replace *\<username>* and *\<password>* with the admin credentials that you used with the earlier `az postgres up` command, or those that `az postgres up` generated for you. The code in *azuresite/production.py* automatically constructs the full Postgres username from `DBUSER` and `DBHOST`, so don't include the `@server` portion. (Also, as noted earlier, you shouldn't use the `$` character in either value because it has a special meaning for Linux environment variables.)
- The resource group and app names are drawn from the cached values in the *.azure/config* file.

::: zone-end
Also replace `flexibleserverdb` with the database name if you changed it with th
::: zone-end
-In your Python code, you access these settings as environment variables with statements like `os.environ.get('DBHOST')`. For more information, see [Access environment variables](configure-language-python.md#access-environment-variables).
+In your Python code, you use these settings as environment variables with statements like `os.environ.get('DBHOST')`. For more information, see [Access environment variables](configure-language-python.md#access-environment-variables).
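As a hedged sketch of that pattern (the variable names come from the section above; `pg_connection_settings` and its exact assembly logic are illustrative, and the sample's actual *azuresite/production.py* may differ):

```python
import os

def pg_connection_settings(env=None):
    """Illustrative helper: assemble full Postgres host and username
    from the four App Service app settings."""
    env = os.environ if env is None else env
    host = env.get("DBHOST", "")
    return {
        # The sample appends the Azure Postgres domain to the short server name.
        "HOST": f"{host}.postgres.database.azure.com",
        # The full Postgres username is <user>@<server>, built from DBUSER and DBHOST.
        "USER": f"{env.get('DBUSER', '')}@{host}",
        "NAME": env.get("DBNAME", ""),
        "PASSWORD": env.get("DBPASS", ""),
    }

cfg = pg_connection_settings({"DBHOST": "myserver", "DBUSER": "admin",
                              "DBNAME": "pollsdb", "DBPASS": "secret"})
print(cfg["USER"])  # → admin@myserver
```

Passing a dict here stands in for reading the real environment; in App Service the same function would read the app settings through `os.environ`.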
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
-> [!NOTE]
-> If you want to try an alternative approach to connect your app to the Postgres database in Azure, see the [Service Connector version](../service-connector/tutorial-django-webapp-postgres-cli.md) of this tutorial. Service Connector is a new Azure service that is currently in public preview. [Section 4.2](../service-connector/tutorial-django-webapp-postgres-cli.md#42-configure-environment-variables-to-connect-the-database) of that tutorial introduces a simplified process for creating the connection.
- ### 4.3 Run Django database migrations
-Django database migrations ensure that the schema in the PostgreSQL on Azure database match those described in your code.
+Django database migrations ensure that the schema in the PostgreSQL on Azure database matches those described in your code.
1. Run `az webapp ssh` to open an SSH session for the web app in the browser:
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
az webapp ssh ```
- If you cannot connect to the SSH session, then the app itself has failed to start. [Check the diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you haven't created the necessary app settings in the previous section, the logs will indicate `KeyError: 'DBNAME'`.
+ If you can't connect to the SSH session, then the app itself has failed to start. [Check the diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you haven't created the necessary app settings in the previous section, the logs indicate `KeyError: 'DBNAME'`.
1. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**):
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
If you encounter any errors related to connecting to the database, check the values of the application settings created in the previous section.
-1. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `Pollsdb1` for the password.
+1. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `Pollsdb1` for the password.
-1. If you see an error that the database is locked, make sure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command cannot communicate with the database, resulting in the error.
+1. If you see an error that the database is locked, ensure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command can't communicate with the database, resulting in the error.
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
Having issues? Refer first to the [Troubleshooting guide](configure-language-pyt
az webapp browse ```
- If you see "Application Error", then it's likely that you either didn't create the required settings in the previous step, [Configure environment variables to connect the database](#42-configure-environment-variables-to-connect-the-database), or that those value contain errors. Run the command `az webapp config appsettings list` to check the settings. You can also [check the diagnostic logs](#6-stream-diagnostic-logs) to see specific errors during app startup. For example, if you didn't create the settings, the logs will show the error, `KeyError: 'DBNAME'`.
+ If you see "Application Error", then it's likely that you either didn't create the required settings in the previous step, [Configure environment variables to connect the database](#42-configure-environment-variables-to-connect-the-database), or that those values contain errors. Run the command `az webapp config appsettings list` to check the settings. You can also [check the diagnostic logs](#6-stream-diagnostic-logs) to see specific errors during app startup. For example, if you didn't create the settings, the logs show the error `KeyError: 'DBNAME'`.
After updating the settings to correct any errors, give the app a minute to restart, then refresh the browser.
1. Browse to the web app's admin page by appending `/admin` to the URL, for example, `http://<app-name>.azurewebsites.net/admin`. Sign in using Django superuser credentials from the previous section (`root` and `Pollsdb1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
-1. Return to the main the website (`http://<app-name>.azurewebsites.net`) to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
+1. Return to the main website (`http://<app-name>.azurewebsites.net`) to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Postgres database.
Test the app locally with the following steps:
1. Stop the Django server by pressing **Ctrl**+**C**.
-When running locally, the app is using a local Sqlite3 database and doesn't interfere with your production database. You can also use a local PostgreSQL database, if desired, to better simulate your production environment.
+When you run the app locally, it uses a local SQLite3 database and doesn't interfere with your production database. If needed, to better simulate your production environment, you can also use a local PostgreSQL database.
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
### 5.5 Review app in production
-Browse to the app again(using `az webapp browse` or navigating to `http://<app-name>.azurewebsites.net`)and test the app again in production. (Because you changed only the length of a database field, the change is only noticeable if you try to enter a longer response when creating a question.)
+Browse to the app again (using `az webapp browse` or navigating to `http://<app-name>.azurewebsites.net`) and test the app again in production. (Because you changed only the length of a database field, the change is only noticeable if you enter a longer response when creating a question.)
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
## 6. Stream diagnostic logs
-You can access the console logs generated from inside the container that hosts the app on Azure.
+You can use the console logs generated from inside the container that hosts the app on Azure.
Run the following Azure CLI command to see the log stream. This command uses parameters cached in the *.azure/config* file.
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
## 8. Clean up resources
-If you'd like to keep the app or continue to the additional tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges you can delete the resource group created for this tutorial:
+If you'd like to keep the app or continue to the other tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges, you can delete the resource group created for this tutorial:
```azurecli
az group delete --name Python-Django-PGFlex-rg --no-wait
```
-By deleting the resource group, you also deallocate and delete all the resources contained within it. Be sure you no longer need the resources in the group before using the command.
+If you delete the resource group, you also deallocate and delete all the resources contained within it. Ensure you no longer need the resources in the group before using the command.
Deleting all the resources can take some time. The `--no-wait` argument allows the command to return immediately.
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app:
> [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/basic-concepts.md
Azure Attestation provides a regional shared provider in every available region.
| South East Asia | `https://sharedsasia.sasia.attest.azure.net` |
| North Central US | `https://sharedncus.ncus.attest.azure.net` |
| South Central US | `https://sharedscus.scus.attest.azure.net` |
+| Australia East | `https://sharedeau.eau.attest.azure.net` |
+| Australia Southeast | `https://sharedsau.sau.attest.azure.net` |
| US Gov Virginia | `https://sharedugv.ugv.attest.azure.us` |
| US Gov Arizona | `https://shareduga.uga.attest.azure.us` |
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
# Microsoft Azure Attestation
-Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves and [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves.
+Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of platforms backed by [Trusted Platform Modules (TPMs)](/windows/security/information-protection/tpm/trusted-platform-module-overview) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves and [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves, as well as [Trusted launch for Azure VMs](/azure/virtual-machines/trusted-launch#microsoft-defender-for-cloud-integration) and [Azure confidential VMs](/azure/confidential-computing/confidential-vm-overview).
Attestation is a process for demonstrating that software binaries were properly instantiated on a trusted platform. Remote relying parties can then gain confidence that only such intended software is running on trusted hardware. Azure Attestation is a unified customer-facing service and framework for attestation.
Azure Attestation is the preferred choice for attesting TEEs as it offers the fo
- Unified framework for attesting multiple environments such as TPMs, SGX enclaves and VBS enclaves
- Allows creation of custom attestation providers and configuration of policies to restrict token generation
-- Offers regional shared providers which can attest with no configuration from users
+- Offers [regional shared providers](basic-concepts.md#regional-shared-provider) which can attest with no configuration from users
- Protects its data while in use with implementation in an SGX enclave
- Highly available service
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
Get-AzAutomationDscNodeConfiguration `
Register the Azure Linux VM as a Desired State Configuration (DSC) node for the Azure Automation account. The [Register-AzAutomationDscNode](/powershell/module/az.automation/register-azautomationdscnode) cmdlet only supports VMs running Windows OS. The Azure Linux VM will first need to be configured for DSC. For detailed steps, see [Get started with Desired State Configuration (DSC) for Linux](/powershell/dsc/getting-started/lnxgettingstarted).
-1. Construct a python script with the registration command using PowerShell for later execution on your Azure Linux VM by running the following code:
+1. Construct a Python script with the registration command using PowerShell for later execution on your Azure Linux VM by running the following code:
```powershell
$primaryKey = (Get-AzAutomationRegistrationInfo `
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
To resolve this issue, remove the following registry key, restart `HealthService
#### Issue
-You receive the following message when you try to add a Hybrid Runbook Worker by using the `sudo python /opt/microsoft/omsconfig/.../onboarding.py --register` python script:
+You receive the following message when you try to add a Hybrid Runbook Worker by using the `sudo python /opt/microsoft/omsconfig/.../onboarding.py --register` Python script:
`Unable to register, an existing worker was found. Please deregister any existing worker and try again.`
-Additionally, attempting to deregister a Hybrid Runbook Worker by using the `sudo python /opt/microsoft/omsconfig/.../onboarding.py --deregister` python script:
+Additionally, attempting to deregister a Hybrid Runbook Worker by using the `sudo python /opt/microsoft/omsconfig/.../onboarding.py --deregister` Python script fails with the following message:
`Failed to deregister worker. [response_status=404]`
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
To schedule a new update deployment, perform the following steps. Depending on t
* If you want to install only security and critical updates, along with one or more optional updates, you should select **Security** and **Critical** under **Update classifications**. Then for the **Include** option, specify the KBIDs for the optional updates.
- * If you want to install only security and critical updates, but skip one or more updates for python to avoid breaking your legacy application, you should select **Security** and **Critical** under **Update classifications**. Then for the **Exclude** option add the python packages to skip.
+ * If you want to install only security and critical updates, but skip one or more updates for Python to avoid breaking your legacy application, you should select **Security** and **Critical** under **Update classifications**. Then for the **Exclude** option add the Python packages to skip.
9. Select **Schedule settings**. The default start time is 30 minutes after the current time. You can set the start time to any time from 10 minutes in the future.
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
except Exception as ex:
    print(ex)
```
-In your console window, navigate to the directory containing the *app-configuration-quickstart.py* file and execute the following python command to run the app:
+In your console window, navigate to the directory containing the *app-configuration-quickstart.py* file and execute the following Python command to run the app:
```console
python app-configuration-quickstart.py
```
azure-arc Offline Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/offline-deployment.md
Because monthly updates are provided for Azure Arc-enabled data services and the
A [sample script](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/scripts/pull-and-push-arc-data-services-images-to-private-registry.py) can be found in the Azure Arc GitHub repository.
> [!NOTE]
-> This script requires the installation of python and the [Docker CLI](https://docs.docker.com/install/).
+> This script requires the installation of Python and the [Docker CLI](https://docs.docker.com/install/).
The script will interactively prompt for the following information. Alternatively, if you want to have the script run without interactive prompts, you can set the corresponding environment variables before running the script.
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 03/08/2022 Last updated : 03/09/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
For complete release version information, see [Version log](version-log.md).
### Data Controller
- Fixed the issue "ConfigMap sql-config-[SQL MI] does not exist" from the February 2022 release. This issue occurs when deploying a SQL Managed Instance with service type of `loadBalancer` with certain load balancers.
-### SQL Managed Instance
--- Support for readable secondary replicas:
- - To set readable secondary replicas use `--readable-secondaries` when you create or update an Arc-enabled SQL Managed Instance deployment.
- - Set `--readable secondaries` to any value between 0 and the number of replicas minus 1.
- - `--readable-secondaries` only applies to Business Critical tier.
-- Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary.
-- RWX capable storage class is required for backups, for both General Purpose and Business Critical service tiers.
-- Billing support when using multiple read replicas.
-
-For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
-
-### User experience improvements
-
-The following improvements are available in [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
-- Azure Arc and Azure CLI extensions now generally available.
-- Changed edit commands for SQL Managed Instance for Azure Arc dashboard to use `update`, reflecting Azure CLI changes. This works in indirect or direct mode.
-- Data controller deployment wizard step for connectivity mode is now earlier in the process.
-- Removed an extra backups field in SQL MI deployment wizard.
-
## February 2022
This release is published February 25, 2022.
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
Previously updated : 03/08/2022 Last updated : 03/09/2022 # Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.4.1_2022-03-08`|
|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1|
|ARM API version|2021-11-01|
-|`arcdata` Azure CLI extension version| 1.2.1|
-|Arc enabled Kubernetes helm chart extension version|1.1.18791000|
+|`arcdata` Azure CLI extension version| 1.2.3|
+|Arc enabled Kubernetes helm chart extension version|1.1.18911000|
|Arc Data extension for Azure Data Studio|1.0|
### February 25, 2022
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| -- | - |
| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
+| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
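When outbound access is restricted, a quick TCP reachability check against these endpoints can confirm that port 443 is open before onboarding. A minimal sketch (endpoint list abbreviated to a few literal hosts from the table; wildcard and region-specific hosts are omitted, and this is not an official tool):

```python
import socket

# Subset of the required endpoints from the table above (Azure Cloud).
ENDPOINTS = [
    "management.azure.com",
    "login.microsoftonline.com",
    "mcr.microsoft.com",
]

def reachable(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `[h for h in ENDPOINTS if not reachable(h)]` from the cluster's network lists any blocked hosts.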
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 02/16/2022 Last updated : 03/09/2022 description: "Troubleshooting common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
Error: list: failed to list: secrets is forbidden: User "myuser" cannot list res
The user connecting the cluster to Azure Arc should have the `cluster-admin` role assigned to them on the cluster.
-
### Unable to connect OpenShift cluster to Azure Arc
If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc, check the following:
az provider show -n Microsoft.KubernetesConfiguration --debug
az k8s-configuration create <parameters> --debug
```
-To help troubleshoot issues with `fluxConfigurations` resource (Flux v2), run these az commands with `--debug` parameter specified:
-
-```azurecli
-az provider show -n Microsoft.KubernetesConfiguration --debug
-az k8s-configuration flux create <parameters> --debug
-```
-
### Flux v1 - Create configurations
Write permissions on the Azure Arc-enabled Kubernetes resource (`Microsoft.Kubernetes/connectedClusters/Write`) are necessary and sufficient for creating configurations on that cluster.
metadata:
selfLink: ""
```
+### Flux v2 - General
+
+To help troubleshoot issues with `fluxConfigurations` resource (Flux v2), run these az commands with `--debug` parameter specified:
+
+```azurecli
+az provider show -n Microsoft.KubernetesConfiguration --debug
+az k8s-configuration flux create <parameters> --debug
+```
+
+### Flux v2 - Webhook/dry run errors
+
+If you see Flux fail to reconcile with an error like `dry-run failed, error: admission webhook "<webhook>" does not support dry run`, you can resolve the issue by finding the `ValidatingWebhookConfiguration` or the `MutatingWebhookConfiguration` and setting the `sideEffects` to `None` or `NoneOnDryRun`.
+
+For more information, see [How do I resolve `webhook does not support dry run` errors?](https://fluxcd.io/docs/faq/#how-do-i-resolve-webhook-does-not-support-dry-run-errors)
+
### Flux v2 - Error installing the `microsoft.flux` extension
The `microsoft.flux` extension installs the Flux controllers and Azure GitOps agents into your Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. If the extension is not already installed in a cluster and you create a GitOps configuration resource for that cluster, the extension will be installed automatically.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 2/22/2022 Last updated : 03/09/2022
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you di
General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Flux v2 is the way forward, and Flux v1 will eventually be deprecated.
>[!IMPORTANT]
->GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One important feature, multi-tenancy, could be a breaking change for some users. To prepare yourself for the release of multi-tenancy, [please review these details](#multi-tenancy).
+>GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One important feature, multi-tenancy, could affect some users when it is released. To prepare yourself for the release of multi-tenancy, [please review these details](#multi-tenancy).
## Prerequisites
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
| `https://<region>.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. Depends on `<region>` (the supported regions mentioned earlier). |
| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. |
| `https://mcr.microsoft.com` | Required to pull container images for Flux controllers. |
-| `https://azurearcfork8s.azurecr.io` | Required to pull container images for GitOps agents. |
## Enable CLI extensions
False whl k8s-configuration C:\Users\somename\.azure\c
False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.0.4
```
+> [!TIP]
+> For help resolving any errors, see the Flux v2 suggestions in [Azure Arc-enabled Kubernetes and GitOps troubleshooting](troubleshooting.md#flux-v2general).
+
## Apply a Flux configuration by using the Azure CLI
Use the `k8s-configuration` Azure CLI extension (or the Azure portal) to enable GitOps in an AKS or Arc-enabled Kubernetes cluster. For a demonstration, use the public [flux2-kustomize-helm-example](https://github.com/fluxcd/flux2-kustomize-helm-example) repository.
By using this annotation, the HelmRelease that is deployed will be patched with
Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy) in [version 0.26](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). This capability will be integrated into Azure GitOps with Flux v2 prior to general availability.
>[!NOTE]
->This will be a breaking change if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects. It [may also be a breaking change](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default) if you use a Kubernetes version less than 1.20.6. To prepare for the release of this multi-tenancy feature, take these actions:
+>You need to prepare for the multi-tenancy feature release if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects, or [if you use a Kubernetes version less than 1.20.6](https://fluxcd.io/blog/2022/01/january-update/#flux-v026-more-secure-by-default). To prepare, take these actions:
>
->* Upgrade to Kubernetes version 1.20.6 or greater.
->* In your Kubernetes manifests assure that all sourceRef are to objects within the same namespace as the GitOps configuration.
-> * If you need time to update your manifests, you can opt-out of multi-tenancy. However, you still need to upgrade your Kubernetes version.
+> * Upgrade to Kubernetes version 1.20.6 or greater.
> * In your Kubernetes manifests, ensure that all `sourceRef` references are to objects within the same namespace as the GitOps configuration.
+> * If you need time to update your manifests, you can opt-out of multi-tenancy. However, you still need to upgrade your Kubernetes version.
### Update manifests for multi-tenancy
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
A comma-delimited list of beta features to enable. Beta features enabled by thes
|||
|AzureWebJobsFeatureFlags|`feature1,feature2`|
-## AzureWebJobsSecretStorageType
+## AzureWebJobsKubernetesSecretName
+
+Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes. Requires that `AzureWebJobsSecretStorageType` be set to `kubernetes`. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.
+
+|Key|Sample value|
+|||
+|AzureWebJobsKubernetesSecretName|`<SECRETS_RESOURCE>`|
+
+To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+## AzureWebJobsSecretStorageKeyVaultClientId
+
+The client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
+
+|Key|Sample value|
+|||
+|AzureWebJobsSecretStorageKeyVaultClientId|`<CLIENT_ID>`|
-Specifies the repository or provider to use for key storage. Currently, the supported repositories are blob storage ("Blob") and the local file system ("Files"). The default is blob in version 2 and file system in version 1.
+To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+## AzureWebJobsSecretStorageKeyVaultClientSecret
+
+The secret for client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
|Key|Sample value|
|||
-|AzureWebJobsSecretStorageType|Files|
+|AzureWebJobsSecretStorageKeyVaultClientSecret|`<CLIENT_SECRET>`|
+
+To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+## AzureWebJobsSecretStorageKeyVaultName
+
+The name of a key vault instance used to store keys. This setting is only supported for version 3.x of the Functions runtime. For version 4.x, instead use `AzureWebJobsSecretStorageKeyVaultUri`. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`.
+
+The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`, `Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+|Key|Sample value|
+|||
+|AzureWebJobsSecretStorageKeyVaultName|`<VAULT_NAME>`|
+
+To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+## AzureWebJobsSecretStorageKeyVaultTenantId
+
+The tenant ID of the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime. To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+|Key|Sample value|
+|||
+|AzureWebJobsSecretStorageKeyVaultTenantId|`<TENANT_ID>`|
+
+## AzureWebJobsSecretStorageKeyVaultUri
+
+The URI of a key vault instance used to store keys. Supported in version 4.x and later versions of the Functions runtime. This is the recommended setting for using a key vault instance for key storage. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`.
+
+The `AzureWebJobsSecretStorageKeyVaultUri` value should be the full value of **Vault URI** displayed in the **Key Vault overview** tab, including `https://`.
+
+The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`, `Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+|Key|Sample value|
+|||
+|AzureWebJobsSecretStorageKeyVaultUri|`https://<VAULT_NAME>.vault.azure.net`|
+
+To learn more, see [Use Key Vault references for Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json).
+
+## AzureWebJobsSecretStorageSas
+
+A Blob Storage SAS URL for a second storage account used for key storage. By default, Functions uses the account set in `AzureWebJobsStorage`. When using this secret storage option, make sure that `AzureWebJobsSecretStorageType` isn't explicitly set or is set to `blob`. To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+
+|Key|Sample value|
+|--|--|
|AzureWebJobsSecretStorageSas| `<BLOB_SAS_URL>` |
+
+## AzureWebJobsSecretStorageType
+
+Specifies the repository or provider to use for key storage. Keys are always encrypted before being stored using a secret unique to your function app.
+
+|Key| Value| Description|
+||||
+|AzureWebJobsSecretStorageType|`blob`|Keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. This is the default behavior when `AzureWebJobsSecretStorageType` isn't set.<br/>To specify a different storage account, use the `AzureWebJobsSecretStorageSas` setting to indicate the SAS URL of a second storage account. |
+|AzureWebJobsSecretStorageType | `files` | Keys are persisted on the file system. This is the default for Functions v1.x.|
+|AzureWebJobsSecretStorageType |`keyvault` | Keys are stored in a key vault instance set by `AzureWebJobsSecretStorageKeyVaultName`. |
+|AzureWebJobsSecretStorageType | `kubernetes` | Keys are stored in Kubernetes Secrets. Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
+
+To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
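Taken together, switching a version 4.x app to Key Vault key storage means setting the storage type plus the vault URI. As an illustration only (all resource names are placeholders), the two settings can be applied with the Azure CLI:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings AzureWebJobsSecretStorageType=keyvault \
    AzureWebJobsSecretStorageKeyVaultUri=https://<VAULT_NAME>.vault.azure.net
```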
## AzureWebJobsStorage
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
pool:
  vmImage: ubuntu-latest
steps:
- task: UsePythonVersion@0
- displayName: "Setting python version to 3.7 as required by functions"
+ displayName: "Setting Python version to 3.7 as required by functions"
  inputs:
    versionSpec: '3.7'
    architecture: 'x64'
pool:
  vmImage: ubuntu-latest
steps:
- task: UsePythonVersion@0
- displayName: "Setting python version to 3.6 as required by functions"
+ displayName: "Setting Python version to 3.6 as required by functions"
  inputs:
    versionSpec: '3.6'
    architecture: 'x64'
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
on:
env:
  AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your application's name
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
- PYTHON_VERSION: '3.7' # set this to the python version to use (supports 3.6, 3.7, 3.8)
+ PYTHON_VERSION: '3.7' # set this to the Python version to use (supports 3.6, 3.7, 3.8)
jobs: build-and-deploy:
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
When you scale up or down in size, the required address space is doubled for a s
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
-Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /26 and /24 is required for Windows and Linux respectively.
+Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is required for Windows and Linux respectively.
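The address counts above can be verified with Python's standard `ipaddress` module; this small sketch (subnet prefixes are example values) confirms that a /24 yields 256 addresses and a /26 yields 64:

```python
import ipaddress

# A /24 provides 256 addresses (Windows plans); a /26 provides 64 (Linux)
windows_subnet = ipaddress.ip_network("10.0.0.0/24")
linux_subnet = ipaddress.ip_network("10.0.1.0/26")

print(windows_subnet.num_addresses)  # 256
print(linux_subnet.num_addresses)    # 64
```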
When you want your apps in another plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the python version you want to use, prefixed by `python|` e.g. `python|3.9`
+Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app, and replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|`, for example, `python|3.9`.
You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
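When scripting this update, it can help to validate the `<LINUX_FX_VERSION>` value before passing it to the CLI. This is a hedged sketch (the helper name is illustrative) that builds the `python|<version>` string:

```python
import re

# Illustrative helper: builds the linuxFxVersion value, e.g. "python|3.9".
# Assumes the version is a "3.x"-style string; adjust as needed.
def linux_fx_version(python_version):
    if not re.fullmatch(r"3\.\d+", python_version):
        raise ValueError(f"Unexpected Python version: {python_version}")
    return f"python|{python_version}"

print(linux_fx_version("3.9"))  # python|3.9
```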
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
Here are a few examples of client libraries that have implemented the async pattern:
- [Janus Queue](https://pypi.org/project/janus/) - Thread-safe asyncio-aware queue for Python - [pyzmq](https://pypi.org/project/pyzmq/) - Python bindings for ZeroMQ
-##### Understanding async in python worker
+##### Understanding async in Python worker
When you define `async` in front of a function signature, Python marks the function as a coroutine. When you call the coroutine, it can be scheduled as a task on an event loop. When you call `await` in an async function, it registers a continuation in the event loop, which allows the event loop to process the next task during the wait time. In our Python worker, the worker shares the event loop with the customer's `async` function and is capable of handling multiple requests concurrently. We strongly encourage our customers to use asyncio-compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations greatly increases your function's throughput compared to libraries implemented in a synchronous fashion. > [!NOTE]
-> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted since the event loop will be blocked which prohibit the python worker to handle concurrent requests.
+> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted, because the event loop will be blocked, which prohibits the Python worker from handling concurrent requests.
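The concurrency model above can be demonstrated with plain `asyncio`, outside of any Functions host: ten coroutines that each `await` a 0.1-second sleep overlap on one event loop, so the batch finishes in well under a second rather than taking ten sequential waits. This is a sketch of the event-loop behavior, not worker code:

```python
import asyncio
import time

async def handle_request(n):
    # awaiting yields control, letting the event loop run other tasks
    await asyncio.sleep(0.1)
    return n

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results)        # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(elapsed < 1.0)  # True: the waits overlap instead of running serially
```

If `handle_request` used `time.sleep(0.1)` instead of `await asyncio.sleep(0.1)`, the event loop would be blocked and the same batch would take roughly one full second.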
#### Use multiple language worker processes
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
To learn more about access keys, see the [HTTP trigger binding article](function
#### Secret repositories
-By default, keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. You can use specific application settings to override this behavior and store keys in a different location.
+By default, keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. You can use the [AzureWebJobsSecretStorageType](functions-app-settings.md#azurewebjobssecretstoragetype) setting to override this behavior and store keys in a different location.
-|Location |Setting | Value | Description |
-|||||
-|Different storage account | `AzureWebJobsSecretStorageSas` | `<BLOB_SAS_URL>` | Stores keys in Blob storage of a second storage account, based on the provided SAS URL. Keys are encrypted before being stored using a secret unique to your function app. |
-|File system | `AzureWebJobsSecretStorageType` | `files` | Keys are persisted on the file system, encrypted before storage using a secret unique to your function app. |
-|Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file). |
-|Kubernetes Secrets |`AzureWebJobsSecretStorageType`<br/>`AzureWebJobsKubernetesSecretName` (optional) | `kubernetes`<br/>`<SECRETS_RESOURCE>` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The Azure Functions Core Tools generates the values automatically when deploying to Kubernetes.|
+|Location | Value | Description |
+||||
+|Second storage account | `blob` | Stores keys in Blob storage of a different storage account, based on the SAS URL in [AzureWebJobsSecretStorageSas](functions-app-settings.md#azurewebjobssecretstoragesas). |
+|File system | `files` | Keys are persisted on the file system, which is the default in Functions v1.x. |
+|Azure Key Vault | `keyvault` | The key vault set in [AzureWebJobsSecretStorageKeyVaultUri](functions-app-settings.md#azurewebjobssecretstoragekeyvaulturi) is used to store keys. To learn more, see [Use Key Vault references for Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json). |
+|Kubernetes Secrets |`kubernetes` | The resource set in [AzureWebJobsKubernetesSecretName](functions-app-settings.md#azurewebjobskubernetessecretname) is used to store keys. Supported only when running the Functions runtime in Kubernetes. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
-#### Using Key Vault in Functions v4
+When using Key Vault for key storage, the app settings you need depend on the managed identity type. Functions runtime version 3.x only supports system-assigned managed identities.
-The application settings for using Azure Key Vault as the secret repository in Functions v4:
+# [Version 4.x](#tab/v4)
-##### System-assigned managed identity
+| Setting name | System-assigned | User-assigned | App registration |
+| | | | |
+| [AzureWebJobsSecretStorageKeyVaultUri](functions-app-settings.md#azurewebjobssecretstoragekeyvaulturi) | ✓ | ✓ | ✓ |
+| [AzureWebJobsSecretStorageKeyVaultClientId](functions-app-settings.md#azurewebjobssecretstoragekeyvaultclientid) | X | ✓ | ✓ |
+| [AzureWebJobsSecretStorageKeyVaultClientSecret](functions-app-settings.md#azurewebjobssecretstoragekeyvaultclientsecret) | X | X | ✓ |
+| [AzureWebJobsSecretStorageKeyVaultTenantId](functions-app-settings.md#azurewebjobssecretstoragekeyvaulttenantid) | X | X | ✓ |
-| Setting | Value |
-||-|
-| `AzureWebJobsSecretStorageType` | `keyvault` |
-| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
+# [Version 3.x](#tab/v3)
-##### User-assigned managed identity
+| Setting name | System-assigned | User-assigned | App registration |
+| | | | |
+| [AzureWebJobsSecretStorageKeyVaultName](functions-app-settings.md#azurewebjobssecretstoragekeyvaultname) | ✓ | X | X |
-| Setting | Value |
-||-|
-| `AzureWebJobsSecretStorageType` | `keyvault` |
-| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
-| `AzureWebJobsSecretStorageKeyVaultClientId` | `<CLIENT_ID>` |
-
-##### App registration
-
-| Setting | Value |
-||-|
-| `AzureWebJobsSecretStorageType` | `keyvault` |
-| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
-| `AzureWebJobsSecretStorageKeyVaultTenantId` | `<TENANT_ID>` |
-| `AzureWebJobsSecretStorageKeyVaultClientId` | `<CLIENT_ID>` |
-| `AzureWebJobsSecretStorageKeyVaultClientSecret` | `<CLIENT_SECRET>` |
-
-> [!NOTE]
-> The Vault URI should be the full value displayed in the Key Vault overview tab, including `https://`.
+ ### Authentication/authorization
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
The following is a list of open-source projects that extend the capabilities of
| [Azure Maps Gov Cloud Code Samples](https://github.com/Azure-Samples/AzureMapsCodeSamples) | A collection of code samples for using Azure Maps through Azure Government Cloud. | | [Azure Maps & Azure Active Directory Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) | A collection of samples that show how to use Azure Active Directory with Azure Maps. | | [LiveMaps](https://github.com/Azure-Samples/LiveMaps) | Sample application to provide live indoor maps visualization of IoT data on top of Azure Maps using Azure Maps Creator |
-| [Azure Maps Jupyter Notebook samples](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook) | A collection of python samples using the Azure Maps REST services. |
+| [Azure Maps Jupyter Notebook samples](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook) | A collection of Python samples using the Azure Maps REST services. |
| [Azure Maps .NET UWP IoT Remote Control](https://github.com/Azure-Samples/azure-maps-dotnet-webgl-uwp-iot-remote-control) | This is a sample application that shows how to build a remotely controlled map using Azure Maps and IoT hub services. | | [Implement IoT spatial analytics using Azure Maps](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing) | Tracking and capturing relevant events that occur in space and time is a common IoT scenario. |
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
To get around these limitations for specific metrics, you can manually extract t
### Destination limitations
-Any destinations for the diagnostic setting must be created before creating the diagnostic settings. The destination does not have to be in the same subscription as the resource sending logs as long as the user who configures the setting has appropriate Azure RBAC access to both subscriptions. Using Azure Lighthouse, it is also possible to have diagnostic settings sent to a workspace in another Azure Active Directory tenant. The following table provides unique requirements for each destination including any regional restrictions.
+Any destinations for the diagnostic setting must be created before the diagnostic settings are created. The destination doesn't have to be in the same subscription as the resource sending logs, as long as the user who configures the setting has appropriate Azure RBAC access to both subscriptions. By using Azure Lighthouse, it's also possible to have diagnostic settings sent to a workspace, storage account, or Event Hub in another Azure Active Directory tenant. The following table provides unique requirements for each destination, including any regional restrictions.
| Destination | Requirements | |:|:|
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 02/28/2022 Last updated : 03/08/2022 # Guidelines for Azure NetApp Files network planning
In the topology illustrated above, the on-premises network is connected to a hub
* [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md) * [Configure network features for an Azure NetApp Files volume](configure-network-features.md)
+* [Virtual network peering](../virtual-network/virtual-network-peering-overview.md)
azure-netapp-files Faq Application Volume Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-volume-group.md
Previously updated : 01/19/2022 Last updated : 03/09/2022 # Application volume group FAQs
General recommendations for snapshots in an SAP HANA environment are as follows:
* Closely monitor the data volume snapshots. HANA tends to have a high change rate. Keeping snapshots for a long period might increase your capacity needs. Be sure to monitor the used capacity vs. allocated capacity. * If you automatically create snapshots for your (log and file) backups, be sure to monitor their retention to avoid unpredicted volume growth.
+## The mount instructions of a volume include a list of IP addresses. Which IP address should I use?
+
+Application volume group ensures that SAP HANA data and log volumes for one HANA host always have separate storage endpoints with different IP addresses, to achieve the best performance. To host your data, log, and shared volumes across the Azure NetApp Files storage resources, up to six storage endpoints can be created per Azure NetApp Files storage resource that is used. For this reason, it's recommended to size the delegated subnet accordingly. See [Requirements and considerations for application volume group for SAP HANA](application-volume-group-considerations.md). Although all listed IP addresses can be used for mounting, the first listed IP address is the one that provides the lowest latency, so it's recommended to always use the first IP address.
+
+## What is the optimal mount option for SAP HANA?
+
+To have an optimal SAP HANA experience, there is more to do on the Linux client than just mounting the volumes. A complete setup and configuration guide is available for SAP HANA on Azure NetApp Files. It includes many recommended Linux settings and recommended mount options. See the SAP HANA solutions overview on [SAP HANA on Azure NetApp Files](azure-netapp-files-solution-architectures.md#sap-hana) to select the guide for your system architecture.
+ ## The deployment failed and not even a single volume was created. Why is that? This is the normal behavior. Application volume group for SAP HANA provisions the volumes in an atomic fashion. Deployment typically fails because the given PPG doesn't have enough available resources to accommodate your requirements. The Azure NetApp Files team will investigate this situation to provide sufficient resources.
azure-percept Create People Counting Solution With Azure Percept Devkit Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md
This section guides users on modifying the cloned people counting repo with thei
3. Now you will build the module image and push it to your container registry. Open Visual Studio Code integrated terminal by selecting `View > Terminal `
-4. Sign into Docker with the Azure Container registry (ACR) credentials that you saved after creating the registry using below command in terminal-
+4. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry, by using the following command in the terminal. This command displays a warning that using `--password` or `-p` via the CLI is insecure. For a more secure sign-in in your future solution development, use `--password-stdin` instead by following [these instructions](https://docs.docker.com/engine/reference/commandline/login/).
`docker login -u <ACR username> -p <ACR password> <ACR login server>`
+
5. Visual Studio Code now has access to your container registry. In the next steps you will turn the solution code into a container image. In Visual Studio Code explorer, right click the `deployment.template.json` file and select `Build and Push IoT Edge Solution` ![Build and Push IoT Edge Solution](./media/create-people-counting-solution-with-azure-percept-vision-images/build-and-push.png)
Step 5 guides users through creating, configuring, and running a Stream Analytic
3. On the new input pane, enter the following information - - `Input alias` - Enter a unique alias for the input -
+
- `Select IoT Hub from your subscription` - Select this radio button - - `Subscription` - Select the Azure subscription you are using for this lab - - `IoT Hub` - Select the IoT Hub you are using for this lab - - `Consumer group` - Select the consumer group you created previously - - `Shared access policy name` - Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this lab, you can select service - - `Shared access policy key` - This field is auto filled based on your selection for the shared access policy name - - `Endpoint` - Select Messaging Leave all other fields as default-
Step 5 guides users through creating, configuring, and running a Stream Analytic
4. Enter the following information-
- a. `Output alias` - A unique alias for the output
-
- b. `Group workspace` - Select your target group workspace.
-
- c. `Dataset name` - Enter a dataset name
-
- d. `Table name` - Enter a table name
-
- e. `Authentication mode` - Leave as the default
+ - `Output alias` - A unique alias for the output
+
+ - `Select Group workspace from your subscriptions` - Select this radio button
+ - `Group workspace` - Select your target group workspace
+ - `Dataset name` - Enter a dataset name
+ - `Table name` - Enter a table name
+ - `Authentication mode` - User token
:::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-output-fields.png" alt-text="Power BI new output fields.":::
This step will guide users on how to create a Power BI report from the People Co
7. This will generate a graph as follows-
- ![graph is generated](./media/create-people-counting-solution-with-azure-percept-vision-images/ power-bi-graph.png)
+ ![graph is generated](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-graph.png)
8. Click `Refresh` periodically to update the graph
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
In this section, you will use Visual Studio Code to create a local Azure Functio
![the directory location](./media/voice-control-your-inventory-images/select-airlift-folder.png) 4. <strong>Provide the following information at the prompts</strong>: Select a language for your function project: Choose <strong>Python</strong>. ![following information at the prompts](./media/voice-control-your-inventory-images/language-python.png)
- 5. <strong>Select a Python alias to create a virtual environment</strong>: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary. Select skip virtual environment you donΓÇÖt have python installed.
+ 5. <strong>Select a Python alias to create a virtual environment</strong>: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary. Select skip virtual environment if you don't have Python installed.
![create a virtual environment](./media/voice-control-your-inventory-images/skip-virtual-env.png) 6. <strong>Select a template for your project's first function</strong>: Choose <strong>HTTP trigger</strong>. ![Select a template](./media/voice-control-your-inventory-images/http-trigger.png)
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md
The [SQL Server command-line tools](/sql/linux/sql-server-linux-setup-tools) are
## Connect to Azure SQL Edge from another container on the same host
-Because two containers that are running on the same host are on the same Docker network, you can easily access them by using the container name and the port address for the service. For example, if you're connecting to the instance of Azure SQL Edge from another python module (container) on the same host, you can use a connection string similar to the following. (This example assumes that Azure SQL Edge is configured to listen on the default port.)
+Because two containers that are running on the same host are on the same Docker network, you can easily access them by using the container name and the port address for the service. For example, if you're connecting to the instance of Azure SQL Edge from another Python module (container) on the same host, you can use a connection string similar to the following. (This example assumes that Azure SQL Edge is configured to listen on the default port.)
```python
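# Hedged sketch (hypothetical names and values): when connecting from another
# container on the same Docker network, use the Azure SQL Edge container name
# as the server name, with the default port 1433. The driver name assumes the
# Microsoft ODBC Driver 17 for SQL Server is installed in the client container.
db_connection_string = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=azuresqledge,1433;"
    "DATABASE=master;UID=sa;PWD=<YourStrong!Passw0rd>"
)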
conn = pyodbc.connect(db_connection_string, autocommit=True)
## Connect to Azure SQL Edge from another network machine
-You might want to connect to the instance of Azure SQL Edge from another machine on the network. To do so, use the IP address of the Docker host and the host port to which the Azure SQL Edge container is mapped. For example, if the IP address of the Docker host is *xxx.xxx.xxx.xxx*, and the Azure SQL Edge container is mapped to host port *1600*, then the server address for the instance of Azure SQL Edge would be *xxx.xxx.xxx.xxx,1600*. The updated python script is:
+You might want to connect to the instance of Azure SQL Edge from another machine on the network. To do so, use the IP address of the Docker host and the host port to which the Azure SQL Edge container is mapped. For example, if the IP address of the Docker host is *xxx.xxx.xxx.xxx*, and the Azure SQL Edge container is mapped to host port *1600*, then the server address for the instance of Azure SQL Edge would be *xxx.xxx.xxx.xxx,1600*. The updated Python script is:
```python
azure-sql Authentication Azure Ad Only Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication.md
Previously updated : 02/14/2022 Last updated : 03/08/2022
[!INCLUDE[appliesto-sqldb-sqlmi-asa-dedicated-only](../includes/appliesto-sqldb-sqlmi-asa-dedicated-only.md)]
-Azure AD-only authentication is a feature within [Azure SQL](../azure-sql-iaas-vs-paas-what-is-overview.md) that allows the service to only support Azure AD authentication, and is supported for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md). Azure AD-only authentication is also available for dedicated SQL pools (formerly SQL DW) in standalone servers, but not yet available for dedicated SQL pools in Azure Synapse workspaces.
+Azure AD-only authentication is a feature within [Azure SQL](../azure-sql-iaas-vs-paas-what-is-overview.md) that allows the service to only support Azure AD authentication, and is supported for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md).
+
+Azure AD-only authentication is also available for dedicated SQL pools (formerly SQL DW) in standalone servers. Azure AD-only authentication can be enabled for the Azure Synapse workspace. For more information, see [Azure AD-only authentication with Azure Synapse workspaces](../../synapse-analytics/sql/active-directory-authentication.md).
SQL authentication is disabled when enabling Azure AD-only authentication in the Azure SQL environment, including connections from SQL server administrators, logins, and users. Only users using [Azure AD authentication](authentication-aad-overview.md) are authorized to connect to the server or database.
For more limitations, see [T-SQL differences between SQL Server & Azure SQL Mana
> [Tutorial: Enable Azure Active Directory only authentication with Azure SQL](authentication-azure-ad-only-authentication-tutorial.md) > [!div class="nextstepaction"]
-> [Create server with Azure AD-only authentication enabled in Azure SQL](authentication-azure-ad-only-authentication-create-server.md)
+> [Create server with Azure AD-only authentication enabled in Azure SQL](authentication-azure-ad-only-authentication-create-server.md)
azure-sql Authentication Azure Ad User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md
Title: User-assigned managed identity in Azure AD for Azure SQL
-description: This article provides information on user-assigned managed identities in Azure Active Directory (AD) with Azure SQL Database and Azure SQL Managed Instance
+description: User-assigned managed identities (UMI) in Azure Active Directory (Azure AD) for Azure SQL Database, SQL Managed Instance, and dedicated SQL pools in Azure Synapse Analytics.
Previously updated : 12/15/2021 Last updated : 03/09/2022 # User-assigned managed identity in Azure AD for Azure SQL > [!NOTE] > User-assigned managed identity for Azure SQL is in **public preview**. Azure Active Directory (AD) supports two types of managed identities: System-assigned managed identity (SMI) and user-assigned managed identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
-When using Azure AD authentication with Azure SQL Managed Instance, a managed identity must be assigned to the server identity. Previously, only a system-assigned managed identity could be assigned to the Managed Instance or SQL Database server identity. With support for user-assigned managed identity, the UMI can be assigned to Azure SQL Managed Instance or Azure SQL Database as the instance or server identity. This feature is now supported for SQL Database.
+A system-assigned managed identity is automatically assigned to a managed instance when it is created. When using Azure AD authentication with Azure SQL Managed Instance, a managed identity must be assigned to the server identity. Previously, only a system-assigned managed identity could be assigned to the Managed Instance or SQL Database server identity. With support for user-assigned managed identity, the UMI can be assigned to Azure SQL Managed Instance or Azure SQL Database as the instance or server identity. This feature is now supported for SQL Database.
> [!NOTE]
-> A system-assigned managed identity is automatically assigned to a managed instance when it is created.
->
-> User-assigned managed identity is not supported for Azure Synapse Analytics.
+> This article applies only to dedicated SQL pools (formerly SQL DW) in standalone Azure SQL servers. For more information on user-assigned managed identities for dedicated pools in Azure Synapse workspaces, see [Using a user-assigned managed identity](../../synapse-analytics/security/workspaces-encryption.md#using-a-user-assigned-managed-identity).
## Benefits of using user-assigned managed identities
Once the UMI is created, some permissions are needed to allow the UMI to read fr
- [**GroupMember.Read.All**](/graph/permissions-reference#group-permissions) – allows access to Azure AD group information - [**Application.Read.ALL**](/graph/permissions-reference#application-resource-permissions) – allows access to Azure AD service principal (applications) information
-### Granting permissions
+### Grant permissions
-The following is a sample PowerShell script that will grant the necessary permissions for UMI or SMI.
+The following is a sample PowerShell script that will grant the necessary permissions for UMI or SMI. This sample will assign permissions to the UMI `umiservertest`. To execute the script, you must sign in as a user with a "Global Administrator" or "Privileged Role Administrator" role, and have the following [Microsoft Graph permissions](/graph/auth/auth-concepts#microsoft-graph-permissions):
+- User.Read.All
+- GroupMember.Read.All
+- Application.Read.ALL
```powershell # Script to assign permissions to the UMI "umiservertest"
$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "Application.Read.A
New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id ```
+In the final steps of the script, if you have multiple UMIs with similar names, you have to use the proper `$MSI` array index, for example, `$AAD_SP.ObjectId[0]`.
+ ### Check permissions for user-assigned manage identity To check permissions for a UMI, go to the [Azure portal](https://portal.azure.com). In the **Azure Active Directory** resource, go to **Enterprise applications**. Select **All Applications** for the **Application type**, and search for the UMI that was created.
The ARM template used in [Creating an Azure SQL logical server using a user-assi
## Limitations and known issues -- This feature isn't supported for Azure Synapse Analytics. - After a Managed Instance is created, the **Active Directory admin** blade in the Azure portal shows a warning: `Managed Instance needs permissions to access Azure Active Directory. Click here to grant "Read" permissions to your Managed Instance.` If the user-assigned managed identity was given the appropriate permissions discussed in the above [Permissions](#permissions) section, this warning can be ignored. - If a system-assigned or user-assigned managed identity is used as the server or instance identity, deleting the identity will result in the server or instance inability to access Microsoft Graph. Azure AD authentication and other functions will fail. To restore Azure AD functionality, a new SMI or UMI must be assigned to the server with appropriate permissions. - Permissions to access Microsoft Graph using UMI or SMI can only be granted using PowerShell. These permissions can't be granted using the Azure portal.
The ARM template used in [Creating an Azure SQL logical server using a user-assi
> [Create an Azure SQL logical server using a user-assigned managed identity](authentication-azure-ad-user-assigned-managed-identity-create-server.md) > [!div class="nextstepaction"]
-> [Create an Azure SQL Managed Instance with a user-assigned managed identity](../managed-instance/authentication-azure-ad-user-assigned-managed-identity-create-managed-instance.md)
+> [Create an Azure SQL Managed Instance with a user-assigned managed identity](../managed-instance/authentication-azure-ad-user-assigned-managed-identity-create-managed-instance.md)
+
+> [!div class="nextstepaction"]
+> [Using a user-assigned managed identity in Azure Synapse workspaces](../../synapse-analytics/security/workspaces-encryption.md#using-a-user-assigned-managed-identity)
azure-sql Database Import Export Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import-export-private-link.md
$importRequest = New-AzSqlDatabaseExport -ResourceGroupName "<resourceGroupName>
### Create Import-Export Private link using REST API
-Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. Refer to [Import Database API](/rest/api/sql/2021-08-01-preview/servers/import-database.md)
+Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. Refer to [Import Database API](/rest/api/sql/2021-08-01-preview/servers/import-database)
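As a rough illustration of the shape of such a REST call, the following sketch builds the request URL and body for an import that opts in to Private Link. The URL pattern follows the API version cited above; the body fields, in particular the `networkIsolation` section and the resource-ID values, are assumptions that should be verified against the Import Database API reference.

```python
# Hypothetical sketch of an Import Database REST request with Private Link.
# The body fields (especially "networkIsolation") are assumptions; verify
# them against the Import Database API reference before use.

def build_import_request(subscription_id, resource_group, server_name,
                         database_name, storage_uri, storage_key,
                         admin_login, admin_password,
                         sql_server_resource_id, storage_account_resource_id):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Sql/servers/{server_name}"
        "/import?api-version=2021-08-01-preview"
    )
    body = {
        "databaseName": database_name,
        "storageKeyType": "StorageAccessKey",
        "storageKey": storage_key,
        "storageUri": storage_uri,
        "administratorLogin": admin_login,
        "administratorLoginPassword": admin_password,
        "authenticationType": "Sql",
        # Assumed shape: private-link targets for the logical SQL server
        # and the storage account holding the BACPAC file.
        "networkIsolation": {
            "sqlServerResourceId": sql_server_resource_id,
            "storageAccountResourceId": storage_account_resource_id,
        },
    }
    return url, body

url, body = build_import_request(
    "00000000-0000-0000-0000-000000000000", "myrg", "myserver", "mydb",
    "https://mystorage.blob.core.windows.net/backups/mydb.bacpac",
    "<storage-key>", "sqladmin", "<password>",
    "/subscriptions/.../servers/myserver",
    "/subscriptions/.../storageAccounts/mystorage",
)
print(url)
```

The resulting `(url, body)` pair would then be POSTed with an Azure AD bearer token; that authentication step is omitted here.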
## Next steps - [Import or Export Azure SQL Database without allowing Azure services to access the server](database-import-export-azure-services-off.md)-- [Import a database from a BACPAC file](database-import.md)
+- [Import a database from a BACPAC file](database-import.md)
azure-sql Firewall Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/firewall-configure.md
Previously updated : 07/14/2021 Last updated : 03/09/2022 # Azure SQL Database and Azure Synapse IP firewall rules [!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
When a computer tries to connect to your server from the internet, the firewall
### Connections from inside Azure
-To allow applications hosted inside Azure to connect to your SQL server, Azure connections must be enabled. To enable Azure connections, there must be a firewall rule with starting and ending IP addresses set to 0.0.0.0.
+To allow applications hosted inside Azure to connect to your SQL server, Azure connections must be enabled. To enable Azure connections, there must be a firewall rule with starting and ending IP addresses set to 0.0.0.0. This recommended rule is only applicable to Azure SQL Database.
When an application from Azure tries to connect to the server, the firewall checks that Azure connections are allowed by verifying this firewall rule exists. This can be turned on directly from the Azure portal blade by switching the **Allow Azure Services and resources to access this server** to **ON** in the **Firewalls and virtual networks** settings. Switching the setting to ON creates an inbound firewall rule for IP 0.0.0.0 - 0.0.0.0 named **AllowAllWindowsAzureIps**. The rule can be viewed in your master database [sys.firewall_rules](/sql/relational-databases/system-catalog-views/sys-firewall-rules-azure-sql-database) view. Use PowerShell or the Azure CLI to create a firewall rule with start and end IP addresses set to 0.0.0.0 if you're not using the portal.
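The check described above can be sketched as follows. This is a minimal illustration, not Azure's implementation; the rule list is hypothetical, mirroring what the `sys.firewall_rules` view would return.

```python
# Minimal sketch of the check described above: Azure connections are allowed
# when a rule spans exactly 0.0.0.0 - 0.0.0.0 (the portal creates this rule
# with the name AllowAllWindowsAzureIps). The rule list is hypothetical.

def allows_azure_connections(firewall_rules):
    return any(
        rule["start_ip"] == "0.0.0.0" and rule["end_ip"] == "0.0.0.0"
        for rule in firewall_rules
    )

rules = [
    {"name": "ClientOffice", "start_ip": "203.0.113.5", "end_ip": "203.0.113.10"},
    {"name": "AllowAllWindowsAzureIps", "start_ip": "0.0.0.0", "end_ip": "0.0.0.0"},
]

print(allows_azure_connections(rules))      # True: the 0.0.0.0 rule exists
print(allows_azure_connections(rules[:1]))  # False: only the client rule
```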
azure-sql Outbound Firewall Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/outbound-firewall-rule-overview.md
Last updated 11/10/2021
-# Outbound firewall rules for Azure SQL Database and Azure Synapse Analytics (preview)
+# Outbound firewall rules for Azure SQL Database and Azure Synapse Analytics
[!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa-formerly-sqldw.md)] Outbound firewall rules limit network traffic from the Azure SQL logical server to a customer defined list of Azure Storage accounts and Azure SQL logical servers. Any attempt to access storage accounts or SQL Databases not in this list is denied. The following Azure SQL Database features support this feature:
Outbound firewall rules limit network traffic from the Azure SQL logical server
> [!IMPORTANT] > This article applies to both Azure SQL Database and [dedicated SQL pool (formerly SQL DW)](../../synapse-analytics\sql-data-warehouse\sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated with the server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse Analytics. Likewise, any references to 'server' is referring to the [logical SQL server](logical-servers.md) that hosts Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. This article does *not* apply to Azure SQL Managed Instance or dedicated SQL pools in Azure Synapse Analytics workspaces.
+> [!IMPORTANT]
+> Outbound firewall rules are defined at the [logical SQL server](logical-servers.md) level. Geo-replication and auto-failover groups require the same set of rules to be defined on the primary and all secondaries.
+ ## Set outbound firewall rules in the Azure portal 1. Browse to the **Outbound networking** section in the **Firewalls and virtual networks** blade for your Azure SQL Database and select **Configure outbound networking restrictions**.
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-business-critical.md
Previously updated : 02/18/2022 Last updated : 03/09/2022 # Business Critical tier - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The key reasons why you should choose Business Critical service tier instead of
- **Higher resiliency and faster recovery from failures**. In the case of a system failure, the database on the primary instance is disabled and one of the secondary replicas immediately becomes the new read-write primary database, ready to process queries. The database engine doesn't need to analyze and redo transactions from the log file and load all data into the memory buffer. - **Advanced data corruption protection**. The Business Critical tier leverages database replicas behind-the-scenes for business continuity purposes, and so the service also then leverages automatic page repair, which is the same technology used for SQL Server database [mirroring and availability groups](/sql/sql-server/failover-clusters/automatic-page-repair-availability-groups-database-mirroring). In the event that a replica can't read a page due to a data integrity issue, a fresh copy of the page will be retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality is applicable in the General Purpose tier if the database has a geo-secondary replica. - **Higher availability** - The Business Critical tier in Multi-AZ configuration provides resiliency to zonal failures and a higher availability SLA.-- **Fast geo-recovery** - The Business Critical tier configured with geo-replication has a guaranteed Recovery Point Objective (RPO) of 5 seconds and Recovery Time Objective (RTO) of 30 seconds for 100% of deployed hours.
+- **Fast geo-recovery** - When [active geo-replication](active-geo-replication-overview.md) is configured, the Business Critical tier has a guaranteed Recovery Point Objective (RPO) of 5 seconds and Recovery Time Objective (RTO) of 30 seconds for 100% of deployed hours.
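To make the RPO/RTO guarantees above concrete, here is a small illustrative calculation (not an Azure API): the observed RPO is how far the geo-secondary lagged behind the primary at failover, and the observed RTO is how long the failover took.

```python
# Illustrative only: what the documented RPO (5 s) and RTO (30 s)
# guarantees mean for an observed geo-failover. Timestamps are made up.
from datetime import datetime, timedelta

RPO_TARGET = timedelta(seconds=5)   # max data loss the tier guarantees
RTO_TARGET = timedelta(seconds=30)  # max recovery time the tier guarantees

def meets_geo_recovery_sla(last_commit_primary, last_replayed_secondary,
                           failover_started, failover_completed):
    observed_rpo = last_commit_primary - last_replayed_secondary
    observed_rto = failover_completed - failover_started
    return observed_rpo <= RPO_TARGET and observed_rto <= RTO_TARGET

t0 = datetime(2022, 3, 9, 12, 0, 0)
# Secondary was 2 s behind, failover took 20 s: within both targets.
print(meets_geo_recovery_sla(t0, t0 - timedelta(seconds=2),
                             t0, t0 + timedelta(seconds=20)))  # True
```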
## Compare Business Critical resource limits
azure-sql Data Virtualization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/data-virtualization-overview.md
WITH FULLSCAN, NORECOMPUTE
The `WITH` options are mandatory, and for the sample size, the allowed options are `FULLSCAN` and `SAMPLE n` percent. To create single-column statistics for multiple columns, execute the stored procedure for each of the columns. Multi-column statistics are not supported.
+## Troubleshooting
+
+Issues with query execution are typically caused by the managed instance being unable to access the file location. Related error messages may report insufficient access rights, a non-existent location or file path, a file being used by another process, or a directory that cannot be listed. In most cases, this indicates that access to the files is blocked by network traffic control policies or by a lack of access rights. Check the following:
+
+- Wrong or mistyped location path.
+- SAS key validity: the key may be expired (outside its validity period), contain a typo, or start with a question mark.
+- SAS key permissions: Read at minimum, and List if wildcards are used.
+- Blocked inbound traffic on the storage account. Check [Managing virtual network rules for Azure Storage](../../storage/common/storage-network-security.md?tabs=azure-portal#managing-virtual-network-rules) for more details and make sure that access from managed instance VNet is allowed.
+- Outbound traffic blocked on the managed instance using [storage endpoint policy](service-endpoint-policies-configure.md#configure-policies). Allow outbound traffic to the storage account.
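The SAS-key checks in the list above can be sketched programmatically. This is an illustrative helper, not part of any Azure SDK; it inspects the standard `se` (expiry) and `sp` (permissions) fields of a SAS token.

```python
# Illustrative helper that flags the common SAS problems listed above:
# a leading '?', an expired 'se' (expiry) value, and missing 'sp'
# permissions (Read at minimum, List when wildcards are used).
from datetime import datetime, timezone
from urllib.parse import parse_qs

def check_sas_token(sas, wildcards_used=False, now=None):
    problems = []
    if sas.startswith("?"):
        problems.append("token starts with '?'")
        sas = sas[1:]
    params = parse_qs(sas)
    now = now or datetime.now(timezone.utc)
    expiry = params.get("se", [""])[0]
    if expiry and datetime.fromisoformat(expiry.replace("Z", "+00:00")) < now:
        problems.append("token is expired")
    perms = params.get("sp", [""])[0]
    if "r" not in perms:
        problems.append("missing Read ('r') permission")
    if wildcards_used and "l" not in perms:
        problems.append("missing List ('l') permission")
    return problems

print(check_sas_token(
    "?sv=2020-08-04&se=2020-01-01T00:00:00Z&sp=r&sig=abc",
    wildcards_used=True,
    now=datetime(2022, 1, 1, tzinfo=timezone.utc),
))
```

An empty result means none of these specific problems were detected; network rules (the remaining bullets) still need to be checked separately.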
+ ## Next steps - To learn more about syntax options available with OPENROWSET, see [OPENROWSET T-SQL](/sql/t-sql/functions/openrowset-transact-sql).
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
After a disastrous event, you can continue running your read-only workloads on S
## Sign-up for link
-To use the link feature, you will need:
+To use the link feature, you'll need:
- SQL Server 2019 Enterprise Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM. - Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or Express route. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets.
Use the following link to sign-up for the limited preview of the link feature.
The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is based on distributed availability groups, part of the well-known and proven Always On availability group technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a safe and secure manner.
-There is no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
+There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
-You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if/when you are ready to migrate to Azure, the link enables a considerably-improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
+You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if/when you're ready to migrate to Azure, the link enables a considerably-improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
## Supported scenarios
There could exist up to 100 links from the same, or various SQL Server sources t
> [!NOTE] > The link feature is released in limited public preview with support for currently only SQL Server 2019 Enterprise Edition CU13 (or above). [Sign-up now](https://aka.ms/mi-link-signup) to participate in the limited public preview.
+## Limitations
+
+This section describes the product's functional limitations.
+
+### General functional limitations
+
+Managed Instance link has a set of general limitations, listed in this section. These limitations are technical in nature and are unlikely to be addressed in the foreseeable future.
+
+- Only user databases can be replicated. Replication of system databases isn't supported.
+- The solution doesn't replicate server-level objects, agent jobs, or user logins from SQL Server to Managed Instance.
+- Only one database can be placed in a single Availability Group per Distributed Availability Group link.
+- Link can't be established between SQL Server and Managed Instance if functionality used on SQL Server isn't supported on Managed Instance.
+ - File tables and file streams aren't supported for replication, as Managed Instance doesn't support this.
+ - Replicating Databases using Hekaton (In-Memory OLTP) isn't supported on Managed Instance General Purpose service tier. Hekaton is only supported on Managed Instance Business Critical service tier.
+ - For the full list of differences between SQL Server and Managed Instance, see [this article](./transact-sql-tsql-differences-sql-server.md).
+- If Change data capture (CDC), log shipping, or Service Broker is used with a database replicated on SQL Server, then after migration to Managed Instance and failover to Azure, clients will need to connect using the instance name of the current global primary replica, and you'll need to manually reconfigure these settings.
+- If Transactional Replication is used with a database replicated on SQL Server, then in a migration scenario, transactional replication on Azure SQL Managed Instance won't continue after failover to Azure. You'll need to manually reconfigure Transactional Replication.
+- If distributed transactions are used with a database replicated from SQL Server, then in a migration scenario, DTC capabilities won't be transferred on cutover to the cloud. The migrated database won't be able to participate in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances; see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
+- Managed Instance link can replicate a database of any size, provided it fits within the chosen storage size of the target Managed Instance.
+
+### Additional limitations
+
+Some Managed Instance link features and capabilities are limited **at this time**. Details can be found in the following list.
+- SQL Server 2019, Enterprise Edition or Developer Edition, CU15 (or higher) on Windows or Linux host OS is supported.
+- Private endpoint (VPN/VNET) is supported to connect Distributed Availability Groups to Managed Instance. Public endpoint can't be used to connect to Managed Instance.
+- Managed Instance link authentication between the SQL Server instance and Managed Instance is certificate-based, available only through an exchange of certificates. Windows authentication between instances isn't supported.
+- Replication of user databases from SQL Server to Managed Instance is one-way. User databases from Managed Instance can't be replicated back to SQL Server.
+- Auto-failover group replication to a secondary Managed Instance can't be used in parallel while operating the Managed Instance link with SQL Server.
+ ## Next steps For more information on the link feature, see the following:
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
+
+ Title: Managed Instance link - Use SSMS to failover database
+
+description: This tutorial teaches you how to use Managed Instance link and SSMS to failover database from SQL Server to Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/07/2022+
+# Tutorial: Perform Managed Instance link database failover with SSMS
++
+Managed Instance link is in preview.
+
+The Managed Instance link feature enables you to replicate, and optionally migrate, a database hosted on SQL Server to Azure SQL Managed Instance.
+
+Once Managed Instance link database failover is performed from SSMS, the Managed Instance link is cut. The database hosted on SQL Server becomes independent from the database on Managed Instance, and both databases can then perform read-write workloads. This tutorial covers performing Managed Instance link database failover by using the latest version of SSMS (v18.11 or newer).
+
+## Managed Instance link database failover (migration)
+
+Follow the steps described in this section to perform Managed Instance link database failover.
+
+1. Managed Instance link database failover starts with connecting to SQL Server from SSMS.
+   To perform Managed Instance link database failover and migrate the database from SQL Server to Managed Instance, open the context menu of the SQL Server database. Then select Azure SQL Managed Instance link and choose the Failover database option.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
+
+2. When the wizard starts, click Next.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-introduction.png" alt-text="Screenshot showing Introduction window.":::
+
+3. On the Log in to Azure window, sign in to your Azure account, select the subscription that hosts the Managed Instance, and click Next.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure window.":::
+
+4. On the Failover type window, select the failover type, fill in the required details and click Next.
+
+   In regular situations, choose the planned manual failover option and confirm that the workload on the SQL Server database is stopped.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type window.":::
+
+> [!NOTE]
+> If you're performing a planned manual failover, stop the workload on the database hosted on SQL Server to allow the Managed Instance link to fully catch up with replication, so that failover without data loss is possible.
+
+5. If the Availability Group and Distributed Availability Group were created only for the purpose of the Managed Instance link, you can choose to drop these objects on the Clean-up window. Dropping them is optional. Click Next.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) window.":::
+
+6. In the Summary window, you can review the upcoming process. Optionally, you can generate a script to save or execute manually. If everything is as expected and you want to proceed with the Managed Instance link database failover, click Finish.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-summary.png" alt-text="Screenshot showing Summary window.":::
+
+7. You will be able to track the progress of the process.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+
+8. Once all steps are completed, click Close.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-results.png" alt-text="Screenshot showing Results window.":::
+
+9. After this, the Managed Instance link no longer exists. The databases on SQL Server and Managed Instance are independent, and both can execute read-write workloads.
+   With this step, the migration of the database from SQL Server to Managed Instance is complete.
+
+ Database on SQL Server.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-sql-server-database.png" alt-text="Screenshot showing database on SQL Server in SSMS.":::
+
+ Database on Managed Instance.
+
+ :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-managed-instance-database.png" alt-text="Screenshot showing database on Managed Instance in SSMS.":::
+
+## Next steps
+
+For more information about Managed Instance link feature, see the following resources:
+
+- [Managed Instance link feature](./link-feature.md)
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Follow the steps described in this section to create Managed Instance link.
1. Managed Instance link database replication setup starts with connecting to SQL Server from SSMS. In the object explorer, select the database you want to replicate to Azure SQL Managed Instance. From the database's context menu, choose "Azure SQL Managed Instance link" and then "Replicate database", as shown in the screenshot below.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option for replicate database.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option for replicate database.":::
2. A wizard that takes you through the process of creating the Managed Instance link will start. Once the link is created, your source database will get a read-only replica on your target Azure SQL Managed Instance. Once the wizard starts, you'll see the Introduction window. Click Next to proceed.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-introduction.png" alt-text="Screenshot showing the introduction window for Managed Instance link replicate database wizard.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-introduction.png" alt-text="Screenshot showing the introduction window for Managed Instance link replicate database wizard.":::
3. The wizard will check the Managed Instance link requirements. If all requirements are met, you'll be able to click the Next button to continue.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-sql-server-requirements.png" alt-text="Screenshot showing SQL Server requirements window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing SQL Server requirements window.":::
4. On the Select Databases window, choose one or more databases to be replicated via the Managed Instance link. Make your selection and click Next.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-select-databases.png" alt-text="Screenshot showing Select Databases window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases window.":::
5. On the Login to Azure and select Managed Instance window, sign in to Microsoft Azure, then select the Subscription, Resource Group, and Managed Instance. Finally, provide the login details for the chosen Managed Instance.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance window.":::
6. Once all of that is populated, you'll be able to click Next.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated window.":::
7. On the Specify Distributed AG Options window, you'll see prepopulated values for the various parameters. Unless you need to customize something, you can proceed with the default options and click Next.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed AG options window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed AG options window.":::
8. On the Summary window you'll see the steps for creating the Managed Instance link. Optionally, you can generate the setup script to save it or run it yourself. Complete the wizard by clicking Finish.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-summary.png" alt-text="Screenshot showing Summary window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
9. The Executing actions window will display the progress of the process.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
10. Results window will show up once the process is completed and all steps are marked with a green check sign. At this point, you can close the wizard.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-replicate-database-wizard-results.png" alt-text="Screenshot showing Results window.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-results.png" alt-text="Screenshot showing Results window.":::
11. With this, the Managed Instance link has been created and the chosen databases are being replicated to the Managed Instance. In Object Explorer, you'll see that the source database hosted on SQL Server is now in the "Synchronized" state. Also, under Always On High Availability > Availability Groups, you'll see that an Availability Group and a Distributed Availability Group have been created for the Managed Instance link.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-ssms-sql-server-database.png" alt-text="Screenshot showing the state of SQL Server database and Availability Group and Distributed Availability Group in SSMS.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-sql-server-database.png" alt-text="Screenshot showing the state of SQL Server database and Availability Group and Distributed Availability Group in SSMS.":::
We can also see a new database under the target Managed Instance. Depending on the database size and network speed, you may initially see the database on the Managed Instance side in the "Restoring" state. Once the seeding from SQL Server to Managed Instance is done, the database will be ready for read-only workloads and visible as in the screenshot below.
- :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/managed-instance-link-ssms-managed-instance-database.png" alt-text="Screenshot showing the state of Managed Instance database.":::
+ :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-managed-instance-database.png" alt-text="Screenshot showing the state of Managed Instance database.":::
## Next steps
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
To restore files or folders from the recovery point, go to the virtual machine a
4. From the **Select recovery point** drop-down menu, select the recovery point that holds the files you want. By default, the latest recovery point is already selected.
-5. Select **Download Executable** (for Windows Azure VMs) or **Download Script** (for Linux Azure VMs, a python script is generated) to download the software used to copy files from the recovery point.
+5. Select **Download Executable** (for Windows Azure VMs) or **Download Script** (for Linux Azure VMs, a Python script is generated) to download the software used to copy files from the recovery point.
![Download Executable](./media/backup-azure-restore-files-from-vm/download-executable.png)
The script also requires Python and bash components to execute and connect secur
|Component | Version | | | - | | bash | 4 and above |
-| python | 2.6.6 and above |
+| Python | 2.6.6 and above |
| .NET | 4.6.2 and above | | TLS | 1.2 should be supported |
If the file recovery process hangs after you run the file-restore script (for ex
### For Linux
-After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script), [Step 3](#step-3-os-requirements-to-successfully-run-the-script) and [Step 4](#step-4-access-requirements-to-successfully-run-the-script), generate a python script for Linux machines. See [Step 1 to learn how to generate and download script](#step-1-generate-and-download-script-to-browse-and-recover-files). Download the script and copy it to the relevant/compatible Linux server. You may have to modify the permissions to execute it with ```chmod +x <python file name>```. Then run the python file with ```./<python file name>```.
+After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script), [Step 3](#step-3-os-requirements-to-successfully-run-the-script) and [Step 4](#step-4-access-requirements-to-successfully-run-the-script), generate a Python script for Linux machines. See [Step 1 to learn how to generate and download script](#step-1-generate-and-download-script-to-browse-and-recover-files). Download the script and copy it to the relevant/compatible Linux server. You may have to modify the permissions to execute it with ```chmod +x <python file name>```. Then run the Python file with ```./<python file name>```.
In Linux, the volumes of the recovery point are mounted to the folder where the script is run. The attached disks, volumes, and the corresponding mount paths are shown accordingly. These mount paths are visible to users having root level access. Browse through the volumes mentioned in the script output.
backup Backup Azure Security Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature.md
Title: Security features that protect hybrid backups
description: Learn how to use security features in Azure Backup to make backups more secure
Previously updated : 11/02/2021
Last updated : 03/08/2022
# Security features to help protect hybrid backups that use Azure Backup
-Concerns about security issues, like malware, ransomware, and intrusion, are increasing. These security issues can be costly, in terms of both money and data. To guard against such attacks, Azure Backup now provides security features to help protect hybrid backups. This article covers how to enable and use these features, by using an Azure Recovery Services agent and Azure Backup Server. These features include:
+Concerns about security issues, like malware, ransomware, and intrusion, are increasing. These security issues can be costly, in terms of both money and data. To guard against such attacks, Azure Backup now provides security features to help protect hybrid backups. This article covers how to enable and use these features to protect on-premises workloads by using **Microsoft Azure Backup Server (MABS)**, **Data Protection Manager (DPM)**, and the **Microsoft Azure Recovery Services (MARS) agent**. These features include:
- **Prevention**. An additional layer of authentication is added whenever a critical operation like changing a passphrase is performed. This validation is to ensure that such operations can be performed only by users who have valid Azure credentials.
- **Alerting**. An email notification is sent to the subscription admin whenever a critical operation like deleting backup data is performed. This email ensures that the user is notified quickly about such actions.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-protection-matrix.md
Title: MABS (Azure Backup Server) V3 UR1 protection matrix
description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects.
Previously updated : 02/15/2022
Last updated : 03/09/2022
The following sections detail the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
| -- | -- | -- | -- | -- |
| Client computers (64-bit) | Windows 11, Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
-| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
+| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012 R2, 2012 <br /><br />(Including Windows Server Core edition) | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
| Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file, system state/bare metal |
| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | All deployment scenarios: database <br><br> MABS v3 UR2 and later supports the backup of SQL database, stored on the Cluster Shared Volume. <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
MABS doesn't support protecting the following data types:
## Next steps
-* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
+* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
if (-not $is_emulated){
The **bin\LaunchWorker.ps1** file was originally created to do a lot of prep work, but it doesn't work as intended. Replace the contents in that file with the following script.
-This script calls the **worker.py** file from your python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.8 is used.
+This script calls the **worker.py** file from your Python project. If the **PYTHON2** environment variable is set to **on**, then Python 2.7 is used, otherwise Python 3.8 is used.
```powershell $is_emulated = $env:EMULATED -eq "true"
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-transcription.md
The Speech service works well with built-in models. However, you might want to f
## Sample code
-Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition, and working with custom models. To view SDK and REST samples, see:
+Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. To view SDK and REST samples, see:
- [Speech-to-text and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
Follow these guidelines when preparing audio.
| File format | RIFF (.wav), grouped into a .zip file |
| File name | Numeric, with .wav extension. No duplicate file names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
-| Sample format | PCM, 16-bit |
+| Sample format | PCM, at least 16-bit |
| Audio length | Shorter than 15 seconds |
| Archive format | .zip |
| Maximum archive size | 2048 MB |
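A quick way to pre-check clips against these limits is the Python standard library's `wave` module; this sketch (the function name is illustrative, not part of the Speech service tooling) validates the sampling rate, sample width, and length:

```python
import io
import wave

def check_training_audio(wav_bytes):
    """Check a RIFF (.wav) clip against the custom neural voice guidelines:
    24,000 Hz sampling rate, at least 16-bit PCM samples, shorter than 15 seconds."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        rate, width, frames = w.getframerate(), w.getsampwidth(), w.getnframes()
    problems = []
    if rate != 24000:
        problems.append(f"sampling rate is {rate} Hz, expected 24000 Hz")
    if width < 2:  # sample width in bytes; 2 bytes == 16-bit
        problems.append(f"sample width is {8 * width}-bit, expected at least 16-bit")
    if frames / rate >= 15:
        problems.append(f"length is {frames / rate:.1f} s, expected under 15 s")
    return problems

# Build a 1-second silent 24 kHz 16-bit mono clip in memory to exercise the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(24000)
    w.writeframes(b"\x00\x00" * 24000)
print(check_training_audio(buf.getvalue()))
```

An empty list means the clip passes all three checks; otherwise each problem is reported.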
Follow these guidelines when preparing audio for segmentation.
| Property | Value |
| -- | -- |
-| File format | RIFF (.wav) with a sampling rate of at least 16 khz-16-bit in PCM or .mp3 with a bit rate of at least 256 KBps, grouped into a .zip file |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
| File name | ASCII and Unicode characters supported. No duplicate names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
| Audio length | Longer than 20 seconds |
| Archive format | .zip |
| Maximum archive size | 2048 MB |
Follow these guidelines when preparing audio.
| Property | Value |
| -- | -- |
-| File format | RIFF (.wav) with a sampling rate of at least 16 khz-16-bit in PCM or .mp3 with a bit rate of at least 256 KBps, grouped into a .zip file |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
| File name | ASCII and Unicode characters supported. No duplicate names allowed. |
| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
| Audio length | No limit |
| Archive format | .zip |
| Maximum archive size | 2048 MB |
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
Instead of adding individual intents, you can also use the `AddAllIntents` metho
## Start recognition
-With the recognizer created and the intents added, recognition can begin. The Speech SDK supports both at-start and continuous recognition.
+With the recognizer created and the intents added, recognition can begin. The Speech SDK supports both single-shot and continuous recognition.
| Recognition mode | Methods to call | Result |
| - | - | - |
-| At-start | `RecognizeOnceAsync()` | Returns the recognized intent, if any, after one utterance. |
+| Single-shot | `RecognizeOnceAsync()` | Returns the recognized intent, if any, after one utterance. |
| Continuous | `StartContinuousRecognitionAsync()`<br>`StopContinuousRecognitionAsync()` | Recognizes multiple utterances; emits events (for example, `IntermediateResultReceived`) when results are available. |
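The difference between the two modes can be illustrated with a small Python mock (the class and method names are stand-ins, not the Speech SDK API):

```python
# Hypothetical stand-in for a speech recognizer, to contrast the two modes;
# the real SDK uses RecognizeOnceAsync / StartContinuousRecognitionAsync.
class MockRecognizer:
    def __init__(self, utterances):
        self._utterances = list(utterances)
        self.recognized = []   # continuous-mode "event" sink

    def recognize_once(self):
        # Single-shot: returns one result after the first utterance.
        return self._utterances[0] if self._utterances else None

    def start_continuous_recognition(self):
        # Continuous: emits an event per utterance until stopped.
        for u in self._utterances:
            self.recognized.append(u)

r = MockRecognizer(["turn on the lights", "set volume to five"])
print(r.recognize_once())   # single-shot: only the first utterance
r.start_continuous_recognition()
print(len(r.recognized))    # continuous: every utterance produces an event
```

Single-shot returns after one utterance; continuous keeps emitting results until recognition is stopped.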
-The application uses at-start mode and so calls `RecognizeOnceAsync()` to begin recognition. The result is an `IntentRecognitionResult` object containing information about the intent recognized. You extract the LUIS JSON response by using the following expression:
+The application uses single-shot mode and so calls `RecognizeOnceAsync()` to begin recognition. The result is an `IntentRecognitionResult` object containing information about the intent recognized. You extract the LUIS JSON response by using the following expression:
```csharp result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Previously updated : 11/23/2020 Last updated : 03/09/2022
After you've had a chance to get started with the Speech service, try our tutori
## Get sample code
-Sample code is available on GitHub for the Speech service. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models. Use these links to view SDK and REST samples:
+Sample code is available on GitHub for the Speech service. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. Use these links to view SDK and REST samples:
- [Speech-to-text, text-to-speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
Other products offer speech models tuned for specific purposes, like healthcare
## Deploy on-premises by using Docker containers
-[Use Speech service containers](speech-container-howto.md) to deploy API features on-premises. By using these Docker containers, you can bring the service closer to your data for compliance, security, or other operational reasons. The Speech service offers the following containers:
-
-* Standard Speech-to-Text
-* Custom Speech-to-Text
-* Prebuilt Neural Text-to-Speech
-* Custom Neural Text-to-Speech (preview)
-* Speech Language Identification (preview)
+[Use Speech service containers](speech-container-howto.md) to deploy API features on-premises. By using these Docker containers, you can bring the service closer to your data for compliance, security, or other operational reasons.
## Reference docs
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
|--|--|--|
-| *Max number of transactions per second (TPS) per Speech service resource* | | |
-| Real-time API. Prebuilt neural voices and custom neural voices. | 200<sup>4</sup> | 200<sup>4</sup> |
+| **Max number of transactions per second (TPS) per Speech service resource** | | |
+| Real-time API. Prebuilt neural voices and custom neural voices. | 20 per 60 seconds | 200<sup>4</sup> |
| Adjustable | No<sup>4</sup> | Yes<sup>4</sup> |
-| *HTTP-specific quotas* | | |
+| **HTTP-specific quotas** | | |
| Max audio length produced per request | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| *Websocket specific quotas* | | |
+| **Websocket specific quotas** | | |
| Max audio length produced per turn | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
| Max SSML message size per turn | 64 KB | 64 KB |
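To stay under a per-window transaction quota such as the Free-tier limit of 20 requests per 60 seconds, a client can throttle itself before sending. This is a minimal Python sketch of a sliding-window limiter, not part of any Speech SDK:

```python
from collections import deque

class SlidingWindowLimiter:
    """Client-side throttle sketch: allow at most `limit` requests per
    `window` seconds. The Free (F0) quota above would be (20, 60.0)."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self._sent = deque()   # timestamps of requests inside the window

    def try_acquire(self, now):
        # Drop timestamps that have aged out of the window.
        while self._sent and now - self._sent[0] >= self.window:
            self._sent.popleft()
        if len(self._sent) < self.limit:
            self._sent.append(now)
            return True
        return False           # caller should wait or back off

f0 = SlidingWindowLimiter(20, 60.0)
# 30 requests arriving every half second: only 20 fit in the 60-second window.
allowed = sum(f0.try_acquire(now=0.5 * i) for i in range(30))
print(allowed)
```

Requests that fail `try_acquire` should be delayed rather than sent, which avoids server-side 429 throttling.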
In the following tables, the parameters without the **Adjustable** row aren't ad
| Default value | N/A | 10 |
| Adjustable | N/A | Yes<sup>5</sup> |
+#### Audio Content Creation tool
+
+| Quota | Free (F0)| Standard (S0) |
+|--|--|--|
+| File size | 3,000 characters per file | 20,000 characters per file |
+| Export to audio library | 1 concurrent task | N/A |
+ <sup>3</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/> <sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit-for-custom-neural-voices).<br/>
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
To get started with speech-to-text, see the [quickstart](get-started-speech-to-t
## Sample code
-Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models:
+Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models:
- [Speech-to-text samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
As your first step, see [Get started with speech translation](get-started-speech
## Sample code
-You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition and translation, and working with custom models.
+You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models.
## Migration guides
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
+Release note for `3.0.1-amd64-<locale>`:
+
+**Features**
+* Support new locale `uk-UA`.
+
Release note for `3.0.0-amd64-<locale>`:

**Features**
Note that due to the phrase lists feature, the size of this container image has
| Image Tags | Notes |
|-|:--|
| `latest` | Container image with the `en-US` locale. |
-| `3.0.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.0.0-amd64-en-us`.|
+| `3.0.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.0.0-amd64-en-us`. |
This container has the following locales available.
+| Locale for v3.0.1 | Notes | Digest |
+|--|:--|:--|
+| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:af8c370a7ed3e231a611ea37053e809fa7e52ea514c70f4c85f133c7b28a4fba` |
| Locale for v3.0.0 | Notes | Digest |
|--|:--|:--|
| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
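Assembling a tag from the `<version>-amd64-<locale>` pattern in the tables above can be sketched as follows (an illustrative helper, not a Microsoft tool):

```python
def image_tag(version, locale):
    """Build a speech-to-text container tag such as `3.0.1-amd64-uk-ua`.
    Locales in the tag are lowercase, so `uk-UA` becomes `uk-ua`."""
    return f"{version}-amd64-{locale.lower()}"

print(image_tag("3.0.1", "uk-UA"))  # → 3.0.1-amd64-uk-ua
```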
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
az containerapp env create `
> [!NOTE]
-> As you call `az conatinerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
+> As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
The following table describes the parameters used for `containerapp env create`.
az group delete `
## Next steps

> [!div class="nextstepaction"]
-> [Managing autoscaling behavior](scale-app.md)
+> [Managing autoscaling behavior](scale-app.md)
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 02/17/2022 Last updated : 03/09/2022
Consider the following when developing your application:
Don't verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before doing an item operation. Do this validation only on application startup, and only when necessary (for example, if you expect them to be deleted; otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that don't scale like data operations.
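The startup-only validation pattern can be sketched with a hypothetical stand-in client (the real Cosmos DB SDK calls differ; this only illustrates where the metadata call belongs):

```python
# Sketch of the guidance above: resolve the container once at startup and reuse
# the reference, instead of calling Create...IfNotExistsAsync on every request.
class FakeCosmosClient:
    """Hypothetical stand-in for a Cosmos DB client."""
    def __init__(self):
        self.metadata_calls = 0

    def create_container_if_not_exists(self, name):
        self.metadata_calls += 1   # metadata operation: no SLA, separate limits
        return f"container:{name}"

class App:
    def __init__(self, client):
        # Startup: the only place the metadata operation runs.
        self._container = client.create_container_if_not_exists("orders")

    def upsert(self, item):
        # Hot path: uses the cached container reference, no metadata round trip.
        return (self._container, item)

client = FakeCosmosClient()
app = App(client)
for i in range(100):
    app.upsert({"id": i})
print(client.metadata_calls)  # → 1
```

A hundred item operations still cost exactly one metadata call, because the container reference is resolved once and cached.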
+## Slow requests on bulk mode
+
+[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput-optimized mode meant for high data volume operations, not a latency-optimized mode; it's meant to saturate the available throughput. If you're experiencing slow requests when using bulk mode, make sure that:
+
+* Your application is compiled in Release configuration.
+* You are not measuring latency while debugging the application (no debuggers attached).
+* The volume of operations is high; don't use bulk for fewer than 1,000 operations. Your provisioned throughput dictates how many operations per second you can process; your goal with bulk is to use as much of it as possible.
+* Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is getting heavily throttled, the volume of data is larger than your provisioned throughput; you need to either scale up the container or reduce the volume of data (for example, create smaller batches of data at a time).
+* You are correctly using the `async/await` pattern to process all concurrent Tasks and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
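The async/await point above can be illustrated with a self-contained Python `asyncio` sketch; `fake_upsert` is a placeholder for a real bulk-mode item operation:

```python
import asyncio

# Sketch of the async/await guidance above: dispatch a high volume of
# operations concurrently and await them together, rather than blocking
# on each one with a synchronous wait.
async def fake_upsert(item):
    await asyncio.sleep(0)          # simulated I/O; never block with .result()/.wait()
    return item["id"]

async def ingest(items):
    tasks = [asyncio.create_task(fake_upsert(i)) for i in items]
    return await asyncio.gather(*tasks)   # all tasks in flight concurrently

ids = asyncio.run(ingest([{"id": n} for n in range(1000)]))
print(len(ids))  # → 1000
```

Keeping every operation as an awaited task lets the SDK batch and saturate throughput; blocking on individual operations serializes them and defeats bulk mode.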
+
## <a name="capture-diagnostics"></a>Capture the diagnostics

All the responses in the SDK including `CosmosException` have a Diagnostics property. This property records all the information related to the single request including if there were retries or any transient failures.
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 12/06/2021 Last updated : 03/09/2022
An account administrator without the service administrator or subscription owner
If you purchased your support plan through the Azure website or the Azure portal, or if you have one under a Microsoft Customer Agreement, you can cancel the support plan yourself. If you purchased your support plan through a Microsoft representative or partner, contact them for assistance.
+### Cancel a support plan bought from the Azure portal
+
+1. In the Azure portal, navigate to **Cost Management + Billing**.
+1. On the Overview page, find your plan and then select it.
+1. On the support plan page, select **Cancel**.
+1. In the Cancel support window, verify that you want to cancel and select **Yes, cancel**.
+ :::image type="content" source="./media/cancel-azure-subscription/cancel-legacy-support-plan.png" alt-text="Screenshot showing the legacy Cancel support plan page." lightbox="./media/cancel-azure-subscription/cancel-legacy-support-plan.png" :::
+
+### Cancel a support plan for a Microsoft Customer Agreement
+
1. In the Azure portal, navigate to **Cost Management + Billing**.
1. Under **Billing**, select **Recurring charges**.
1. On the right-hand side for the support plan line item, select the ellipsis (**...**) and select **Turn off auto-renewal**.
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
By default, the following users can view and manage reservations:
Currently, the reservation administrator and reservation reader roles are only available to assign using PowerShell. They can't be viewed or assigned in the Azure portal. For more information, see [Grant access with PowerShell](#grant-access-with-powershell).
-The reservation administrator and reservation reader roles provide access to only reservations and not to reservation orders, hence any operation that requires to have access to reservation order is not permitted with these roles. For providing access to reservation orders, see [Grant access to individual reservations](#grant-access-to-individual-reservations).
-
The reservation lifecycle is independent of an Azure subscription, so the reservation isn't a resource under the Azure subscription. Instead, it's a tenant-level resource with its own Azure RBAC permission separate from subscriptions. Reservations don't inherit permissions from subscriptions after the purchase.

## View and manage reservations
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
If you select the **Install licensed component** type for your express custom se
* If you select the **oh22's SQLPhonetics.NET** component, you can install the [SQLPhonetics.NET](https://appsource.microsoft.com/product/web-apps/oh22.sqlphonetics-ssis) data quality/matching component from oh22 on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **1.0.45**.
-* If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **20.2**.
+* If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **21.2**.
-* If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **20.2**.
+* If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **21.2**.
* If you select the **Theobald Software's Xtract IS** component, you can install the [Xtract IS](https://theobald-software.com/en/xtract-is/) suite of connectors for SAP system (ERP, S/4HANA, BW) from Theobald Software on your Azure-SSIS IR by dragging & dropping/uploading the product license file that you purchased from them into the **License file** box. The current integrated version is **6.5.13.18**. * If you select the **AecorSoft's Integration Service** component, you can install the [Integration Service](https://www.aecorsoft.com/en/products/integrationservice) suite of connectors for SAP and Salesforce systems from AecorSoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **3.0.00**.
-* If you select the **CData's SSIS Standard Package** component, you can install the [SSIS Standard Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#standard) suite of most popular components from CData, such as Microsoft SharePoint connectors, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**.
+* If you select the **CData's SSIS Standard Package** component, you can install the [SSIS Standard Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#standard) suite of the most popular components from CData, such as Microsoft SharePoint connectors, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **21.7867**.
-* If you select the **CData's SSIS Extended Package** component, you can install the [SSIS Extended Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#extended) suite of all components from CData, such as Microsoft Dynamics 365 Business Central connectors and other components in their **SSIS Standard Package**, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**. Due to its large size, to avoid installation timeout, please ensure that your Azure-SSIS IR has at least 4 CPU cores per node.
+* If you select the **CData's SSIS Extended Package** component, you can install the [SSIS Extended Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#extended) suite of all components from CData, such as Microsoft Dynamics 365 Business Central connectors and other components in their **SSIS Standard Package**, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **21.7867**. Due to its large size, to avoid installation timeout, please ensure that your Azure-SSIS IR has at least 4 CPU cores per node.
Your added express custom setups will appear on the **Advanced settings** page. To remove them, select their check boxes, and then select **Delete**.
data-factory Monitor Shir In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-shir-in-azure.md
By default, the Self Hosted Integration Runtime's diagnostic and performance t
## Event logs
-When logged on locally to the Self Hosted Integration Runtime, specific events can be viewed using the [event viewer](/windows/win32/eventlog/viewing-the-event-log.md). The relevant events are captured in two event viewer journals named: **Connectors – Integration Runtime** and **Integration Runtime** respectively. While it's possible to log on to the Self Hosted Integration Runtime hosts individually to view these events, it's also possible to stream these events to a Log Analytics workspace in Azure Monitor for ease of query and centralization purposes.
+When logged on locally to the Self Hosted Integration Runtime, specific events can be viewed using the [event viewer](/windows/win32/eventlog/viewing-the-event-log). The relevant events are captured in two event viewer journals named: **Connectors – Integration Runtime** and **Integration Runtime** respectively. While it's possible to log on to the Self Hosted Integration Runtime hosts individually to view these events, it's also possible to stream these events to a Log Analytics workspace in Azure Monitor for ease of query and centralization purposes.
## Performance counters
-Performance counters in Windows and Linux provide insight into the performance of hardware components, operating systems, and applications such as the Self Hosted Integration Runtime. The performance counters can be viewed and collected locally on the VM using the performance monitor tool. See the article on [using performance counters](/windows/win32/perfctrs/using-performance-counters.md) for more details.
+Performance counters in Windows and Linux provide insight into the performance of hardware components, operating systems, and applications such as the Self Hosted Integration Runtime. The performance counters can be viewed and collected locally on the VM using the performance monitor tool. See the article on [using performance counters](/windows/win32/perfctrs/using-performance-counters) for more details.
## Centralize log collection and analysis
When a deployment requires a more in-depth level of analysis or has reached a ce
- [How to configure SHIR for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md)
- [Review integration runtime concepts in Azure Data Factory.](concepts-integration-runtime.md)
-- Learn how to [create a self-hosted integration runtime in the Azure portal.](create-self-hosted-integration-runtime.md)
+- Learn how to [create a self-hosted integration runtime in the Azure portal.](create-self-hosted-integration-runtime.md)
databox-online Azure Stack Edge Gpu Create Certificates Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-certificates-tool.md
Use these steps to prepare the Azure Stack Edge Pro device certificates:
|Input |Description |
|---|---|
|`OutputRequestPath`|The file path on your local client where you want the certificate requests to be created. |
- |`DeviceName`|The name of your device in the **Devices** page in the local web UI of your device. <br> This field isn't required for a VPN certificate. |
- |`NodeSerialNumber`|The serial number of the device node in the **Network** page in the local web UI of your device. <br> This field isn't required for a VPN certificate. |
- |`ExternalFQDN`|The DNSDomain value in the **Devices** page in the local web UI of your device. |
+ |`DeviceName`|The name of your device in the **Device** page in the local web UI of your device. <br> This field isn't required for a VPN certificate. |
+ |`NodeSerialNumber`|The `Node serial number` of the device node shown on the **Overview** page in the local web UI of your device. <br> This field isn't required for a VPN certificate. |
+ |`ExternalFQDN`|The `DNS domain` value in the **Device** page in the local web UI of your device. |
|`RequestType`|The request type can be for `MultipleCSR` - different certificates for the various endpoints, or `SingleCSR` - a single certificate for all the endpoints. <br> This field isn't required for a VPN certificate. |

For all the certificates except the VPN certificate, type:
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
The integration between Microsoft Defender for Endpoint and Microsoft Defender f
- **To manually onboard one or more machines** to threat and vulnerability management, use the security recommendation "[Machines should have a vulnerability assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)":
- :::image type="content" source="media/deploy-vulnerability-assessment-tvm/deploy-vulnerability-assessment-solutions.png" alt-text="Selecting a vulnerability assessment solution from the recommendation":::
+ :::image type="content" source="media/deploy-vulnerability-assessment-tvm/deploy-vulnerability-assessment-solutions.png" alt-text="Selecting a vulnerability assessment solution from the recommendation.":::
- **To automatically find and view the vulnerabilities** on existing and new machines without the need to manually remediate the preceding recommendation, see [Automatically configure vulnerability assessment for your machines](auto-deploy-vulnerability-assessment.md).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud
description: Monitoring your GCP resources from Microsoft Defender for Cloud
Previously updated : 02/23/2022
Last updated : 03/09/2022
zone_pivot_groups: connect-gcp-accounts
To have full visibility to Microsoft Defender for Servers security content, ensu
- Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
- Azure Arc for servers installed on your VM instances.
- - **(Recommended) Auto-provisioning** - Auto-provisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. Arc auto-provisioning process is using the OS config agent on GCP end. Additional information on Arc auto-provisioning can be found in the [OS config agent on the GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager) article.
+ - **(Recommended) Auto-provisioning** - Auto-provisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. The Arc auto-provisioning process uses the OS config agent on the GCP end. Learn more about the [OS config agent availability on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
+
+ > [!NOTE]
+ > The Arc auto-provisioning process leverages the VM manager on your Google Cloud Platform to enforce policies on your VMs through the OS config agent. A VM with an [Active OS agent](https://cloud.google.com/compute/docs/manage-os#agent-state) will incur a cost according to GCP. Refer to [GCP's technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing) to see how this may affect your account.
+ > <br><br> Microsoft Defender for Servers does not install the OS config agent on a VM that does not have it installed. However, Microsoft Defender for Servers will enable communication between the OS config agent and the OS config service if the agent is already installed but not communicating with the service.
+ > <br><br> This can change the OS config agent from `inactive` to `active`, and will lead to additional costs.
+ - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines.
- The following extensions should be enabled on the Arc-connected machines according to your needs:
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Malware engine alerts describe detected malicious network activity.
| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices. The file isn't malware. It's used to confirm that the antivirus software is installed correctly; demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
+| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly; demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may be a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
The following predefined reports are available. These queries are generated in r
- **Programming commands**: Devices that send industrial programming.
- **Remote access**: Devices that communicate through remote session protocols.
- **Internet activity**: Devices that are connected to the internet.
-- **CVEs**: A list of devices detected with known vulnerabilities within the last 24 hours.
-- **Excluded CVEs**: A list of all the CVEs that were manually excluded. To achieve more accurate results in VA reports and attack vectors, you can customize the CVE list manually by including and excluding CVEs.
+- **CVEs**: A list of devices detected with known vulnerabilities, along with CVSSv2 risk scores.
+- **Excluded CVEs**: A list of all the CVEs that were manually excluded. You can customize the CVE list manually, by excluding or including particular CVEs and updating their CVSSv2 scores, so that the VA reports and attack vectors more accurately reflect your network.
- **Nonactive devices**: Devices that have not communicated for the past seven days.
- **Active devices**: Active network devices within the last 24 hours.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
The following rack mount appliances are available:
| **Model** | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
| **Monitoring ports** | up to 15 RJ45 or 8 OPT | up to 8 RJ45 or 6 OPT | up to 4 RJ45 | Up to 5 RJ45 |
| **Max Bandwidth\*** | 3 Gb/Sec | 1 Gb/Sec | 200 Mb/Sec | 100 Mb/Sec |
-| **Max Protected Devices** | 10,000 | 10,000 | 1,000 | 800 |
+| **Max Protected Devices** | 12,000 | 10,000 | 1,000 | 800 |
*Maximum bandwidth capacity might vary depending on protocol distribution.
The following virtual appliances are available:
|--|--|--|--|
| **Description** | Virtual appliance for corporate deployments | Virtual appliance for enterprise deployments | Virtual appliance for SMB deployments |
| **Max Bandwidth\*** | 2.5 Gb/Sec | 800 Mb/sec | 160 Mb/sec |
-| **Max protected devices** | 10,000 | 10,000 | 800 |
+| **Max protected devices** | 12,000 | 10,000 | 800 |
| **Deployment Type** | Corporate | Enterprise | SMB |

*Maximum bandwidth capacity might vary depending on protocol distribution.
event-hubs Move Cluster Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/move-cluster-across-regions.md
Title: Move an Azure Event Hubs dedicated cluster to another region | Microsoft Docs
description: This article shows you how to move an Azure Event Hubs dedicated cluster from the current region to another region.
Previously updated : 09/01/2020
Last updated : 03/09/2022

# Move an Azure Event Hubs dedicated cluster to another region
To get started, export a Resource Manager template. This template contains setti
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **All resources** and then select your Event Hubs dedicated cluster.
-3. Select > **Settings** > **Export template**.
+3. On the **Event Hubs Cluster** page, select **Export template** in the **Automation** section on the left menu.
4. Choose **Download** in the **Export template** page.

   :::image type="content" source="./media/move-cluster-across-regions/download-template.png" alt-text="Download Resource Manager template" lightbox="./media/move-cluster-across-regions/download-template.png":::
Deploy the template to create an Event Hubs dedicated cluster in the target regi
1. In the Azure portal, select **Create a resource**.
2. In **Search the Marketplace**, type **template deployment**, and select **Template deployment (deploy using custom templates)**.
-5. Select **Build your own template in the editor**.
-6. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+1. On the **Template deployment** page, select **Create**.
+1. Select **Build your own template in the editor**.
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
1. Update the value of the `location` property to point to the new region. To obtain location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, for example, `West US` is equal to `westus`.
1. Select **Save** to save the template.
1. On the **Custom deployment** page, follow these steps:
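The `location` update described above can also be scripted. The sketch below is illustrative only: the template structure is trimmed to the relevant field, and `West US` is just an example target region.

```python
import json

# A trimmed example of an exported Resource Manager template (structure assumed
# for illustration; a real exported template has many more properties).
template = {
    "resources": [
        {
            "type": "Microsoft.EventHub/clusters",
            "name": "my-dedicated-cluster",
            "location": "East US",
        }
    ]
}

# The location code is the region name with no spaces: "West US" -> "westus".
new_region = "West US".replace(" ", "").lower()
for resource in template["resources"]:
    resource["location"] = new_region

print(json.dumps(template, indent=2))
```

The same transformation applies if you edit `template.json` on disk before loading it into the portal's template editor.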
event-hubs Store Captured Data Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/store-captured-data-data-warehouse.md
Title: 'Tutorial: Migrate event data to Azure Synapse Analytics - Azure Event Hubs'
description: Describes how to use Azure Event Grid and Functions to migrate Event Hubs captured data to Azure Synapse Analytics.
Previously updated : 12/07/2020
Last updated : 03/08/2022
expressroute Circuit Placement Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/circuit-placement-api.md
The ExpressRoute partner can list all port pairs within the target provider subs
### To get a list of all port pairs for a provider
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts
-
-#### Get Operation
- ```rest
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts
{ "parameters": { "api-version": "2020-03-01",
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.
* 200 (OK) – The request was successful. It will fetch the list of ports.
* 4XX (Bad Request) – One of the validations failed; for example, the provider subscription ID isn't valid.
-### List of all port for a provider for a particular peering location
-
-#### GET
-
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?location={locationName}
-
-#### GET Operation
+### To get a list of all port pairs by location
```rest
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?location={locationName}
{ "parameters": { "api-version": "2020-03-01",
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.
* 200 (OK) – The request was successful. It will fetch the list of ports.
* 4XX (Bad Request) – One of the validations failed; for example, the provider subscription ID isn't valid or the location isn't valid.
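Composing the GET URL for the by-location listing above can be sketched as follows; the subscription ID and peering location are placeholder values, and the URL shape simply mirrors the endpoint shown in this section:

```python
# Placeholder values for illustration only.
subscription_id = "00000000-0000-0000-0000-000000000000"
location = "westus"

# Endpoint shape as documented above:
# /subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts?location={locationName}
url = (
    "https://management.azure.com/subscriptions/"
    f"{subscription_id}/providers/Microsoft.Network/expressRouteProviderPorts"
    f"?location={location}"
)
print(url)
```

An authenticated client (for example, one using an Azure AD bearer token) would issue a GET against this URL with the `api-version` query parameter required by the service.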
-To get port details of a particular port using port pair descriptor ID.
-
-#### GET
+### To get a specific port pair using the port pair descriptor ID
+```rest
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteProviderPorts/{portPairDescriptor}
-#### GET Operation
-
-```rest
{ "parameters": { "api-version": "2020-03-01",
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.
* 204 – The port pair with the mentioned descriptor ID isn't available.
* 4XX (Bad Request) – One of the validations failed; for example, the provider subscription ID isn't valid.
-### PUT expressRouteCrossConnection API to move a circuit to a specific port pair
+### Move a target ExpressRoute Circuit to a specific port pair
Once the portPairDescriptor of the target port pair is identified, the ExpressRoute partner can use the [ExpressRouteCrossConnection API](/rest/api/expressroute/express-route-cross-connections/create-or-update) to move the ExpressRoute circuit to a specific port pair.
Currently, this API is used by providers to update the provisioning state of the circuit.
Currently the primaryAzurePort and secondaryAzurePort are read-only properties. Now we've disabled the read-only properties for these ports.
-#### PUT
-
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCrossConnections/{crossConnectionName}?api-version=2021-02-01
-
-#### PUT Operation
- ```rest
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCrossConnections/{crossConnectionName}?api-version=2021-02-01
{ "parameters": { "api-version": "2020-03-01",
Response:
## Next steps
-For more information on all ExpressRoute REST APIs, see [ExpressRoute REST APIs](/rest/api/expressroute/).
+For more information on all ExpressRoute REST APIs, see [ExpressRoute REST APIs](/rest/api/expressroute/).
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 03/01/2022 Last updated : 03/09/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Standard has the following known issues:
|DNAT rule for allow *any* (*) will SNAT traffic.|If a DNAT rule allows *any* (*) as the Source IP address, then an implicit Network rule will match VNet-VNet traffic and will always SNAT the traffic.|This is a current limitation.|
|Adding a DNAT rule to a secured virtual hub with a security provider is not supported.|This results in an asynchronous route for the returning DNAT traffic, which goes to the security provider.|Not supported.|
| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. |
-|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|A feature is being investigated to support this.|
+|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|Network rule name logging is in preview. For more information, see [Azure Firewall preview features](firewall-preview.md#network-rule-name-logging-preview).|
|XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
| Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated.|
|Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Title: Azure Front Door - caching | Microsoft Docs
description: This article helps you understand behavior for Front Door with routing rules that have enabled caching.
Previously updated : 09/13/2021
Last updated : 03/08/2022
+zone_pivot_groups: front-door-tiers
# Caching with Azure Front Door
+In this article, you'll learn how Front Door Standard/Premium (Preview) Routes and Rule set behave when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
+
+## Request methods
+
+Only the GET request method can generate cached content in Azure Front Door. All other request methods are always proxied through the network.
The following document specifies behaviors for Front Door with routing rules that have enabled caching. Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing; it also supports caching behaviors just like any other CDN.

## Delivery of large files

Azure Front Door delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the backend. After receiving a full or byte-range file request, the Front Door environment requests the file from the backend in chunks of 8 MB. After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
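The 8-MB object chunking described above can be illustrated by the sequence of HTTP `Range` header values it implies. This is a sketch of the idea, not Front Door's actual implementation:

```python
CHUNK = 8 * 1024 * 1024  # 8 MB chunks, as described above


def chunk_ranges(file_size: int):
    """Yield HTTP Range header values covering the whole file in 8 MB chunks."""
    start = 0
    while start < file_size:
        end = min(start + CHUNK, file_size) - 1  # Range end is inclusive
        yield f"bytes={start}-{end}"
        start = end + 1


# A 20 MB file would be fetched from the backend in three byte-range requests.
print(list(chunk_ranges(20 * 1024 * 1024)))
```

In practice the edge issues these backend requests sequentially but one chunk ahead of the client (the pre-fetch described above), which is why the optimization requires the backend to honor byte-range requests.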
For more information on the byte-range request, read [RFC 7233](https://www.rfc-
Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the backend's ability to support byte-range requests. If the backend doesn't support byte-range requests, this optimization isn't effective.

## File compression
+Refer to [improve performance by compressing files](standard-premium/how-to-compression.md) in Azure Front Door.
Front Door can dynamically compress content on the edge, resulting in a smaller and faster response time to your clients. In order for a file to be eligible for compression, caching must be enabled and the file must be of a MIME type to be eligible for compression. Currently, Front Door doesn't allow this list to be changed. The current list is:
- "application/eot"
- "application/font"
When a request for an asset specifies compression and the request results in a c
> [!NOTE]
> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.

## Query string behavior

With Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request, then their order doesn't matter.
-- **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
-- **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the backend for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
+* **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+
+* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the backend for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
++
+* You can also use Rule Set to specify **cache key query string** behavior, to include or exclude specified parameters when the cache key gets generated. For example, the default cache key is `/foo/image/asset.html`, and the sample request is `https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200`. There's a rule set rule to exclude the query string `userid`. Then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
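The cache-key example above can be sketched as a small helper. This is purely illustrative of the exclude behavior described, not Front Door's actual key-generation code:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit


def cache_key(url: str, exclude: set) -> str:
    """Build a cache key from the path plus query string, dropping excluded parameters."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in exclude]
    query = urlencode(kept)
    return f"{parts.path}?{query}" if query else parts.path


key = cache_key(
    "https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200",
    exclude={"userid"},
)
print(key)  # → /foo/image/asset.html?language=EN&sessionid=200
```

With an *include* rule the filter would be inverted: keep only the listed parameters and drop everything else.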
## Cache purge
+See [Cache purging in Azure Front Door Standard/Premium (Preview)](standard-premium/how-to-cache-purge.md) to learn how to configure cache purge.
Front Door caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with an expired TTL, the Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache. The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. Front Door will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets. The reason could be because of updates to your web application, or to quickly update assets that contain incorrect information.
These formats are supported in the lists of paths to purge:
Cache purges on the Front Door are case-insensitive. Additionally, they're query string agnostic, meaning purging a URL will purge all query-string variations of it.

## Cache expiration

The following order of headers is used to determine how long an item will be stored in our cache:
1. Cache-Control: s-maxage=\<seconds>
2. Cache-Control: max-age=\<seconds>
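The header precedence above can be sketched as a small parser. This is a simplified illustration that considers only the two directives listed here (a real cache also consults other headers and defaults):

```python
import re


def cache_ttl(cache_control: str):
    """Return the TTL in seconds, preferring s-maxage over max-age per the order above,
    or None when neither directive is present."""
    for directive in ("s-maxage", "max-age"):
        m = re.search(rf"{re.escape(directive)}\s*=\s*(\d+)", cache_control)
        if m:
            return int(m.group(1))
    return None  # fall back to other headers or the configured default


print(cache_ttl("public, max-age=300, s-maxage=600"))  # → 600 (s-maxage wins)
```

Note that `s-maxage` is checked first, so a shared cache honors it even when a smaller `max-age` is also present.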
Cache behavior and duration can be configured in both the Front Door designer ro
## Next steps

- Learn how to [create a Front Door](quickstart-create-front-door.md).
- Learn [how Front Door works](front-door-routing-architecture.md).
+* Learn more about [Rule Set Match Conditions](standard-premium/concept-rule-set-match-conditions.md)
+* Learn more about [Rule Set Actions](front-door-rules-engine-actions.md)
+
frontdoor Front Door Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-health-probes.md
Azure Front Door uses the same three-step process below across all algorithms to
## Complete health probe failure
-If health probes fail for every backend in a backend pool, then Front Door considers all backends healthy and routes traffic in a round robin distribution across all of them.
+If health probes fail for every backend in a backend pool, then Front Door considers all backends unhealthy and routes traffic in a round robin distribution across all of them.
Once any backend returns to a healthy state, then Front Door will resume the normal load-balancing algorithm.
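The fallback behavior described above can be sketched as a simple selector. This is illustrative only and not Front Door's actual load balancer; the class and names are invented for the example:

```python
from itertools import cycle


class BackendPool:
    """Illustrative selector: when no backend passes its health probe,
    fall back to round-robin across all backends, as described above."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.health = {b: False for b in self.backends}  # all probes failing
        self._round_robin = cycle(self.backends)

    def pick(self):
        healthy = [b for b in self.backends if self.health[b]]
        if healthy:
            # Real Front Door applies its configured load-balancing
            # algorithm here; we just take the first healthy backend.
            return healthy[0]
        # Complete probe failure: round-robin over every backend.
        return next(self._round_robin)


pool = BackendPool(["backend-a", "backend-b"])
print([pool.pick() for _ in range(4)])  # alternates between the two backends
```

Once a probe succeeds again (`pool.health["backend-b"] = True` in this sketch), selection returns to the healthy set, mirroring the resumption of the normal algorithm.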
frontdoor Front Door Route Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md
Title: Azure Front Door - Routing rule matching monitoring | Microsoft Docs
-description: This article helps you understand how Azure Front Door matches which routing rule to use for an incoming request
+ Title: Azure Front Door - Routing rule matching
+description: This article helps you understand how Azure Front Door matches incoming requests to a routing rule.
Previously updated : 09/28/2020
Last updated : 03/08/2022
+zone_pivot_groups: front-door-tiers
# How requests are matched to a routing rule
+In the Azure Front Door Standard/Premium tier, a route defines how the traffic is handled when the incoming request arrives at the Azure Front Door environment. Through the route settings, an association is defined between a domain and a backend origin group. By turning on advanced features such as Pattern to Match and Rule set, you can have more granular control over traffic to your backend resources.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+> [!NOTE]
+> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override the origin group](front-door-rules-engine-actions.md#origin-group-override) for a request. The origin group set by the rules engine overrides the routing process described in this article.
After establishing a connection and completing a TLS handshake, when a request lands on a Front Door environment, one of the first things that Front Door does is determine which particular routing rule to match the request to, and then take the defined action in the configuration. The following document explains how Front Door determines which Route configuration to use when processing an HTTP request.

## Structure of a Front Door route configuration

A Front Door routing rule configuration is composed of two major parts: a "left-hand side" and a "right-hand side". We match the incoming request to the left-hand side of the route, while the right-hand side defines how we process the request.
The following properties determine whether the incoming request matches the rout
These properties are expanded out internally so that every combination of Protocol/Host/Path is a potential match set.

### Route data (right-hand side)
-The decision of how to process the request, depends on whether caching is enabled or not for the specific route. So, if we don't have a cached response for the request, we'll forward the request to the appropriate backend in the configured backend pool.
+
+The decision of how to process the request depends on whether caching is enabled for the Route. If a cached response isn't available, then the request is forwarded to the appropriate backend.
## Route matching

This section focuses on how we match a request to a given Front Door routing rule. The basic concept is that we always take the **most-specific match first**, looking only at the "left-hand side". We match first on the HTTP protocol, then the frontend host, then the path.

### Frontend host matching
If the following incoming requests were sent to Front Door, they would match against the following routing rules from above:
| www\.northwindtraders.com | Error 400: Bad Request |

### Path matching
-After determining the specific frontend host and filtering possible routing rules to just the routes with that frontend host, Front Door then filters the routing rules based on the request path. We use a similar logic as frontend hosts:
+
+After Azure Front Door determines the specific frontend host and filters the possible routing rules to just the routes with that frontend host, it then filters the routing rules based on the request path. We use similar logic as for frontend hosts:
1. Look for any routing rule with an exact match on the Path.
-2. If no exact match Paths, look for routing rules with a wildcard Path that matches.
-3. If no routing rules are found with a matching Path, then reject the request and return a 400: Bad Request error HTTP response.
+1. If no exact match Paths, look for routing rules with a wildcard Path that matches.
+1. If no routing rules are found with a matching Path, then reject the request and return a 400: Bad Request error HTTP response.
++
+>[!NOTE]
+> * Any Paths without a wildcard are considered to be exact-match Paths. Even if the Path ends in a slash, it's still considered exact match.
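The path-matching order described above (exact match first, then the most specific wildcard, otherwise a 400 response) can be sketched as follows. This is an illustrative sketch only, not Front Door's actual implementation; the route table and function are hypothetical.

```python
# Illustrative sketch of Front Door path matching: exact-match paths win,
# then the most-specific (longest) matching wildcard path, otherwise 400.
def match_route(request_path, routes):
    """routes: list of (name, path); a path ending in '/*' is a wildcard."""
    # 1. Look for a routing rule with an exact match on the path.
    for name, path in routes:
        if not path.endswith("/*") and request_path == path:
            return name

    # 2. No exact match: look for the most-specific wildcard path that matches.
    best = None
    for name, path in routes:
        if path.endswith("/*") and request_path.startswith(path[:-1]):
            if best is None or len(path) > len(best[1]):
                best = (name, path)
    if best:
        return best[0]

    # 3. No routing rule matched: reject the request.
    return "400 Bad Request"

routes = [("B", "/*"), ("D", "/abc"), ("F", "/abc/*"), ("G", "/abc/def")]
print(match_route("/abc", routes))         # exact match -> D
print(match_route("/abc/defzzz", routes))  # most-specific wildcard -> F
print(match_route("/abzzz", routes))       # catch-all wildcard -> B
```

Note how `/abc/defzzz` falls to the wildcard rule `F` rather than the exact-match rule `G`, mirroring the example matching table.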
+++ >[!NOTE]
> * Any Paths without a wildcard are considered to be exact-match Paths. Even if the Path ends in a slash, it's still considered an exact match.
> * Patterns to match paths are case insensitive, meaning paths with different casings are treated as duplicates. For example, suppose you have the same host using the same protocol with the paths `/FOO` and `/foo`. These paths are considered duplicates, which isn't allowed in the Patterns to match setting.

To explain further, let's look at another set of examples:

| Routing rule | Frontend host | Path |
Given that configuration, the following example matching table would result:
### Routing decision +
+Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the cached response gets served back to the client.
+
+Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](front-door-rules-engine.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](front-door-rules-engine-actions.md#origin-group-override), forcing traffic to a specific origin group.
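The decision order just described can be sketched as follows. The cache, rule-set, and origin-group objects here are hypothetical stand-ins for illustration, not Front Door APIs.

```python
# Illustrative sketch of the routing decision: serve from cache when
# possible; otherwise run rule sets in configured order (a rule set may
# override the origin group), then forward to the resulting origin group.
def process_request(request, route):
    # 1. Serve from cache when a valid cached response exists for the route.
    cached = route["cache"].get(request["path"])
    if cached is not None:
        return ("cache", cached)

    # 2. Execute rule sets in the order they're configured; a rule set can
    #    override the origin group for this request.
    origin_group = route["origin_group"]
    for rule_set in route["rule_sets"]:
        override = rule_set(request)
        if override is not None:
            origin_group = override

    # 3. Forward the request to the (possibly overridden) origin group.
    return ("origin", origin_group)

route = {
    "cache": {"/styles.css": "<cached css>"},
    "origin_group": "default-origins",
    "rule_sets": [
        lambda req: "api-origins" if req["path"].startswith("/api") else None,
    ],
}
print(process_request({"path": "/styles.css"}, route))  # served from cache
print(process_request({"path": "/api/users"}, route))   # overridden origin group
print(process_request({"path": "/home"}, route))        # default origin group
```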
+++ After you have matched to a single Front Door routing rule, choose how to process the request. If Front Door has a cached response available for the matched routing rule, the cached response is served back to the client. If Front Door doesn't have a cached response for the matched routing rule, what's evaluated next is whether you have configured [URL rewrite (a custom forwarding path)](front-door-url-rewrite.md) for the matched routing rule. If no custom forwarding path is defined, the request is forwarded to the appropriate backend in the configured backend pool as-is. If a custom forwarding path has been defined, the request path is updated per the defined [custom forwarding path](front-door-url-rewrite.md) and then forwarded to the backend.

## Next steps
+Learn how to [create a Front Door Standard/Premium](standard-premium/create-front-door-portal.md).
+++ - Learn how to [create a Front Door](quickstart-create-front-door.md).
- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-architecture.md
If you have defined [rules engines](front-door-rules-engine.md) for the route, t
::: zone pivot="front-door-standard-premium"
-If the Front Door routing rule has [caching](standard-premium/concept-caching.md) enabled, and the Front Door edge location's cache includes a valid response for the request, then Front Door returns the cached response.
+If the Front Door routing rule has [caching](front-door-caching.md) enabled, and the Front Door edge location's cache includes a valid response for the request, then Front Door returns the cached response.
If caching is disabled or no response is available, the request is forwarded to the origin.
frontdoor Front Door Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine.md
With Azure Front Door Rule Set, you can create a combination of Rules Set config
For more information about quota limits, see [Azure subscription and service limits, quotas and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-* *Rule Set*: A set of rules that gets associated to one or multiple [routes](standard-premium/concept-route.md).
+* *Rule Set*: A set of rules that gets associated to one or multiple [routes](front-door-route-matching.md).
* *Rule Set rule*: A rule composed of up to 10 match conditions and 5 actions. Rules are local to a Rule Set and can't be exported for use across Rule Sets. Users can create the same rule in multiple Rule Sets.
frontdoor Concept Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-caching.md
- Title: 'Azure Front Door: Caching'
-description: This article helps you understand the behavior of Azure Front Door Standard/Premium with routing rules that have enabled caching.
----- Previously updated : 02/18/2021---
-# Caching with Azure Front Door Standard/Premium (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-In this article, you'll learn how Front Door Standard/Premium (Preview) Routes and Rule set behaves when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Request methods
-
-Only the GET request method can generate cached content in Azure Front Door. All other request methods are always proxied through the network.
-
-## Delivery of large files
-
-Front Door Standard/Premium (Preview) delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full or byte-range file request, the Front Door environment requests the file from the originS in chunks of 8 MB.
-
-After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
-
-For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
-Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, this optimization isn't effective.
-
-## File compression
-
-Refer to [improve performance by compressing files](how-to-compression.md) in Azure Front Door.
-
-## Query string behavior
-
-With Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request then their order doesn't matter.
-
-* **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the origin on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
-
-* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
-* You can also use Rule Set to specify **cache key query string** behavior, to include, or exclude specified parameters when cache key gets generated. For example, the default cache key is: /foo/image/asset.html, and the sample request is `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. There's a rule set rule to exclude query string 'userid'. Then the query string cache-key would be `/foo/image/asset.html?language=EN&sessionid=200`.
-
-## Cache purge
-
-Refer to cache purge.
-
-## Cache expiration
-The following order of headers is used to determine how long an item will be stored in our cache:</br>
-1. Cache-Control: s-maxage=\<seconds>
-2. Cache-Control: max-age=\<seconds>
-3. Expires: \<http-date>
-
-Cache-Control response headers that indicate that the response won't be cached such as Cache-Control: private, Cache-Control: no-cache, and Cache-Control: no-store are honored. If no Cache-Control is present, the default behavior is that Front Door will cache the resource for X amount of time. Where X gets randomly picked between 1 to 3 days.
-
-> [!NOTE]
-> Cache expiration can't be greater than **366 days**.
->
-
-## Request headers
-
-The following request headers won't be forwarded to an origin when using caching.
-* Content-Length
-* Transfer-Encoding
-
-## Cache behavior and duration
-
-Cache behavior and duration can be configured in both the Front Door designer routing rule and in Rules Engine. Rules Engine caching configuration will always override the Front Door designer routing rule configuration.
-
-* When *caching* is **disabled**, Front Door doesn't cache the response contents, irrespective of origin response directives.
-
-* When *caching* is **enabled**, the cache behavior is different for different values of *Use cache default duration*.
- * When *Use cache default duration* is set to **Yes**, Front Door will always honor origin response header directive. If the origin directive is missing, Front Door will cache contents anywhere from 1 to 3 days.
- * When *Use cache default duration* is set to **No**, Front Door will always override with the *cache duration* (required fields), meaning that it will cache the contents for the cache duration ignoring the values from origin response directives.
-
-> [!NOTE]
-> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the origin has a greater TTL than the override value.
-> * Azure Front Door makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Front Door might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your origins are offline.
->
-
-## Next steps
-
-* Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md)
-* Learn more about [Rule Set Actions](../front-door-rules-engine-actions.md)
frontdoor Concept Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-route.md
- Title: What is Azure Front Door Standard/Premium Route?
-description: This article helps you understand how Azure Front Door Standard/Premium matches which routing rule to use for an incoming request.
----- Previously updated : 01/12/2022---
-# What is Azure Front Door Standard/Premium (Preview) Route?
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-Azure Front Door Standard/Premium Route defines how the traffic is handled when the incoming request arrives at the Azure Front Door environment. Through the Route settings, an association is defined between a domain and a backend origin group. By turning on the advance features such as Pattern to Mach, Rule set, more granular control over the traffic is achievable.
-
-A Front Door Standard/Premium routing configuration is composed of two major parts: "left-hand side" and "right-hand side". We match the incoming request to the left-hand side of the route and the right-hand side defines how we process the request.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!NOTE]
-> When you use the [Front Door rules engine](../front-door-rules-engine.md), you can configure a rule to [override the origin group](../front-door-rules-engine-actions.md#origin-group-override) for a request. The origin group set by the rules engine overrides the routing process described in this article.
-
-### Incoming match (left-hand side)
-
-The following properties determine whether the incoming request matches the routing rule (or left-hand side):
-
-* **HTTP Protocols** (HTTP/HTTPS)
-* **Hosts** (for example, www\.foo.com, \*.bar.com)
-* **Paths** (for example, /\*, /users/\*, /file.gif)
-
-These properties are expanded out internally so that every combination of Protocol/Host/Path is a potential match set.
-
-### Route data (right-hand side)
-
-The decision of how to process the request, depends on whether caching is enabled or not for the Route. If a cached response isn't available, then the request is forwarded to the appropriate backend.
-
-## Route matching
-
-This section will focus on how we match to a given Front Door routing rule. The basic concept is that we always match to the **most-specific match first** looking only at the "left-hand side". We first match based on HTTP protocol, then Frontend host, then the Path.
-
-### Frontend host matching
-
-When matching Frontend hosts, we use the logic defined below:
-
-1. Look for any routing with an exact match on the host.
-2. If no exact frontend hosts match, reject the request and send a 400 Bad Request error.
-
-To explain this process further, let's look at an example configuration of Front Door routes (left-hand side only):
-
-| Routing rule | Frontend hosts | Path |
-|-|--|-|
-| A | foo.contoso.com | /\* |
-| B | foo.contoso.com | /users/\* |
-| C | www\.fabrikam.com, foo.adventure-works.com | /\*, /images/\* |
-
-If the following incoming requests were sent to Front Door, they would match against the following routing rules from above:
-
-| Incoming frontend host | Matched routing rule(s) |
-|||
-| foo.contoso.com | A, B |
-| www\.fabrikam.com | C |
-| images.fabrikam.com | Error 400: Bad Request |
-| foo.adventure-works.com | C |
-| contoso.com | Error 400: Bad Request |
-| www\.adventure-works.com | Error 400: Bad Request |
-| www\.northwindtraders.com | Error 400: Bad Request |
-
-### Path matching
-
-After Azure Front Door Standard/Premium determines the specific frontend host and filtering possible routing rules to just the routes with that frontend host. Front Door then filters the routing rules based on the request path. We use a similar logic as frontend hosts:
-
-1. Look for any routing rule with an exact match on the Path
-2. If no exact match Paths, look for routing rules with a wildcard Path that matches
-3. If no routing rules are found with a matching Path, then reject the request and return a 400: Bad Request error HTTP response.
-
->[!NOTE]
-> Any Paths without a wildcard are considered to be exact-match Paths. Even if the Path ends in a slash, it's still considered exact match.
-
-To explain further, let's look at another set of examples:
-
-| Routing rule | Frontend host | Path |
-|-||-|
-| A | www\.contoso.com | / |
-| B | www\.contoso.com | /\* |
-| C | www\.contoso.com | /ab |
-| D | www\.contoso.com | /abc |
-| E | www\.contoso.com | /abc/ |
-| F | www\.contoso.com | /abc/\* |
-| G | www\.contoso.com | /abc/def |
-| H | www\.contoso.com | /path/ |
-
-Given that configuration, the following example matching table would result:
-
-| Incoming Request | Matched Route |
-|||
-| www\.contoso.com/ | A |
-| www\.contoso.com/a | B |
-| www\.contoso.com/ab | C |
-| www\.contoso.com/abc | D |
-| www\.contoso.com/abzzz | B |
-| www\.contoso.com/abc/ | E |
-| www\.contoso.com/abc/d | F |
-| www\.contoso.com/abc/def | G |
-| www\.contoso.com/abc/defzzz | F |
-| www\.contoso.com/abc/def/ghi | F |
-| www\.contoso.com/path | B |
-| www\.contoso.com/path/ | H |
-| www\.contoso.com/path/zzz | B |
-
->[!WARNING]
-> </br> If there are no routing rules for an exact-match frontend host with a catch-all route Path (`/*`), then there will not be a match to any routing rule.
->
-> Example configuration:
->
-> | Route | Host | Path |
-> |-|||
-> | A | profile.contoso.com | /api/\* |
->
-> Matching table:
->
-> | Incoming request | Matched Route |
-> |||
-> | profile.domain.com/other | None. Error 400: Bad Request |
-
-### Routing decision
-
-Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the request gets served back to the client.
-
-Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](../front-door-rules-engine.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](../front-door-rules-engine-actions.md#origin-group-override), forcing traffic to a specific origin group.
-
-## Next steps
-
-Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor How To Cache Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge.md
Best practice is to make sure your users always obtain the latest copy of your a
## Prerequisites
-Review [Azure Front Door Caching](concept-caching.md) to understand how caching works.
+Review [Azure Front Door Caching](../front-door-caching.md) to understand how caching works.
## Configure cache purge
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Grant Azure Front Door permission to access the certificates in your Azure Key
## Next steps
-Learn about [caching with Azure Front Door Standard/Premium](concept-caching.md).
+Learn about [caching with Azure Front Door Standard/Premium](../front-door-caching.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
provide the following features:
- Ability to query resources with complex filtering, grouping, and sorting by resource properties.
- Ability to iteratively explore resources based on governance requirements.
- Ability to assess the impact of applying policies in a vast cloud environment.
-- Ability to [detail changes made to resource properties](./how-to/get-resource-changes.md)
+- Ability to [query changes made to resource properties](./how-to/get-resource-changes.md)
(preview). In this documentation, you'll go over each feature in detail.
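As an illustration of the change-query feature, a Resource Graph query against the `resourcechanges` table might look like the following sketch. The table and column names follow the change-history preview documentation and may evolve while the feature is in preview.

```kusto
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp)
| where changeTime > ago(7d)
| project changeTime, changeType = properties.changeType,
          targetResourceId = properties.targetResourceId
| order by changeTime desc
```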
With Azure Resource Graph, you can:
- Access the properties returned by resource providers without needing to make individual calls to each resource provider.
-- View the last 14 days of change history made to the resource to see what properties changed and
+- View the last seven days of resource configuration changes to see what properties changed and
when. (preview) > [!NOTE]
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
To get an API token, you can use the IoT Central UI or a REST API call. Administ
In the IoT Central UI:
-1. Navigate to **Administration > API tokens**.
-1. Select **+ Create token**.
+1. Navigate to **Permissions > API tokens**.
+1. Select **+ New** or **Create an API token**.
1. Enter a name for the token and select a role and [organization](howto-create-organizations.md).
1. Select **Generate**.
1. IoT Central displays the token that looks like the following example:
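Once you have a token, you pass the full token string in the `Authorization` header of your REST API calls. The following is a minimal sketch using only the Python standard library; the app subdomain, API version, and token value are placeholder assumptions, and the request is built but not sent.

```python
import urllib.request

def build_devices_request(app_subdomain, api_token):
    """Build (but don't send) a GET request to list devices in an
    IoT Central app. The subdomain and api-version are placeholders."""
    url = (f"https://{app_subdomain}.azureiotcentral.com"
           "/api/devices?api-version=1.0")
    # The full API token string (e.g. "SharedAccessSignature sr=...&sig=...")
    # goes in the Authorization header as-is.
    return urllib.request.Request(url, headers={"Authorization": api_token})

req = build_devices_request("myapp", "SharedAccessSignature sr=...&sig=...")
print(req.full_url)
print(req.get_header("Authorization"))
```

To actually send the request you would pass `req` to `urllib.request.urlopen` (or use any HTTP client) with a real token.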
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 12/20/2021
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-You can use the IoT Central _in-store analytics condition monitoring_ application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+You can use the IoT Central _in-store analytics checkout_ application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to: -- Connect different kinds of IoT sensors to an IoT Central application instance.-- Monitor and manage the health of the sensor network and any gateway devices in the environment.-- Create custom rules around the environmental conditions within a store to trigger alerts for store managers.-- Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.-- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
+1. Connect different kinds of IoT sensors to an IoT Central application instance.
+2. Monitor and manage the health of the sensor network and any gateway devices in the environment.
+3. Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
+4. Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
+5. Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
iot-dps How To Roll Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-roll-certificates.md
Title: Roll X.509 certificates in Azure IoT Hub Device Provisioning Service
description: How to roll X.509 certificates with your Device Provisioning Service (DPS) instance Previously updated : 08/06/2018 Last updated : 03/08/2022
# How to roll X.509 device certificates
-During the lifecycle of your IoT solution, you'll need to roll certificates. Two of the main reasons for rolling certificates would be a security breach, and certificate expirations.
+During the lifecycle of your IoT solution, you'll need to roll certificates. Two of the main reasons for rolling certificates would be a security breach, and certificate expirations.
Rolling certificates is a security best practice to help secure your system in the event of a breach. As part of [Assume Breach Methodology](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), Microsoft advocates the need for having reactive security processes in place along with preventative measures. Rolling your device certificates should be included as part of these security processes. The frequency with which you roll your certificates will depend on the security needs of your solution. Customers with solutions involving highly sensitive data may roll certificates daily, while others roll their certificates every couple of years. Rolling device certificates will involve updating the certificate stored on the device and the IoT hub. Afterwards, the device can reprovision itself with the IoT hub using normal [provisioning](about-iot-dps.md#provisioning-process) with the Device Provisioning Service (DPS).

- ## Obtain new certificates
-There are many ways to obtain new certificates for your IoT devices. These include obtaining certificates from the device factory, generating your own certificates, and having a third party manage certificate creation for you.
+There are many ways to obtain new certificates for your IoT devices. These include obtaining certificates from the device factory, generating your own certificates, and having a third party manage certificate creation for you.
Certificates are signed by each other to form a chain of trust from a root CA certificate to a [leaf certificate](concepts-x509-attestation.md#end-entity-leaf-certificate). A signing certificate is the certificate used to sign the leaf certificate at the end of the chain of trust. A signing certificate can be a root CA certificate, or an intermediate certificate in chain of trust. For more information, see [X.509 certificates](concepts-x509-attestation.md#x509-certificates).
-
-There are two different ways to obtain a signing certificate. The first way, which is recommended for production systems, is to purchase a signing certificate from a root certificate authority (CA). This way chains security down to a trusted source.
+
+There are two different ways to obtain a signing certificate. The first way, which is recommended for production systems, is to purchase a signing certificate from a root certificate authority (CA). This way chains security down to a trusted source.
The second way is to create your own X.509 certificates using a tool like OpenSSL. This approach is great for testing X.509 certificates but provides few guarantees around security. We recommend you only use this approach for testing unless you're prepared to act as your own CA provider.
-
## Roll the certificate on the device
-Certificates on a device should always be stored in a safe place like a [hardware security module (HSM)](concepts-service.md#hardware-security-module). The way you roll device certificates will depend on how they were created and installed in the devices in the first place.
+Certificates on a device should always be stored in a safe place like a [hardware security module (HSM)](concepts-service.md#hardware-security-module). The way you roll device certificates will depend on how they were created and installed in the devices in the first place.
+
+If you got your certificates from a third party, you must look into how they roll their certificates. The process may be included in your arrangement with them, or it may be a separate service they offer.
-If you got your certificates from a third party, you must look into how they roll their certificates. The process may be included in your arrangement with them, or it may be a separate service they offer.
+If you're managing your own device certificates, you'll have to build your own pipeline for updating certificates. Make sure both old and new leaf certificates have the same common name (CN). By having the same CN, the device can reprovision itself without creating a duplicate registration record.
-If you're managing your own device certificates, you'll have to build your own pipeline for updating certificates. Make sure both old and new leaf certificates have the same common name (CN). By having the same CN, the device can reprovision itself without creating a duplicate registration record.
+The mechanics of installing a new certificate on a device will often involve one of the following approaches:
+- You can trigger affected devices to send a new certificate signing request (CSR) to your PKI Certificate Authority (CA). In this case, each device will likely be able to download its new device certificate directly from the CA.
+
+- You can retain a CSR from each device and use that to get a new device certificate from the PKI CA. In this case, you'll need to push the new certificate to each device in a firmware update using a secure OTA update service like [Device Update for IoT Hub](/azure/iot-hub-device-update/).
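As an illustration of the CSR approach, the following OpenSSL commands generate a new private key and a CSR that keeps the device's existing common name, so the renewed certificate won't create a duplicate registration. The file names and the CN `my-device-001` are illustrative, and this assumes OpenSSL is available where the key is generated.

```shell
# Generate a new private key and a certificate signing request (CSR) for
# the replacement device certificate, keeping the same CN as the old
# certificate. Send the CSR (not the key) to your PKI CA for signing.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout new-device.key \
  -subj "/CN=my-device-001" \
  -out new-device.csr

# Verify the CSR carries the expected CN before submitting it to the CA.
openssl req -in new-device.csr -noout -subject
```

The private key stays on the device (ideally inside its HSM); only the signed certificate returned by the CA needs to be installed alongside it.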
## Roll the certificate in the IoT hub

The device certificate can be manually added to an IoT hub. Adding the certificate can also be automated using a Device Provisioning Service instance. In this article, we'll assume a Device Provisioning Service instance is being used to support auto-provisioning.
-When a device is initially provisioned through auto-provisioning, it boots-up, and contacts the provisioning service. The provisioning service responds by performing an identity check before creating a device identity in an IoT hub using the deviceΓÇÖs leaf certificate as the credential. The provisioning service then tells the device which IoT hub it's assigned to, and the device then uses its leaf certificate to authenticate and connect to the IoT hub.
+When a device is initially provisioned through auto-provisioning, it boots-up, and contacts the provisioning service. The provisioning service responds by performing an identity check before creating a device identity in an IoT hub using the device's leaf certificate as the credential. The provisioning service then tells the device which IoT hub it's assigned to, and the device then uses its leaf certificate to authenticate and connect to the IoT hub.
-Once a new leaf certificate has been rolled to the device, it can no longer connect to the IoT hub because it's using a new certificate to connect. The IoT hub only recognizes the device with the old certificate. The result of the device's connection attempt will be an "unauthorized" connection error. To resolve this error, you must update the enrollment entry for the device to account for the device's new leaf certificate. Then the provisioning service can update the IoT Hub device registry information as needed when the device is reprovisioned.
+Once a new leaf certificate has been rolled to the device, it can no longer connect to the IoT hub because it's using a new certificate to connect. The IoT hub only recognizes the device with the old certificate. The result of the device's connection attempt will be an "unauthorized" connection error. To resolve this error, you must update the enrollment entry for the device to account for the device's new leaf certificate. Then the provisioning service can update the IoT Hub device registry information as needed when the device is reprovisioned.
One possible exception to this connection failure would be a scenario where you've created an [Enrollment Group](concepts-service.md#enrollment-group) for your device in the provisioning service. In this case, if you aren't rolling the root or intermediate certificates in the device's certificate chain of trust, then the device will be recognized if the new certificate is part of the chain of trust defined in the enrollment group. If this scenario arises as a reaction to a security breach, you should at least disallow the specific device certificates in the group that are considered to be breached. For more information, see [Disallow specific devices in an enrollment group](./how-to-revoke-device-access-portal.md#disallow-specific-devices-in-an-enrollment-group).
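The chain-of-trust check described above can be illustrated locally with OpenSSL. The following is a self-contained demonstration using throwaway names, not your production certificates: it creates a root CA and a leaf signed by it, then verifies the leaf against the root, which mirrors how a new device certificate is accepted when it chains up to a certificate in the enrollment group:

```shell
# Create a throwaway root CA (all names here are placeholders).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout demo-root.key -out demo-root.pem -subj "/CN=demo-root-ca"

# Create a leaf CSR and sign it with the demo root CA.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout demo-leaf.key -out demo-leaf.csr -subj "/CN=demo-device-01"
openssl x509 -req -days 15 -in demo-leaf.csr \
  -CA demo-root.pem -CAkey demo-root.key -CAcreateserial -out demo-leaf.pem

# Verify the leaf against the root: a leaf that chains to a trusted
# certificate in the enrollment group passes this kind of check.
openssl verify -CAfile demo-root.pem demo-leaf.pem
```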
Updating enrollment entries for rolled certificates is accomplished on the **Man
![Manage enrollments](./media/how-to-roll-certificates/manage-enrollments-portal.png) - How you handle updating the enrollment entry will depend on whether you're using individual enrollments or group enrollments. Also, the recommended procedures differ depending on whether you're rolling certificates because of a security breach or certificate expiration. The following sections describe how to handle these updates. - ## Individual enrollments and security breaches If you're rolling certificates in response to a security breach, you should use the following approach that deletes the current certificate immediately:
-1. Click **Individual Enrollments**, and click the registration ID entry in the list.
+1. Click **Individual Enrollments**, and click the registration ID entry in the list.
2. Click the **Delete current certificate** button and then click the folder icon to select the new certificate to be uploaded for the enrollment entry. Click **Save** when finished.
If you're rolling certificates in response to a security breach, you should use
![Manage individual enrollments with a security breach](./media/how-to-roll-certificates/manage-individual-enrollments-portal.png)
-3. Once the compromised certificate has been removed from the provisioning service, the certificate can still be used to make device connections to the IoT hub as long as a device registration for it exists there. You can address this in two ways:
+3. Once the compromised certificate has been removed from the provisioning service, the certificate can still be used to make device connections to the IoT hub as long as a device registration for it exists there. You can address this in two ways:
- The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when the device provisions again with an updated certificate, a new device registration will be created.
+ The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when the device provisions again with an updated certificate, a new device registration will be created.
![Remove IoT hub device registration](./media/how-to-roll-certificates/remove-hub-device-registration.png)
If you're rolling certificates to handle certificate expirations, you should use
Later when the secondary certificate also nears expiration, and needs to be rolled, you can rotate to using the primary configuration. Rotating between the primary and secondary certificates in this way reduces downtime for devices attempting to provision. -
-1. Click **Individual Enrollments**, and click the registration ID entry in the list.
+1. Click **Individual Enrollments**, and click the registration ID entry in the list.
2. Click **Secondary Certificate** and then click the folder icon to select the new certificate to be uploaded for the enrollment entry. Click **Save**.
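To decide when a certificate needs rolling for expiration, you can inspect its expiry date locally with OpenSSL. In this sketch, a throwaway self-signed certificate is generated only to keep the example self-contained; point the second command at your real leaf or CA certificate instead:

```shell
# Generate a throwaway certificate (placeholder names, valid 30 days).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout throwaway.key -out throwaway.pem -subj "/CN=demo-device"

# Print the subject and expiry date; compare the notAfter date against
# your rotation schedule to plan the roll to the secondary certificate.
openssl x509 -in throwaway.pem -noout -subject -enddate
```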
To update a group enrollment in response to a security breach, you should use on
6. Once the compromised certificate has been removed from the provisioning service, the certificate can still be used to make device connections to the IoT hub as long as device registrations for it exist there. You can address this in two ways:
- The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when your devices provision again with updated certificates, a new device registration will be created for each one.
+ The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when your devices provision again with updated certificates, a new device registration will be created for each one.
![Remove IoT hub device registration](./media/how-to-roll-certificates/remove-hub-device-registration.png) The second way would be to use reprovisioning support to reprovision your devices to the same IoT hub. This approach can be used to replace certificates for device registrations on the IoT hub. For more information, see [How to reprovision devices](how-to-reprovision.md). -- #### Update compromised intermediate certificates 1. Click **Enrollment Groups**, and then click the group name in the list.
To update a group enrollment in response to a security breach, you should use on
![Manage individual enrollments for a compromised intermediate](./media/how-to-roll-certificates/enrollment-group-delete-intermediate-cert.png) - 3. Once the compromised certificate has been removed from the provisioning service, the certificate can still be used to make device connections to the IoT hub as long as device registrations for it exist there. You can address this in two ways:
- The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when your devices provision again with updated certificates, a new device registration will be created for each one.
+ The first way would be to manually navigate to your IoT hub and immediately remove the device registration associated with the compromised certificate. Then when your devices provision again with updated certificates, a new device registration will be created for each one.
![Remove IoT hub device registration](./media/how-to-roll-certificates/remove-hub-device-registration.png) The second way would be to use reprovisioning support to reprovision your devices to the same IoT hub. This approach can be used to replace certificates for device registrations on the IoT hub. For more information, see [How to reprovision devices](how-to-reprovision.md). - ## Enrollment groups and certificate expiration If you are rolling certificates to handle certificate expirations, you should use the secondary certificate configuration as follows to ensure no downtime for devices attempting to provision.
Later when the secondary certificate also nears expiration, and needs to be roll
2. Click the **Manage enrollments** tab for your Device Provisioning Service instance, and click the **Enrollment Groups** list. Click your enrollment group name in the list.
-3. Click **CA Certificate**, and select your new root CA certificate under the **Secondary Certificate** configuration. Then click **Save**.
+3. Click **CA Certificate**, and select your new root CA certificate under the **Secondary Certificate** configuration. Then click **Save**.
![Select the new root CA certificate for expiration](./media/how-to-roll-certificates/select-new-root-secondary-cert.png)
Later when the secondary certificate also nears expiration, and needs to be roll
![Delete root CA certificate](./media/how-to-roll-certificates/delete-root-cert.png) -- #### Update expiring intermediate certificates -
-1. Click **Enrollment Groups**, and click the group name in the list.
+1. Click **Enrollment Groups**, and click the group name in the list.
2. Click **Secondary Certificate** and then click the folder icon to select the new certificate to be uploaded for the enrollment entry. Click **Save**.
Later when the secondary certificate also nears expiration, and needs to be roll
3. Later when the primary certificate has expired, come back and delete that primary certificate by clicking the **Delete current certificate** button. - ## Reprovision the device
-Once the certificate is rolled on both the device and the Device Provisioning Service, the device can reprovision itself by contacting the Device Provisioning Service.
+Once the certificate is rolled on both the device and the Device Provisioning Service, the device can reprovision itself by contacting the Device Provisioning Service.
One easy way to have devices reprovision is to program the device to contact the provisioning service and go through the provisioning flow whenever it receives an "unauthorized" error from attempting to connect to the IoT hub.
Another way is for both the old and the new certificates to be valid for a short
Once reprovisioning is complete, devices will be able to connect to IoT Hub using their new certificates. - ## Disallow certificates In response to a security breach, you may need to disallow a device certificate. To disallow a device certificate, disable the enrollment entry for the target device/certificate. For more information, see disallowing devices in the [Manage disenrollment](how-to-revoke-device-access-portal.md) article. Once a certificate is included as part of a disabled enrollment entry, any attempts to register with an IoT hub using that certificate will fail even if it is enabled as part of another enrollment entry.
-
-- ## Next steps - To learn more about X.509 certificates in the Device Provisioning Service, see [X.509 certificate attestation](concepts-x509-attestation.md) - To learn about how to do proof-of-possession for X.509 CA certificates with the Azure IoT Hub Device Provisioning Service, see [How to verify certificates](how-to-verify-certificates.md)-- To learn about how to use the portal to create an enrollment group, see [Managing device enrollments with Azure portal](how-to-manage-enrollments.md).
+- To learn about how to use the portal to create an enrollment group, see [Managing device enrollments with Azure portal](how-to-manage-enrollments.md).
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Previously updated : 11/06/2019 Last updated : 03/08/2022 # Quickstart: Send telemetry from a device to an IoT hub and monitor it with the Azure CLI
In this section, you use the Azure CLI to create a resource group and an IoT Hub
1. Run the [az iot hub create](/cli/azure/iot/hub#az_iot_hub_create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
+ *YourIotHubName*. Replace this placeholder name and the curly brackets with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your IoT hub name.
```azurecli
az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName}
```
load-balancer Tutorial Load Balancer Port Forwarding Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-port-forwarding-portal.md
Title: "Tutorial: Configure port forwarding - Azure portal"
+ Title: "Tutorial: Create a single instance inbound NAT rule - Azure portal"
-description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create connections to VMs in an Azure virtual network.
+description: This tutorial shows how to configure port forwarding using Azure Load Balancer to create a connection to a single virtual machine in an Azure virtual network.
Previously updated : 12/06/2021 Last updated : 03/08/2022
+# Tutorial: Create a single instance inbound NAT rule using the Azure portal
-
-# Tutorial: Configure port forwarding in Azure Load Balancer using the Azure portal
-
-Port forwarding lets you connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
+Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
For more information about Azure Load Balancer rules, see [Manage rules for Azure Load Balancer using the Azure portal](manage-rules-how-to.md). In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a virtual network and virtual machines.
-> * Create a NAT gateway for outbound internet access for the backend pool.
-> * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, load-balancing rule, and inbound NAT rules.
-> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules.
+> * Create a virtual network and virtual machines
+> * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, load-balancing rule, and inbound NAT rules
+> * Create a NAT gateway for outbound internet access for the backend pool
+> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules
## Prerequisites
A virtual network and subnet is required for the resources in the tutorial. In t
2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-3. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+3. In **Virtual machines**, select **+ Create** > **+ Virtual machine**.
-4. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+4. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
| Setting | Value | | - | -- |
A virtual network and subnet is required for the resources in the tutorial. In t
| Region | Enter **(US) West US 2**. | | Availability options | Select **Availability zone**. | | Availability zone | Enter **1**. |
+ | Security type | Select **Standard**. |
| Image | Select **Ubuntu Server 20.04 LTS - Gen2**. | | Azure Spot instance | Leave the default of unchecked. | | Size | Select a VM size. |
A virtual network and subnet is required for the resources in the tutorial. In t
| Authentication type | **SSH public key** | | SSH public key source | Select **Use existing key stored in Azure**. | | Stored Keys | Select **myKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
| **Networking** | | | **Network interface** | | | Public IP | Select **None**. | | NIC network security group | Select **Advanced**. | | Configure network security group | Select the existing **myNSG** |
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-For more information about outbound connections and Azure Virtual Network NAT, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorialLBPF-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
- ## Create load balancer You'll create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
| Resource group | Select **TutorialLBPF-rg**. | | **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) West US 2**. |
- | Type | Select **Public**. |
+ | Region | Select **West US 2**. |
| SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
| Tier | Leave the default **Regional**. | - 4. Select **Next: Frontend IP configuration** at the bottom of the page. 5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-6. Enter **LoadBalancerFrontend** in **Name**.
+6. Enter **myFrontend** in **Name**.
7. Select **IPv4** or **IPv6** for the **IP version**.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
| - | -- | | Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. |
You'll create a load balancer in this section. The frontend IP, backend pool, lo
| Setting | Value | | - | -- | | Name | Enter **myNATRuleVM1-221**. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Service | Select **Custom**. |
- | Protocol | Leave the default of **TCP**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP Reset | Select **Enabled**. |
- | Port | Enter **221**. |
| Target virtual machine | Select **myVM1**. | | Network IP configuration | Select **ipconfig1 (10.1.0.4)**. |
- | Port mapping | Select **Custom**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend Port | Enter **221**. |
+ | Service Tag | Select **Custom**. |
+ | Backend port | Enter **22**. |
+ | Protocol | Leave the default of **TCP**. |
+ | TCP Reset | Leave the default of unchecked. |
+ | Idle timeout (minutes) | Leave the default **4**. |
| Floating IP | Leave the default of **Disabled**. |
- | Target port | Enter **22**. |
28. Select **Add**.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
| Setting | Value | | - | -- | | Name | Enter **myNATRuleVM2-222**. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Service | Select **Custom**. |
- | Protocol | Leave the default of **TCP**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP Reset | Select **Enabled**. |
- | Port | Enter **222**. |
| Target virtual machine | Select **myVM2**. | | Network IP configuration | Select **ipconfig1 (10.1.0.5)**. |
- | Port mapping | Select **Custom**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Frontend Port | Enter **222**. |
+ | Service Tag | Select **Custom**. |
+ | Backend port | Enter **22**. |
+ | Protocol | Leave the default of **TCP**. |
+ | TCP Reset | Leave the default of unchecked. |
+ | Idle timeout (minutes) | Leave the default **4**. |
| Floating IP | Leave the default of **Disabled**. |
- | Target port | Enter **22**. |
31. Select **Add**.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
33. Select **Create**.
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+
+For more information about outbound connections and Azure Virtual Network NAT, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. In **NAT gateways**, select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBPF-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **West US 2**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
+
+6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
+
+7. Select **OK**.
+
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
+
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+
+10. Select **myBackendSubnet** under **Subnet name**.
+
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+12. Select **Create**.
+ ## Install web server In this section, you'll SSH to the virtual machines through the inbound NAT rules and install a web server.
In this section, you'll SSH to the virtual machines through the inbound NAT rule
2. Select **myLoadBalancer**.
-3. In the **Overview** page of **myLoadBalancer**, make note of the **Public IP address**. In this example, it's **20.190.2.163**.
+3. Select **Frontend IP configuration** in **Settings**.
+
+4. In **Frontend IP configuration**, make note of the **IP address** for **myFrontend**. In this example, it's **20.99.165.176**.
:::image type="content" source="./media/tutorial-load-balancer-port-forwarding-portal/get-public-ip.png" alt-text="Screenshot of public IP in Azure portal.":::
In this section, you'll SSH to the virtual machines through the inbound NAT rule
5. At your prompt, open an SSH connection to **myVM1**. Replace the IP address with the address you retrieved in the previous step and port **221** you used for the myVM1 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. ```console
- ssh -i .\Downloads\myKey.pem azureuser@20.190.2.163 -p 221
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 221
``` > [!TIP]
In this section, you'll SSH to the virtual machines through the inbound NAT rule
8. At your prompt, open an SSH connection to **myVM2**. Replace the IP address with the address you retrieved in the previous step and port **222** you used for the myVM2 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. ```console
- ssh -i .\Downloads\myKey.pem azureuser@20.190.2.163 -p 222
+ ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 222
``` 9. From your SSH session, update your package sources and then install the latest NGINX package.
You'll open your web browser in this section and enter the IP address for the lo
1. Open your web browser.
-2. In the address bar, enter the IP address for the load balancer. In this example, it's **20.190.2.163**.
+2. In the address bar, enter the IP address for the load balancer. In this example, it's **20.99.165.176**.
3. The default NGINX website is displayed.
logic-apps Create Automation Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-automation-tasks-azure-resources.md
ms.suite: integration Previously updated : 11/02/2021 Last updated : 02/14/2022
> This capability is in preview and is subject to the > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To help you manage [Azure resources](../azure-resource-manager/management/overview.md#terminology) more easily, you can create automated management tasks for a specific resource or resource group by using automation task templates, which vary in availability based on the resource type. For example, for an [Azure storage account](../storage/common/storage-account-overview.md), you can set up an automation task that sends you the monthly cost for that storage account. For an [Azure virtual machine](https://azure.microsoft.com/services/virtual-machines/), you can create an automation task that turns on or turns off that virtual machine on a predefined schedule.
+To help you manage [Azure resources](../azure-resource-manager/management/overview.md#terminology) more easily, you can create automated management tasks for a specific resource or resource group. These tasks vary in number and availability, based on the resource type. For example, for an [Azure storage account](../storage/common/storage-account-overview.md), you can set up an automation task that sends the monthly cost for that storage account. For an [Azure virtual machine](https://azure.microsoft.com/services/virtual-machines/), you can create an automation task that turns on or turns off that virtual machine on a predefined schedule.
-Here are the currently available task templates in this preview:
+You can create an automation task from a specific automation task template. The following table lists the currently supported resource types and available task templates in this preview:
| Resource type | Automation task templates | |||
This article shows you how to complete the following tasks:
## How do automation tasks differ from Azure Automation?
-Automation tasks are more basic and lightweight than [Azure Automation](../automation/automation-intro.md). Currently, you can create an automation task only at the Azure resource level. Behind the scenes, an automation task is actually a logic app resource that runs a workflow and is powered by the [*multi-tenant* Azure Logic Apps service](logic-apps-overview.md). After you create the automation task, you can view and edit the underlying workflow by opening the task in the workflow designer. After a task finishes at least one run, you can review the task's status, workflow run history, inputs, and outputs for each run.
+Automation tasks are more basic and lightweight than [Azure Automation](../automation/automation-intro.md). Currently, you can create an automation task only at the Azure resource level. Behind the scenes, an automation task is actually a logic app resource that runs a workflow. This logic app workflow is powered by the [*multi-tenant* Azure Logic Apps service](logic-apps-overview.md). After you create the automation task, you can view and edit the underlying workflow by opening the task in the workflow designer. After a task finishes at least one run, you can review the run's status, history, inputs, and outputs.
-By comparison, Azure Automation is a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. The service comprises [process automation for orchestrating processes](../automation/automation-intro.md#process-automation) by using [runbooks](../automation/automation-runbook-execution.md), configuration management with [change tracking and inventory](../automation/change-tracking/overview.md), update management, shared capabilities, and heterogeneous features. Automation gives you complete control during deployment, operations, and decommissioning of workloads and resources.
+By comparison, Azure Automation is a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. The service comprises [process automation for orchestrating processes](../automation/automation-intro.md#process-automation) that uses [runbooks](../automation/automation-runbook-execution.md), configuration management with [change tracking and inventory](../automation/change-tracking/overview.md), update management, shared capabilities, and heterogeneous features. Automation gives you complete control during deployment, operations, and decommissioning of workloads and resources.
<a name="pricing"></a>
Executions are metered and billed, regardless whether the workflow runs successf
Triggers and actions follow [Consumption plan rates](https://azure.microsoft.com/pricing/details/logic-apps/), which differ based on whether these operations are ["built-in"](../connectors/built-in.md) or ["managed" (Standard or Enterprise)](../connectors/managed.md). Triggers and actions also make storage transactions, which use the [Consumption plan data rate](https://azure.microsoft.com/pricing/details/logic-apps/).
-> [!TIP]
+> [!NOTE]
> As a monthly bonus, the Consumption plan includes *several thousand* built-in executions free of charge. > For specific information, review the [Consumption plan rates](https://azure.microsoft.com/pricing/details/logic-apps/).
Triggers and actions follow [Consumption plan rates](https://azure.microsoft.com
* The Azure resource that you want to manage. This article uses an Azure storage account as the example.
-* An Office 365 account if you want to follow along with the example, which sends you email by using Office 365 Outlook.
+* An Office 365 account if you want to follow along with the example, which sends email by using Office 365 Outlook.
<a name="create-automation-task"></a>
When you change the underlying workflow for an automation task, your changes aff
1. To disable the workflow so that the task doesn't continue running, see [Manage logic apps in the Azure portal](../logic-apps/manage-logic-apps-with-azure-portal.md).
+<a name="create-automation-template"></a>
+
+## Create automation task template from workflow
+
+You can create your own automation task template by using any Consumption logic app workflow that starts with a recurring or event-based trigger, not an HTTP-based trigger or HTTP-based webhook trigger. For this task, you'll need the following items:
+
+* A [GitHub](https://github.com) account
+
+* Your forked version of the [Azure automation task templates GitHub repository](https://github.com/Azure/automation-task-template/tree/master/templates).
+
+ For more information about forks and creating a fork, review the following GitHub documentation:
+
+ * [About forks](https://docs.github.com/pull-requests/collaborating-with-pull-requests/working-with-forks/about-forks)
+ * [Fork a repo](https://docs.github.com/get-started/quickstart/fork-a-repo)
+
+* A working branch in your forked repository where you'll add your automation task template.
+
+ For more information about branches and creating a branch, review the following documentation:
+
+ * [About branches](https://docs.github.com/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-branches)
+ * [Create and delete branches](https://docs.github.com/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository)
+
+* Your choice of a web debugging tool. This example uses Fiddler 4, but you can try the free trial available for [Fiddler Everywhere](https://www.telerik.com/fiddler/fiddler-everywhere).
+
+To create the template and make the template available for use in Azure, here are the high-level steps:
+
+1. [Export the workflow](#export-workflow) to an automation task template.
+1. [Upload your template](#upload-template) to your working branch in your forked repository.
+1. [Test your template](#test-template) by using your web debugging tool or Fiddler.
+1. [Create a pull request (PR) for your working branch](#create-pull-request) against the default branch in the Azure automation task templates GitHub repository.
+
+After the Azure Logic Apps team reviews and approves your PR for merging to the default branch, your template is live and available to all Azure customers.
+
+<a name="export-workflow"></a>
+
+### Export workflow to automation task template
+
+1. In the [Azure portal](https://portal.azure.com), open the logic app workflow that you want to export. Make sure that the workflow starts with a recurring or event-based trigger, not an HTTP-based trigger or HTTP-based webhook trigger.
+
+1. On the logic app resource menu, select **Overview**.
+
+1. On the **Overview** pane toolbar, select **Export** > **Export to Automation Task**.
+
+ ![Screenshot showing the 'Overview' pane toolbar with 'Export' menu open and 'Export to Automation Task' selected.](./media/create-automation-tasks-azure-resources/export-automation-task.png)
+
+1. On the **Export to Automation Task** pane that opens, provide the following information:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Template Name** | Yes | <*template-name*> | The friendly display name for the automation task template. <p><p>**Important**: Make sure that you use a concise and easy-to-understand name, for example, **List stale virtual machines**. |
+ | **Template Description** | Yes | <*template-description*> | A description for the template's task or purpose |
+ | **Supported Resource Types** | No | Empty or <*supported-Azure-resource-type-list*> | The first-class Azure resource types where you want to make the template available. Sub-resource types are currently unsupported. To include all first-class Azure resource types, leave this property empty. To specify multiple resource types, separate each name with a comma and use the following syntax: <p><p>**Microsoft.<*service-provider*>/<*entity*>** <p><p>For example, to make the template available for Azure resource groups, specify **Microsoft.Resources/resourceGroups**. For more information, review [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md). |
+ | **Unsupported Resource Types** | No | Empty or <*unsupported-Azure-resource-type-list*> | If any, the Azure resource types where you specifically don't want to make the template available. To specify multiple resource types, separate each name with a comma and use the following syntax: <p><p>**Microsoft.<*service-provider*>/<*entity*>** <p><p>For example, to make the template unavailable for Azure resource groups, specify **Microsoft.Resources/resourceGroups**. For more information, review [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md). |
+ | **Configure Parameters** | No | Varies | If your workflow includes cross-environment [parameter definitions](create-parameters-workflows.md), those parameters appear in this section for you to configure further. You can select whether each parameter value is provided either from the resource or the task creator. <p><p>- If you select **From Resource**, select a **Source Parameter** property value to use from that resource: <p>-- **Resource Name** <br>-- **Resource Type** <br>-- **Resource Id** <br>-- **Subscription Id** <br>-- **Resource Group** <br>-- **Resource Location**. <p><p>- If you select **User Provided**, select a **Template** format that determines how the task creator provides the parameter value: <p>-- **Default**: The parameter value is anything other than an interval, frequency, or time zone. <p>- Specify the parameter's display name, default value, and description. <p>- If the value is a timestamp (*hh:mm:ss*), set the **Format** property to **Time Format**. <p>- To mark the parameter as required, change the **Optional** to **Required**. <p>-- **Interval**: The parameter value is an interval, such as **1** or **12**. <p>-- **Frequency**: The parameter value is a frequency, such as **Hour**, **Day** or **Month**. <p>-- **Timezone**: The parameter value is a timezone, such as **(UTC-08:00) Pacific Time (US & Canada)**. |
+
+ The following example shows the properties for a sample automation task template that works only on an Azure resource group:
+
+ ![Screenshot showing the 'Export to Automation Task' pane with example properties for an automation task template.](./media/create-automation-tasks-azure-resources/export-template-properties.png)
+
+ In this example, the task's underlying workflow includes the following parameter definitions and specifies that these parameter values are provided by the task creator:
+
+ | Parameter | Description |
+ |--|-|
+ | **emailAddress** | Specifies the email address for where to send the report. This parameter uses the **Default** template, which lets you specify the parameter's information, the expected format, and whether the parameter is optional or not. For this example parameter, the expected format is **None**, and the parameter is **Required**. |
+ | **numberOf** | Specifies the maximum number of time units that a virtual machine can stay idle. This parameter uses the **Default** template. |
+ | **timeUnit** | Specifies the time unit to use for the parameter value. This parameter uses the **Frequency** template, which shows the time units that the task creator can select, for example, **Hour**, **Day**, or **Month**. |
+
+1. When you're done, select **Download Template**, and save the template using the **.json** file name extension. For a consistent template name, use only lowercase, hyphens between words, and the following syntax:
+
+ **<*action-verb*>-<*Azure-resource*>**
+
+ For example, based on the earlier example template name, you might name the template file as **list-stale-virtual-machines.json**.
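The naming convention above can be sketched as a small shell helper; the display name and resulting file name are just the example from this section:

```shell
# Derive the template file name from the example display name:
# lowercase the words, join them with hyphens, then append .json.
display_name="List stale virtual machines"
file_name="$(printf '%s' "$display_name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-').json"
echo "$file_name"   # list-stale-virtual-machines.json
```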
+
+<a name="upload-template"></a>
+
+### Upload template to GitHub
+
+1. Go to [GitHub](https://github.com), and sign in with your GitHub account.
+
+1. Go to the [Azure automation task templates GitHub repository](https://github.com/Azure/automation-task-template/tree/master/templates), which takes you to the default branch in the repository.
+
+1. From the branch list, switch to your working branch.
+
+1. Above the files list, select **Add file** > **Upload files**.
+
+1. Either drag your template file to the specified area on the page, or select **choose your files**.
+
+1. After you add your template, in the same folder, open the **manifest.json** file, and add an entry for your **<*template-name*>.json** file.
+
+<a name="test-template"></a>
+
+### Test your template
+
+You can use your favorite web debugging tool to test the template that you uploaded to your working branch. This example continues with Fiddler, which modifies web requests by using a script. If you use a different tool, use the equivalent steps and script for your tool.
+
+1. In the Fiddler script, find the `OnBeforeRequest()` function, and add the following code to that function:
+
+ ```javascript
+ static function OnBeforeRequest(oSession: Session)
+ {
+ if (oSession.url == "raw.githubusercontent.com/azure/automation-task-template/master/templates/manifest.json") {
+ oSession.url = "raw.githubusercontent.com/<GitHub-username>/automation-task-template/<working-branch>/templates/manifest.json";
+ }
+
+ if (oSession.url == "raw.githubusercontent.com/azure/automation-task-template/master/templates/<template-name>") {
+ oSession.url = "raw.githubusercontent.com/<GitHub-username>/automation-task-template/<working-branch>/templates/<template-name>";
+ }
+
+ {...}
+ }
+ ```
+
+ This code gets the **manifest.json** and **<*template-name*>.json** files from your forked repository, rather than from the main Azure GitHub repository.
+
+ So, based on the example, the file redirection code looks like the following version:
+
+ ```javascript
+ static function OnBeforeRequest(oSession: Session)
+ {
+ if (oSession.url == "raw.githubusercontent.com/azure/automation-task-template/master/templates/manifest.json") {
+ oSession.url = "raw.githubusercontent.com/sophowe/automation-task-template/upload-auto-template/templates/manifest.json";
+ }
+
+ if (oSession.url == "raw.githubusercontent.com/azure/automation-task-template/master/templates/list-stale-virtual-machines.json") {
+ oSession.url = "raw.githubusercontent.com/sophowe/automation-task-template/upload-auto-template/templates/list-stale-virtual-machines.json";
+ }
+
+ {...}
+ }
+ ```
+
+1. Before you run your test, make sure to close all browser windows, and clear your browser cache in Fiddler.
+
+1. Open a new browser window, and sign in to the [Azure portal](https://portal.azure.com).
+
+1. Open the Azure resource where you expect to find your automation task. Create an automation task with your exported template. Run the task.
+
+If your task runs successfully, continue by creating a pull request from your working branch to the default branch.
+
+<a name="create-pull-request"></a>
+
+### Create your pull request
+
+1. Under **Commit changes**, enter a concise but descriptive title for your update. You can provide more information in the description box.
+
+1. Select **Create a new branch for this commit and start a pull request**. At the prompt, provide a name for your working branch, for example:
+
+ `<your-GitHub-alias>-<automation-task-name>-template`
+
+1. When you're ready, select **Propose changes**. On the next page, select **Create pull request**.
+
+1. Provide a name and description for your pull request. In the lower-right corner, select **Create pull request**.
+
+1. Wait for the Azure Logic Apps team to review your pull request.
+ ## Provide feedback We'd like to hear from you! To report bugs, provide feedback, or ask questions about this preview capability, [contact the Azure Logic Apps team](mailto:logicappspm@microsoft.com).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Previously updated : 02/01/2022 Last updated : 03/01/2022
The following steps describe how this topology works:
- ```api.azureml.ms``` - ```notebooks.azure.net``` - ```instances.azureml.ms```
+ - ```aznbcontent.net```
**Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn``` - ```instances.azureml.cn```
+ - ```aznbcontent.net```
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net``` - ```instances.azureml.us```
+ - ```aznbcontent.net```
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
**Azure Public regions**: - ```api.azureml.ms``` - ```notebooks.azure.net```
- - ```instances.azureml.ms```
+ - ```instances.azureml.ms```
+ - ```aznbcontent.net```
**Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn``` - ```instances.azureml.cn```
+ - ```aznbcontent.net```
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net``` - ```instances.azureml.us```
+ - ```aznbcontent.net```
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics.md
Learn how to access analytic reports in Microsoft Partner Center to monitor sale
To access the Partner Center analytics tools, go to the **[Summary](https://go.microsoft.com/fwlink/?linkid=2165765)** dashboard.
->[!NOTE]
+> [!NOTE]
> For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml). ## Next steps
marketplace Marketplace Categories Industries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-categories-industries.md
Following are the categories and industries applicable to each online stores, by
| Dynamics 365 Operations Apps | | &#x2714; | &#x2714; | | Dynamics 365 Business Central | | &#x2714; | &#x2714; | | Power BI app | | &#x2714; | &#x2714; |
-|
## Applicable store by offer type
Following are the combinations of options applicable to each online stores:
| | | | &#x2714; | | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> | | | | | | &#x2714; | AppSource<sup>1</sup><br>Azure Marketplace<sup>1,2</sup> | | | | | | &#x2714; | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> |
-|
<sup>1</sup> Depending on category/subcategory and industry selection.<br> <sup>2</sup> Offers with private plans will be published to the Azure portal.<br>
Industry selection applies only for offers published to AppSource and Consulting
| Media and Communications | Media and Entertainment<br>Telecommunications | | Professional Services | Partner Professional Services<br>Legal<br>Architecture and Construction<br>Real Estate | | Distribution | Wholesale<br>Parcel and Package Shipping |
-| Hospitality and Travel | Travel & Transportation<br>Hotels and Leisure<br>Restaurants and Food Services |
-|
+| Hospitality and Travel | Travel & Transportation<br>Hotels and Leisure<br>Restaurants and Food Services |
## Applicable products
Select the applicable products your app works with for the offer to show up unde
## Next steps - To create an offer, sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer. If you haven't yet enrolled in Partner Center, [create an account](./create-account.md).-- For step-by-step instructions on publishing an offer, see the commercial marketplace [publishing guide by offer type](./publisher-guide-by-offer-type.md).
+- For step-by-step instructions on publishing an offer, see the commercial marketplace [publishing guide by offer type](./publisher-guide-by-offer-type.md).
media-services Account Add Account Storage How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-add-account-storage-how-to.md
+
+ Title: Add storage to a Media Services Account
+description: This article shows you how to add storage to a Media Services account.
+++++ Last updated : 03/08/2022++
+# Add storage to a Media Services Account
++
+<!-- NOTE: The following are in the includes folder and are reused in other How To articles. All task based content should be in the includes folder with the task- prefix prepended to the file name. -->
+
+This article shows you how to add storage to a Media Services account.
+
+## Methods
+
+You can use the following methods to add storage to a Media Services account.
+
+## CLI
+++
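As a sketch of the CLI method, the `az ams account storage add` command attaches a secondary storage account to a Media Services account; `myamsaccount`, `myresourcegroup`, and `mysecondarystorage` are placeholder names:

```azurecli
# Attach a secondary storage account (placeholder names).
# --id also accepts the full storage account resource ID.
az ams account storage add \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --id mysecondarystorage
```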
media-services Account Create How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-create-how-to.md
You can use either the Azure portal or the CLI to create a Media Services accoun
[!INCLUDE [Create a Media Services account with CLI](./includes/task-create-media-services-account-cli.md)]
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/mediaservices/create-or-update).
+
media-services Account Delete How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-delete-how-to.md
You can find the Media Services accounts in the portal by navigating to your sub
## [CLI](#tab/cli/) +
+## [REST](#tab/rest/)
+
+See the Media Service [REST API](/rest/api/media/mediaservices/delete).
media-services Account List Assets How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-list-assets-how-to.md
+
+ Title: List the assets in a Media Services account
+description: This article shows you how to list the assets in a Media Services account.
+++++ Last updated : 03/08/2022+++
+# List the assets in a Media Services account
++
+This article shows you how to list the assets in a Media Services account.
+
+## Methods
+
+You can use the following methods to list the assets in a Media Services account.
+
+## [CLI](#tab/cli/)
++
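As a hedged sketch of the CLI method, the `az ams asset list` command returns the assets in an account; the account and resource group names below are placeholders:

```azurecli
# List all assets in the Media Services account (placeholder names).
az ams asset list \
  --account-name myamsaccount \
  --resource-group myresourcegroup
```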
+## [REST](#tab/rest/)
+
media-services Account List Transforms How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-list-transforms-how-to.md
+
+ Title: List the transforms in a Media Services account
+description: This article shows you how to list the transforms in a Media Services account.
+++++ Last updated : 03/08/2022+++
+# List the transforms in a Media Services account
++
+This article shows you how to list the transforms in a Media Services account.
+
+## Methods
+
+You can use the following methods to list the transforms in a Media Services account.
+
+## [CLI](#tab/cli/)
++
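One way to sketch the CLI method is with `az ams transform list`; the names below are placeholders:

```azurecli
# List all transforms in the Media Services account (placeholder names).
az ams transform list \
  --account-name myamsaccount \
  --resource-group myresourcegroup
```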
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/transforms/list).
++
media-services Account Remove Account Storage How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-remove-account-storage-how-to.md
+
+ Title: Remove a storage account from a Media Services account
+description: This article shows you how to remove a storage account from a Media Services account
+++++ Last updated : 03/08/2022+++
+# Remove a storage account from a Media Services account
++
+This article shows you how to remove a storage account from a Media Services account.
+
+## Methods
+
+You can use the following methods to remove a storage account from a Media Services account.
+
+## CLI
+
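A sketch of the CLI method uses `az ams account storage remove`; all names below are placeholders:

```azurecli
# Detach a secondary storage account from the Media Services account.
az ams account storage remove \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --id mysecondarystorage
```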
media-services Account Set Account Encryption Customer Managed Key How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-set-account-encryption-customer-managed-key-how-to.md
+
+ Title: Set the Media Services account encryption with customer managed keys
+description: This article shows you how to set the Media Services account encryption with customer managed keys.
+++++ Last updated : 03/08/2022+++
+# Set the Media Services account encryption with customer managed keys
++
+This article shows you how to set the Media Services account encryption with customer managed keys.
+
+## Methods
+
+You can use the following methods to set the Media Services account encryption with customer managed keys.
+
+## CLI
+
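As a sketch, the `az ams account encryption set` command switches the account to a customer-managed key; the account names and Key Vault key URL below are placeholders:

```azurecli
# Use a customer-managed key from Azure Key Vault (placeholder names and key URL).
az ams account encryption set \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --key-type CustomerKey \
  --key-identifier https://mykeyvault.vault.azure.net/keys/mykey
```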
media-services Account Set Account Encryption System Managed Key How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-set-account-encryption-system-managed-key-how-to.md
+
+ Title: Set the Media Services account encryption with system managed keys
+description: This article shows you how to set the Media Services account encryption with system managed keys.
+++++ Last updated : 03/08/2022+++
+# Set the Media Services account encryption with system managed keys
++
+This article shows you how to set the Media Services account encryption with system managed keys.
+
+## Methods
+
+You can use the following methods to set the Media Services account encryption with system managed keys.
+
+## CLI
+
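A sketch of the CLI method again uses `az ams account encryption set`; the `SystemKey` value and the names below are assumptions, so verify them with `az ams account encryption set --help`:

```azurecli
# Switch the account back to a system-managed key (placeholder names).
az ams account encryption set \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --key-type SystemKey
```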
media-services Account Set Storage Authentication How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-set-storage-authentication-how-to.md
+
+ Title: Set the Media Services storage authentication
+description: This article shows you how to set the Media Services storage authentication.
+++++ Last updated : 03/08/2022+++
+# Set the Media Services storage authentication
++
+This article shows you how to set the Media Services storage authentication.
+
+## Methods
+
+You can use the following methods to set the Media Services storage authentication.
+
+## CLI
+
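One possible sketch of the CLI method is `az ams account storage set-authentication`; the `--storage-auth` values (`System`, `ManagedIdentity`) and the names below are assumptions to verify with `--help`:

```azurecli
# Use managed identity authentication for the attached storage accounts.
az ams account storage set-authentication \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --storage-auth ManagedIdentity
```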
media-services Account Show Encryption How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-show-encryption-how-to.md
+
+ Title: Show the account encryption of a Media Services Account
+description: This article shows you how to show the account encryption of a Media Services Account.
+++++ Last updated : 03/08/2022++
+# Show the account encryption of a Media Services Account
++
+This article shows you how to view the account encryption settings of a Media Services account.
+
+<!-- NOTE: The following are in the includes folder and are reused in other How To articles. All task based content should be in the includes folder with the task- prefix prepended to the file name. -->
+
+## Methods
+
+You can use the following methods to view the account encryption of a Media Services account.
+
+## CLI
+++
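As a sketch of the CLI method, `az ams account encryption show` returns the account's encryption settings; the names below are placeholders:

```azurecli
# Show the encryption settings of the Media Services account (placeholder names).
az ams account encryption show \
  --account-name myamsaccount \
  --resource-group myresourcegroup
```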
media-services Account Update How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/account-update-how-to.md
The Media Services account can be updated in the portal using the Media Services
[!INCLUDE [update a media services account CLI](./includes/task-update-media-services-account-cli.md)]
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/mediaservices/update).
+
media-services Asset Create Asset Filter How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-create-asset-filter-how-to.md
+
+ Title: Create a Media Services asset filter
+description: This article shows you how to create a Media Services asset filter.
+++++ Last updated : 03/08/2022++
+# Create an asset filter
++
+This article shows you how to create a Media Services asset filter.
+
+<!-- NOTE: The following are in the includes folder and are reused in other How To articles. All task based content should be in the includes folder with the task- prefix prepended to the file name. -->
+
+## Methods
+
+You can use the following methods to create a Media Services asset filter.
+
+## [CLI](#tab/cli/)
++
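A sketch of the CLI method uses `az ams asset-filter create`; the names and the example first-quality bitrate below are placeholders:

```azurecli
# Create an asset filter with a first-quality bitrate (placeholder names and value).
az ams asset-filter create \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --asset-name myasset \
  --name myassetfilter \
  --first-quality 640000
```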
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/asset-filters/create-or-update).
++
media-services Asset Create Asset How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-create-asset-how-to.md
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+Use the following methods to create a Media Services asset.
+ ## [Portal](#tab/portal/) Creating assets in the portal is as simple as uploading a file.
Creating assets in the portal is as simple as uploading a file.
[!INCLUDE [Create an asset with CLI](./includes/task-create-asset-cli.md)] - ## [REST](#tab/rest/) ### Using REST ### Using cURL ## [.NET](#tab/net/) -
-## Next steps
-
-[Media Services Overview](media-services-overview.md)
media-services Asset Delete Asset Filter How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-delete-asset-filter-how-to.md
+
+ Title: Delete a Media Services asset filter
+description: This article shows you how to delete a Media Services asset filter.
+++++ Last updated : 03/01/2022+++
+# Delete an asset filter
++
+This article shows how to delete a Media Services asset filter.
+
+## Methods
+
+You can use the following methods to delete a Media Services asset filter.
+
+## [CLI](#tab/cli/)
++
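The CLI method can be sketched with `az ams asset-filter delete`; all names below are placeholders:

```azurecli
# Delete an asset filter from an asset (placeholder names).
az ams asset-filter delete \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --asset-name myasset \
  --name myassetfilter
```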
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/asset-filters/delete).
media-services Asset Delete Asset How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-delete-asset-how-to.md
+
+ Title: Delete a Media Services asset
+description: This article shows you how to delete a Media Services asset.
+++++ Last updated : 03/01/2022+++
+# Delete an asset
++
+This article shows how to delete a Media Services asset.
+
+## Methods
+
+You can use the following methods to delete a Media Services asset.
+
+## [CLI](#tab/cli/)
++
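As a sketch of the CLI method, `az ams asset delete` removes an asset; the names below are placeholders:

```azurecli
# Delete an asset from the Media Services account (placeholder names).
az ams asset delete \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name myasset
```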
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/assets/delete).
media-services Asset Get Asset Sas Urls How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-get-asset-sas-urls-how-to.md
+
+ Title: Get a Media Services asset's SAS URLs
+description: This article shows you how to get a Media Services asset's SAS URLs.
+++++ Last updated : 03/08/2022+++
+# Get an asset's SAS URLs
++
+This article shows you how to get a Media Services asset's SAS URLs.
+
+## Methods
+
+You can use the following methods to get a Media Services asset's SAS URLs.
+
+## [CLI](#tab/cli/)
++
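One sketch of the CLI method is `az ams asset get-sas-urls`, which returns SAS URLs for the asset's storage container; the names below are placeholders:

```azurecli
# Get read-only SAS URLs for the asset's container (placeholder names).
az ams asset get-sas-urls \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name myasset \
  --permissions Read
```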
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/assets/list-container-sas).
++
media-services Asset Get Encryption Key How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-get-encryption-key-how-to.md
+
+ Title: Get a Media Services asset encryption key
+description: This article shows you how to get a Media Services asset encryption key.
+++++ Last updated : 03/08/2022+++
+# Get an asset encryption key
++
+This article shows you how to get a Media Services asset encryption key.
+
+## Methods
+
+You can use the following methods to get a Media Services asset encryption key.
+
+## [CLI](#tab/cli/)
++
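The CLI method can be sketched with `az ams asset get-encryption-key`; the names below are placeholders:

```azurecli
# Get the storage encryption key of the asset (placeholder names).
az ams asset get-encryption-key \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name myasset
```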
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/assets/get-encryption-key).
++
media-services Asset List Asset Filters How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-list-asset-filters-how-to.md
+
+ Title: List the asset filters of a Media Services asset
+description: This article shows you how to list the asset filters of a Media Services asset.
+++++ Last updated : 03/08/2022+++
+# List the asset filters of an asset
++
+This article shows you how to list the asset filters of a Media Services asset.
+
+## Methods
+
+You can use the following methods to list the asset filters of a Media Services asset.
+
+## [CLI](#tab/cli/)
++
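As a sketch of the CLI method, `az ams asset-filter list` returns the filters defined on an asset; the names below are placeholders:

```azurecli
# List the asset filters of an asset (placeholder names).
az ams asset-filter list \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --asset-name myasset
```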
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/asset-filters/list).
++
media-services Asset List Asset Streaming Locators How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-list-asset-streaming-locators-how-to.md
+
+ Title: List the streaming locators of a Media Services asset
+description: This article shows you how to list the streaming locators of a Media Services asset.
+++++ Last updated : 03/08/2022+++
+# List the streaming locators of an asset
++
+This article shows you how to list the streaming locators of a Media Services asset.
+
+## Methods
+
+You can use the following methods to list the streaming locators of a Media Services asset.
+
+## [CLI](#tab/cli/)
++
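A sketch of the CLI method uses `az ams asset list-streaming-locators`; the names below are placeholders:

```azurecli
# List the streaming locators associated with an asset (placeholder names).
az ams asset list-streaming-locators \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name myasset
```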
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/assets/list-streaming-locators).
++
media-services Asset Update Asset Filter How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-update-asset-filter-how-to.md
+
+ Title: Update a Media Services asset filter
+description: This article shows you how to update a Media Services asset filter.
+++++ Last updated : 03/08/2022+++
+# Update an asset filter
++
+This article shows you how to update a Media Services asset filter.
+
+## Methods
+
+You can use the following methods to update a Media Services asset filter.
+
+## [CLI](#tab/cli/)
++
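The CLI method can be sketched with `az ams asset-filter update`; the names and the new first-quality bitrate below are placeholders:

```azurecli
# Update the first-quality bitrate of an asset filter (placeholder names and value).
az ams asset-filter update \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --asset-name myasset \
  --name myassetfilter \
  --first-quality 1280000
```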
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/asset-filters/update).
++
media-services Storage Sync Storage Keys How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/storage-sync-storage-keys-how-to.md
+
+ Title: Sync the storage keys of a Media Services storage account
+description: This article shows you how to sync the storage keys of a Media Services storage account.
+++++ Last updated : 03/08/2022+++
+# Sync storage keys
++
+This article shows you how to sync the storage keys of a Media Services storage account.
+
+## Methods
+
+You can use the following methods to sync the storage keys of a Media Services storage account.
+
+## [CLI](#tab/cli/)
++
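As a sketch of the CLI method, `az ams account storage sync-storage-keys` refreshes the keys that Media Services caches for an attached storage account; the names below are placeholders:

```azurecli
# Sync the keys of an attached storage account (placeholder names).
az ams account storage sync-storage-keys \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --id mysecondarystorage
```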
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/mediaservices/sync-storage-keys).
++
media-services Transform Add Custom Transform Output How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-add-custom-transform-output-how-to.md
+
+ Title: Add a custom transform output for a Media Services job
+description: This article shows you how to add a custom transform output for a Media Services job.
+++++ Last updated : 03/08/2022++
+# Add a custom transform output
++
+<!-- NOTE: The following are in the includes folder and are reused in other How To articles. All task based content should be in the includes folder with the task- prefix prepended to the file name. -->
+
+This article shows you how to add a custom transform output for a Media Services job.
+
+## Methods
+
+You can use the following methods to add a custom transform output for a Media Services job.
+
+## [CLI](#tab/cli/)
++
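One sketch of the CLI method is `az ams transform output add`, which appends an output to an existing transform; the names and the custom preset file path below are hypothetical:

```azurecli
# Add an output based on a custom preset JSON file to a transform
# (placeholder names; ./customPreset.json is a hypothetical file).
az ams transform output add \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name mytransform \
  --preset ./customPreset.json
```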
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/transforms).
++
media-services Transform Create Custom Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-custom-transform-how-to.md
+
+ Title: Create a Media Service custom transform
+description: This article shows you how to create a Media Services custom transform.
+++++ Last updated : 03/08/2022+++
+# Create a custom transform
++
+This article shows you how to create a Media Services custom transform.
+
+## Methods
+
+You can use the following methods to create a Media Services custom transform.
+
+## CLI
+
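A sketch of the CLI method uses `az ams transform create` with a path to a custom preset; the names and the preset file path below are hypothetical:

```azurecli
# Create a transform from a custom preset JSON file
# (placeholder names; ./customPreset.json is a hypothetical file).
az ams transform create \
  --account-name myamsaccount \
  --resource-group myresourcegroup \
  --name mycustomtransform \
  --preset ./customPreset.json
```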
media-services Transform Create Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-transform-how-to.md
The Azure CLI script in this article shows how to create a transform. Transforms
[Create a Media Services account](./account-create-how-to.md).
-## Code snippets
+## Methods
+
+You can use the following methods to create a transform.
## [Portal](#tab/portal/) [!INCLUDE [task-create-asset-portal](includes/task-create-transform-portal.md)]
+## [CLI](#tab/cli/)
+ ## [REST](#tab/rest/)
+See the Media Services [REST API](/rest/api/media/transforms/create-or-update).
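The linked Create or Update call is an ARM-style PUT. The sketch below only composes (and prints) a request of that shape; the subscription, resource group, account, transform name, preset, and api-version are all placeholders or assumptions — confirm them against the REST reference before use.

```shell
# Compose (not send) a Create or Update Transform request. All identifiers
# below are placeholders; the api-version is an assumption - verify it against
# the Media Services REST reference.
sub="00000000-0000-0000-0000-000000000000"
rg="myResourceGroup"; account="myMediaServices"; transform="myTransform"
url="https://management.azure.com/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Media/mediaServices/$account/transforms/$transform?api-version=2021-06-01"
body='{"properties": {"outputs": [{"preset": {"@odata.type": "#Microsoft.Media.BuiltInStandardEncoderPreset", "presetName": "AdaptiveStreaming"}}]}}'
echo "PUT $url"
```

The same request can be sent with `az rest` or any HTTP client once you substitute real values and attach an authorization token.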
media-services Transform Delete Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-delete-transform-how-to.md
+
+ Title: Delete a Media Services transform
+description: This article shows you how to delete a Media Services transform.
+++++ Last updated : 03/01/2022+++
+# Delete a transform
++
+This article shows you how to delete a Media Services transform.
+
+## Methods
+
+You can use the following methods to delete a Media Services transform.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/transforms/delete).
++
media-services Transform Update Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-update-transform-how-to.md
+
+ Title: Update a Media Services transform
+description: This article shows you how to update a Media Services transform.
+++++ Last updated : 03/08/2022+++
+# Update a transform
++
+This article shows you how to update a Media Services transform.
+
+## Methods
+
+You can use the following methods to update a Media Services transform.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+
+See the Media Services [REST API](/rest/api/media/transforms/update).
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
### Run the script
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
### Run the script
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up Physical appliance for Azure Government cloud with either public or private endpoint connectivity.
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud.
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up Hyper-V appliance for either Azure public or Azure Government cloud.
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` > 3. Download the latest version of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
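The `CertUtil` command above is the documented method on Windows. For reference, an equivalent check on a Linux or macOS workstation can be sketched as follows — the file path here is a stand-in for your actual download:

```shell
# Compare the computed SHA-256 of the downloaded installer against the
# published value. The placeholder file stands in for the real download.
expected="7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c"
printf 'placeholder' > /tmp/AzureMigrateInstaller.zip   # replace with the real file
actual=$(sha256sum /tmp/AzureMigrateInstaller.zip | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "Hash OK - safe to deploy"
else
  echo "Hash mismatch - download the installer again"
fi
```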
### 3. Run the Azure Migrate installer script
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud.
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other to deploy an appliance with the desired configuration.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Check that the zipped file is secure, before you deploy it.
**Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
- For Azure Government: **Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
Check that the zipped file is secure, before you deploy it.
**Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
- For Azure Government: **Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
Check that the zipped file is secure, before you deploy it.
**Scenario*** | **Download** | **SHA256** | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
### 3. Create an appliance
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
> [!NOTE] > The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
Before you deploy the OVA file, verify that the file is secure:
**Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
#### Create the appliance server
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
The SQL account must have access to the **master** database. This is because the
> [!Note] > All the steps below can be executed using the code provided [here](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql)
-1. Navigate to SQL Server Management Studio (SSMS), connect to the server, navigate to security, select and hold (or right-click) on login and create New login. Make sure to select SQL authentication.
+1. Open SQL Server Management Studio (SSMS), connect to the server, expand **Security**, select and hold (or right-click) **Logins**, and select **New Login**. Make sure to select **SQL authentication**.
:::image type="content" source="media/register-scan-on-premises-sql-server/create-new-login-user.png" alt-text="Create new login and user.":::
The SQL account must have access to the **master** database. This is because the
1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your SQL server login 1. Select **Create** to complete 1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to setup your scan
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan.
### Steps to register
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
Previously updated : 03/07/2022 Last updated : 03/08/2022
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
## Additional information-- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that will execute the access will need to provide a fully qualified name (i.e., a direct absolute path) to the data object. The following documents show examples of how to do that:
+- Policy statements set below container level on a Storage account are supported. If no access has been provided at the Storage account or container level, the app that requests the data must access the data object directly by providing its fully qualified name. If the app attempts to crawl down the hierarchy starting from the Storage account or container, and there is no access at that level, the request will fail. The following documents show examples of how to perform direct access. See also the blogs in the *Next steps* section of this tutorial.
- [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster) - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob) - Creating a policy at Storage account level will enable the Subjects to access system containers e.g., *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (i.e., at container or sub-container level).
This section contains a reference of how actions in Azure Purview data policies
## Next steps Check blog, demo and related tutorials
-* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)-
+* [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
+* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
When you create a key vault, it is automatically tied to the default Azure Activ
1. Use [az account show](/cli/azure/account#az_account_show) to get your subscription ID (in `bash`). ```azurecli
- subscriptionId=$(az account show --query id | sed -e 's/^"//' -e 's/"//' -e 's/\r$//')
+ subscriptionId=$(az account show --query id --output tsv)
``` 1. Use the [az graph](/cli/azure/graph) extension to list other Azure resources with known Azure AD directory dependencies (in `bash`).
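The switch to `--output tsv` works because tsv prints the bare value, while the default JSON output wraps strings in quotes — which is what the old `sed` chain existed to remove. A small sketch of the difference, simulated with a fixed placeholder ID:

```shell
# JSON output quotes strings; tsv prints the raw value. The ID below is a
# placeholder, not a real subscription.
json_value='"00000000-0000-0000-0000-000000000000"'   # what --output json prints
tsv_value='00000000-0000-0000-0000-000000000000'      # what --output tsv prints
stripped=$(printf '%s' "$json_value" | sed -e 's/^"//' -e 's/"$//')
[ "$stripped" = "$tsv_value" ] && echo "identical after stripping quotes"
```

With `--output tsv` there's nothing to strip, so the pipeline is simpler and not sensitive to quoting or carriage returns.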
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
+
+ Title: Create an index alias
+
+description: Create an alias to define a secondary name that can be used to refer to an index for querying, indexing, and other operations.
+++++ Last updated : 03/01/2022++
+# Create an index alias in Azure Cognitive Search
+
+> [!IMPORTANT]
+> Index aliases are currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In Azure Cognitive Search, an alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias.
+
+The main goal of index aliases is to make it easier to manage your production indexes. For example, if you need to make a change to your index definition, such as editing a field or adding a new analyzer, you'll have to create a new search index because all search indexes are immutable. This means you either need to [drop and rebuild your index](search-howto-reindex.md) or create a new index and then migrate your application over to that index.
+
+Instead of dropping and rebuilding your index, you can use index aliases. A typical workflow would be to:
+
+1. Create your search index
+1. Create an alias that maps to your search index
+1. Have your application send querying/indexing requests to the alias rather than the index name
+1. When you need to make a change to your index that requires a rebuild, create a new search index
+1. When your new index is ready, update the alias to map to the new index; requests will automatically be routed to it
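The steps above can be sketched as two `PUT` calls against the aliases endpoint. The commands below are printed rather than sent; the service URL and admin key are placeholders, and `hotels-v1`/`hotels-v2` are hypothetical index names:

```shell
# Sketch of the alias lifecycle: create the mapping, then repoint it after a
# rebuild. Fill in your own service URL, admin key, and index names.
service="https://<your-service>.search.windows.net"
apiver="api-version=2021-04-30-preview"
create='{"name": "my-alias", "indexes": ["hotels-v1"]}'   # step 2: map alias to current index
swap='{"name": "my-alias", "indexes": ["hotels-v2"]}'     # step 5: repoint after rebuild
for body in "$create" "$swap"; do
  printf "curl -X PUT '%s/aliases/my-alias?%s' -H 'Content-Type: application/json' -H 'api-key: <admin-key>' -d '%s'\n" "$service" "$apiver" "$body"
done
```

Because clients only ever reference `my-alias`, the swap in step 5 requires no application change.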
+
+## Create an alias
+
+You can create an alias using the preview REST API, the preview SDKs, or through [Visual Studio Code](search-get-started-vs-code.md). An alias consists of the `name` of the alias and the name of the search index that the alias is mapped to. Only one index name can be specified in the `indexes` array.
+
+### [**REST API**](#tab/rest)
+
+You can use the [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias) to create an index alias.
+
+```http
+POST /aliases?api-version=2021-04-30-preview
+{
+ "name": "my-alias",
+ "indexes": ["hotel-samples-index"]
+}
+```
+
+### [**Visual Studio Code**](#tab/vscode)
+
+To create an alias in Visual Studio Code:
+1. Follow the steps in the [Visual Studio Code Quickstart](search-get-started-vs-code.md) to install the [Azure Cognitive Search extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) and connect to your Azure Subscription.
+1. Navigate to your search service.
+1. Under your search service, right-click on **Aliases** and select **Create new alias**.
+1. Provide the name of your alias and the name of the search index you'd like to map it to and then save the file to create the alias.
+
+ ![Create an alias in VS Code](media/search-howto-alias/create-alias-vscode.png "Create an alias in VS Code")
++++
+## Send requests
+
+Once you've created your alias, you're ready to start using it. Aliases can be used for all [document operations](/rest/api/searchservice/document-operations) including querying, indexing, suggestions, and autocomplete.
+
+In the query below, instead of sending the request to `hotel-samples-index`, you can send it to `my-alias`, and it will be routed accordingly.
+
+```http
+POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
+{
+ "search": "pool spa +airport",
+ "searchMode": "any",
+ "queryType": "simple",
+ "select": "HotelId, HotelName, Category, Description",
+ "count": true
+}
+```
+
+If you expect that you'll need to update the index definition for your production indexes, use an alias rather than the index name for requests in your client-side application. Scenarios that require you to create a new index are outlined under these [rebuild conditions](search-howto-reindex.md#rebuild-conditions).
+
+> [!NOTE]
+> You can only use an alias with [document operations](/rest/api/searchservice/document-operations). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
+
+## Swap indexes
+
+Now, whenever you need to update your application to point to a new index, all you need to do is update the mapping in your alias. PUT is required for updates as described in [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias).
+
+```http
+PUT /aliases/my-alias?api-version=2021-04-30-preview
+{
+ "name": "my-alias",
+ "indexes": ["hotel-samples-index2"]
+}
+```
+After you make the update to the alias, requests will automatically start to be routed to the new index.
+
+> [!NOTE]
+> An update to an alias may take up to 10 seconds to propagate through the system, so you should wait at least 10 seconds before deleting the index that the alias was previously mapped to.
+
+## See also
+++ [Drop and rebuild an index in Azure Cognitive Search](search-howto-reindex.md)
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
Last updated 01/10/2022
# Drop and rebuild an index in Azure Cognitive Search
-This article explains how to drop and rebuild an Azure Cognitive Search index, the circumstances under which rebuilds are required, and recommendations for mitigating the impact of rebuilds on ongoing query requests.
+This article explains how to drop and rebuild an Azure Cognitive Search index, the circumstances under which rebuilds are required, and recommendations for mitigating the impact of rebuilds on ongoing query requests. If you frequently have to rebuild your search index, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
A search index is a collection of physical folders and field-based inverted indexes of your content, distributed in shards across the number of partitions allocated to your search index. In Azure Cognitive Search, you cannot drop and recreate individual fields. If you want to fully rebuild a field, all field storage must be deleted, recreated based on an existing or revised index schema, and then repopulated with data pushed to the index or pulled from external sources.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 02/02/2021 Last updated : 02/28/2022 # Service limits in Azure Cognitive Search
Maximum number of synonym maps varies by tier. Each rule can have up to 20 expan
| Maximum synonym maps |3 |3 |5 |10 |20 |20 |10 |10 |
| Maximum number of rules per map |5000 |20000 |20000 |20000 |20000 |20000 |20000 |20000 |
+## Index alias limits
+
+Maximum number of [index aliases](search-how-to-alias.md) varies by tier. In all tiers, the maximum number of aliases is the same as the maximum number of indexes.
+
+| Resource | Free | Basic | S1 | S2 | S3 | S3-HD |L1 | L2 |
+|--|--|--|--|--|--|--|--|--|
+| Maximum aliases |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
+ ## Data limits (AI enrichment) An [AI enrichment pipeline](cognitive-search-concept-intro.md) that makes calls to Azure Cognitive Services for Language resource for [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [entity linking](cognitive-search-skill-entity-linking-v3.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), [sentiment analysis](cognitive-search-skill-sentiment-v3.md), [language detection](cognitive-search-skill-language-detection.md), and [personal-information detection](cognitive-search-skill-pii-detection.md) is subject to data limits. The maximum size of a record should be 50,000 characters as measured by [`String.Length`](/dotnet/api/system.string.length). If you need to break up your data before sending it to the sentiment analyzer, use the [Text Split skill](cognitive-search-skill-textsplit.md).
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn what's new in the service. Bookmark this page to keep up to date with service updates. Check out the [**Preview feature list**](search-api-preview.md) for an itemized list of features that are not yet approved for production workloads.
+## February 2022
+
+|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
+|--|--|--|
+| [Index aliases](search-how-to-alias.md) | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. You can create an alias that maps to a search index and substitute the alias name in places where you would otherwise reference an index name. This gives you added flexibility if you ever need to change which index your application is pointing to. Instead of updating the references to the index name in your application, you can just update the mapping for your alias. | Public preview REST APIs (no portal support at this time).|
+ ## December 2021 |Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability | ||--||
-| [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | Semantic configurations are a multi-part specification of the fields used during semantic ranking, captions, and answers. This is a new addition to the 2021-04-30-Preview API, and are now required for semantic queries. | Public preview REST APIs (no portal support at this time).|
+| [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | Semantic configurations are a multi-part specification of the fields used during semantic ranking, captions, and answers. This is a new addition to the 2021-04-30-Preview API, and are now required for semantic queries. | Public preview in the portal and preview REST APIs.|
## November 2021
security Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/zero-trust.md
+
+ Title: Zero Trust security in Azure
+description: Learn about the guiding principles of Zero Trust and find resources to help you implement Zero Trust.
++++++ Last updated : 03/08/2022++
+# Zero Trust security
+
+Zero Trust is a new security model that assumes breach and verifies each request as though it originated from an uncontrolled network. In this article, you'll learn about the guiding principles of Zero Trust and find resources to help you implement Zero Trust.
+
+## Guiding principles of Zero Trust
+
+Today, organizations need a new security model that effectively adapts to the complexity of the modern environment, embraces the mobile workforce, and protects people, devices, applications, and data wherever they are located.
+
+To address this new world of computing, Microsoft highly recommends the Zero Trust security model, which is based on these guiding principles:
+
+- **Verify explicitly** - Always authenticate and authorize based on all available data points.
+- **Use least privilege access** - Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection.
+- **Assume breach** - Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.
+
+For more information about Zero Trust, see [Microsoft's Zero Trust Guidance Center](/security/zero-trust).
+
+## Zero Trust architecture
+
+A Zero Trust approach extends throughout the entire digital estate and serves as an integrated security philosophy and end-to-end strategy.
+
+This illustration provides a representation of the primary elements that contribute to Zero Trust.
+
+![Zero Trust architecture](./media/zero-trust/zero-trust-architecture.png)
+
+In the illustration:
+
+- Security policy enforcement is at the center of a Zero Trust architecture. This includes multifactor authentication with Conditional Access that takes into account user account risk, device status, and other criteria and policies that you set.
+- [Identities](/security/zero-trust/deploy/identity), [devices](/security/zero-trust/deploy/endpoints) (also called endpoints), [data](/security/zero-trust/deploy/data), [applications](/security/zero-trust/deploy/applications), [network](/security/zero-trust/deploy/networks), and other [infrastructure](/security/zero-trust/deploy/infrastructure) components are all configured with appropriate security. Policies that are configured for each of these components are coordinated with your overall Zero Trust strategy. For example, device policies determine the criteria for healthy devices and conditional access policies require healthy devices for access to specific apps and data.
+- Threat protection and intelligence monitors the environment, surfaces current risks, and takes automated action to remediate attacks.
+
+For more information about deploying technology components of the Zero Trust architecture, see Microsoft's [Deploying Zero Trust solutions](/security/zero-trust/deploy/overview).
+
+As an alternative to deployment guidance that provides configuration steps for each of the technology components protected by Zero Trust principles, [Rapid Modernization Plan (RaMP)](/security/zero-trust/zero-trust-ramp-overview) guidance is based on initiatives and gives you a set of deployment paths to more quickly implement key layers of protection.
+
+## From security perimeter to Zero Trust
+
+The traditional approach of access control for IT has been based on restricting access to a corporate network and then supplementing it with more controls as appropriate. This model restricts all resources to a corporate-owned network connection, and it has become too restrictive to meet the needs of a dynamic enterprise.
+
+![Shift from traditional network perimeter to Zero Trust approach](./media/zero-trust/zero-trust-shift.png)
+
+Organizations must embrace a Zero Trust approach to access control as they embrace remote work and use cloud technology to digitally transform their business model, customer engagement model, employee engagement, and empowerment model.
+
+Zero Trust principles help establish and continuously improve security assurances, while maintaining flexibility to keep pace with this new world. Most Zero Trust journeys start with access control and focus on identity as a preferred and primary control, while continuing to embrace network security technology as a key element. Network technology and the security perimeter tactic are still present in a modern access control model, but they aren't the dominant and preferred approach in a complete access control strategy.
+
+For more information on the Zero Trust transformation of access control, see the Cloud Adoption Framework's [access control](/azure/cloud-adoption-framework/secure/access-control).
+
+## Conditional access with Zero Trust
+
+The Microsoft approach to Zero Trust includes [Conditional Access](../../active-directory/conditional-access/overview.md) as the main policy engine. Conditional Access is used as the policy engine for a Zero Trust architecture that covers both policy definition and policy enforcement. Based on various signals or conditions, Conditional Access can block or give limited access to resources.
+
+To learn more about creating an access model based on Conditional Access that's aligned with the guiding principles of Zero Trust, see [Conditional Access for Zero Trust](/azure/architecture/guide/security/conditional-access-design).
+
+## Develop apps using Zero Trust principles
+Zero Trust is a security framework that does not rely on the implicit trust afforded to interactions behind a secure network perimeter. Instead, it uses the principles of explicit verification, least privileged access, and assuming breach to keep users and data secure while allowing for common scenarios like access to applications from outside the network perimeter.
+
+As a developer, it is essential that you use Zero Trust principles to keep users safe and data secure. App developers can improve app security, minimize the impact of breaches, and ensure that their applications meet their customers' security requirements by adopting Zero Trust principles.
+
+For more information on best practices key to keeping your apps secure, see:
+
+- Microsoft's [Building apps with a Zero Trust approach to identity](/security/zero-trust/develop/identity#top-recommendations-for-zero-trust)
+- [Build Zero Trust-ready apps using Microsoft identity platform features and tools](../../active-directory/develop/zero-trust-for-developers.md)
+
+## Zero Trust and Microsoft 365
+Microsoft 365 is built with many security and information protection capabilities to help you build Zero Trust into your environment. Many of the capabilities can be extended to protect access to other SaaS apps your organization uses and the data within these apps. See [deploying Zero Trust for Microsoft 365](/microsoft-365/security/microsoft-365-zero-trust#deploying-zero-trust-for-microsoft-365) to learn more.
+
+To learn about recommendations and core concepts for deploying secure email, docs, and apps policies and configurations for Zero Trust access to Microsoft 365, see [Zero Trust identity and device access configurations](/microsoft-365/security/office-365-security/microsoft-365-policies-configurations).
+
+## Next steps
+
+- To learn how to enhance your security solutions by integrating with Microsoft products, see [Integrate with Microsoft's Zero Trust solutions](/security/zero-trust/integrate/overview).
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Each schema field has a type. Some have built-in, Log Analytics types, such as `
|**FQDN** | String | A fully qualified domain name using a dot notation, for example, `docs.microsoft.com`. For more information, see [The Device entity](#the-device-entity). |
|<a name="hostname"></a>**Hostname** | String | A hostname which is not an FQDN, includes up to 63 characters including letters, numbers and hyphens. For more information, see [The Device entity](#the-device-entity).|
|<a name="domaintype"></a>**DomainType** | Enumerated | The type of domain stored in domain and FQDN fields. Supported values include `FQDN` and `Windows`. For more information, see [The Device entity](#the-device-entity). |
-|<a name="dvcidtype"></a>**DvcIdType** | Enumerated | The type of the device ID stored in DvcId fields. Supported values include `AzureResourceId`, `MDEid`, `MD4IoTid`, `VMConnectionId`, `AwsVpcId`, and `Other`. For more information, see [The Device entity](#the-device-entity). |
+|<a name="dvcidtype"></a>**DvcIdType** | Enumerated | The type of the device ID stored in DvcId fields. Supported values include `AzureResourceId`, `MDEid`, `MD4IoTid`, `VMConnectionId`, `AwsVpcId`, `VectraId`, and `Other`. For more information, see [The Device entity](#the-device-entity). |
|<a name="devicetype"></a>**DeviceType** | Enumerated | The type of the device stored in DeviceType fields. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other`. For more information, see [The Device entity](#the-device-entity). |
|<a name="username"></a>**Username** | String | A valid username in one of the supported [types](#usernametype). For more information, see [The User entity](#the-user-entity). |
|<a name="usernametype"></a>**UsernameType** | Enumerated | The type of username stored in username fields. Supported values include `UPN`, `Windows`, `DN`, `Simple`, and `Unknown`. For more information, see [The User entity](#the-user-entity). |
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
To deploy a large number of parsers, we recommend using parser ARM templates, as
1. Use the [ASIM Yaml to ARM template converter](https://aka.ms/ASimYaml2ARM) to convert your YAML file to an ARM template.
+1. If deploying an update, delete older versions of the functions using the portal or the [function delete PowerShell tool](https://aka.ms/ASimDelFunctionScript).
+1. Deploy your template using the [Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template) or [PowerShell](../azure-resource-manager/templates/deploy-powershell.md). You can also combine multiple templates into a single deployment process using [linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#linked-template).
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
The following filtering parameters are available:
| **srcipaddr_has_any_prefix** | dynamic | Filter only Web sessions for which the [source IP address field](network-normalization-schema.md#srcipaddr) prefix is in one of the listed values. Note that the list of values can include IP addresses as well as IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
| **url_has_any** | dynamic | Filter only Web sessions for which the [URL field](#url) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items.|
| **httpuseragent_has_any** | dynamic | Filter only web sessions for which the [user agent field](#httpuseragent) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items. |
-| **ventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. |
+| **eventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. |
| **eventresult** | string | Filter only network sessions with a specific **EventResult** value. |
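The prefix-matching semantics of `srcipaddr_has_any_prefix` can be illustrated with a short sketch (this is not the actual ASIM implementation, only a model of the documented behavior):

```python
# Minimal sketch of the srcipaddr_has_any_prefix semantics described above:
# a session passes the filter when its source IP address starts with any
# listed value, which can be a full address or a prefix ending in ".",
# for example "10.0.".
def src_ip_matches(src_ip: str, prefixes: list[str]) -> bool:
    return any(src_ip.startswith(p) for p in prefixes if p)

sessions = ["10.0.1.4", "10.1.2.3", "192.168.0.7"]
allowed = [s for s in sessions if src_ip_matches(s, ["10.0.", "192.168.0.7"])]
print(allowed)  # ['10.0.1.4', '192.168.0.7']
```

Note why the trailing `.` matters: the prefix `10.0.` matches `10.0.1.4` but not `10.1.2.3`, whereas the bare prefix `10.0` would also match `10.0x` addresses such as `10.01.2.3`.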
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
git clone https://github.com/Azure-Samples/serviceconnector-webapp-storageblob-d
and go to the root folder of the repository:

```Bash
-cd WebAppStorageMISample
+cd serviceconnector-webapp-storageblob-dotnet
```

## 3. Create the App Service app
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-security-certificate-management.md
First define a user assigned identity (default values are included as examples)
]}
```
-Then grant this identity access to the vault secrets - refer to the [official documentation](/rest/api/keyvault/vaults/updateaccesspolicy) for current information:
+Then grant this identity access to the vault secrets - refer to the [official documentation](/rest/api/keyvault/keyvault/vaults/update-access-policy) for current information:
```json
"resources": [{
service-fabric How To Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-grant-access-other-resources.md
And for system-assigned managed identities:
}
```
-For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
## Next steps * [Deploy a Service Fabric application with Managed Identity to a managed cluster](how-to-managed-cluster-application-managed-identity.md)
service-fabric How To Managed Cluster Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-grant-access-other-resources.md
And for system-assigned managed identities:
}
```
-For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
## Next steps
-* [Deploy an application with Managed Identity to a Service Fabric managed cluster](how-to-managed-cluster-application-managed-identity.md)
+* [Deploy an application with Managed Identity to a Service Fabric managed cluster](how-to-managed-cluster-application-managed-identity.md)
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-networking.md
Title: Azure Service Fabric networking best practices
description: Rules and design considerations for managing network connectivity using Azure Service Fabric. Previously updated : 10/29/2021 Last updated : 03/01/2022
Service Fabric cluster can be provisioned on [Linux with Accelerated Networking]
Accelerated Networking is supported for Azure Virtual Machine Series SKUs: D/DSv2, D/DSv3, E/ESv3, F/FS, FSv2, and Ms/Mms. Accelerated Networking was tested successfully using the Standard_DS8_v3 SKU on 01/23/2019 for a Service Fabric Windows Cluster, and using Standard_DS12_v2 on 01/29/2019 for a Service Fabric Linux Cluster. Please note that Accelerated Networking requires at least 4 vCPUs.
-To enable Accelerated Networking on an existing Service Fabric cluster, you need to first [Scale a Service Fabric cluster out by adding a Virtual Machine Scale Set](./virtual-machine-scale-set-scale-node-type-scale-out.md), to perform the following:
+To enable Accelerated Networking on an existing Service Fabric cluster, you need to first [Scale a Service Fabric cluster out by adding a Virtual Machine Scale Set](virtual-machine-scale-set-scale-node-type-scale-out.md), to perform the following:
1. Provision a NodeType with Accelerated Networking enabled 2. Migrate your services and their state to the provisioned NodeType with Accelerated Networking enabled
Scaling out infrastructure is required to enable Accelerated Networking on an ex
## Network Security Rules
-The network security group rules described below are the recommended minimum for a typical configuration. We also include what rules are mandatory for an operational cluster if optional rules are not desired. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is recommended for Windows Updates. If you use [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the ServiceTag [AzureUpdateDelivery](../virtual-network/service-tags-overview.md) is needed.
-
-The rules marked as mandatory are needed for a proper operational cluster. Described is the minimum for typical configurations. It also enables a complete security lockdown with network peering and jumpbox concepts like Azure Bastion. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommendation for Windows Updates, for the [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the Virtual Network Service Tag [AzureUpdateDelivery](../virtual-network/service-tags-overview.md) is needed.
+The rules described below are the recommended minimum for a typical configuration. We also indicate which rules are mandatory for an operational cluster if the optional rules are not desired. This configuration also allows a complete security lockdown with network peering and jumpbox concepts like Azure Bastion. Failure to open the mandatory ports or approve the IP/URL will prevent proper operation of the cluster and may not be supported.
### Inbound
-|Priority |Name |Port |Protocol |Source |Destination |Action | Mandatory
+|Priority |Name |Port |Protocol |Source |Destination |Action |Mandatory
| | | | | | | |
-|3900 |Azure portal |19080 |TCP |ServiceFabric |Any |Allow | No
-|3910 |Client API |19000 |TCP |Internet |Any |Allow | No
-|3920 |SFX + Client API |19080 |TCP |Internet |Any |Allow | Yes
-|3930 |Cluster |1025-1027 |TCP |VirtualNetwork |Any |Allow | Yes
-|3940 |Ephemeral |49152-65534 |TCP |VirtualNetwork |Any |Allow | Yes
-|3950 |Application |20000-30000 |TCP |VirtualNetwork |Any |Allow | Yes
-|3960 |RDP |3389-3488 |TCP |Internet |Any |Deny | No
-|3970 |SSH |22 |TCP |Internet |Any |Deny | No
-|3980 |Custom endpoint |443 |TCP |Internet |Any |Deny | No
+|3900 |Azure portal |19080 |TCP |ServiceFabric |Any |Allow |Yes
+|3910 |Client API |19000 |TCP |Internet |Any |Allow |No
+|3920 |SFX + Client API |19080 |TCP |Internet |Any |Allow |No
+|3930 |Cluster |1025-1027 |TCP |VirtualNetwork |Any |Allow |Yes
+|3940 |Ephemeral |49152-65534 |TCP |VirtualNetwork |Any |Allow |Yes
+|3950 |Application |20000-30000 |TCP |VirtualNetwork |Any |Allow |Yes
+|3960 |RDP |3389-3488 |TCP |Internet |Any |Deny |No
+|3970 |SSH |22 |TCP |Internet |Any |Deny |No
+|3980 |Custom endpoint |443 |TCP |Internet |Any |Deny |No
More information about the inbound security rules:
-* **Azure portal**. This port is used by the Service Fabric Resource Provider to query information about your cluster in order to display in the Azure Management Portal. If this port is not accessible from the Service Fabric Resource Provider then you will see a message such as 'Nodes Not Found' or 'UpgradeServiceNotReachable' in the Azure portal and your node and application list will appear empty. This means that if you wish to have visibility of your cluster in the Azure Management Portal then your load balancer must expose a public IP address and your NSG must allow incoming 19080 traffic.
+* **Azure portal**. This port is used by the Service Fabric Resource Provider to query information about your cluster for display in the Azure portal. If this port is not accessible from the Service Fabric Resource Provider, you will see a message such as 'Nodes Not Found' or 'UpgradeServiceNotReachable' in the Azure portal, and your node and application list will appear empty. This means that if you wish to have visibility of your cluster in the Azure portal, your load balancer must expose a public IP address and your NSG must allow incoming traffic on port 19080. This port is recommended for extended management operations from the Service Fabric Resource Provider to guarantee higher reliability.
-* **Client API**. The client connection endpoint for APIs used by PowerShell. Please open the port for the integration with Azure DevOps by using [AzureDevOps](../virtual-network/service-tags-overview.md) as Virtual Network Service Tag.
+* **Client API**. The client connection endpoint for APIs used by PowerShell.
-* **SFX + Client API**. This port is used by Service Fabric Explorer to browse and manage your cluster. In the same way it's used by most common APIs like REST/PowerShell (Microsoft.ServiceFabric.PowerShell.Http)/CLI/.NET. This port is recommended for extended management operations from the Service Fabric Resource Provider to guarantee higher reliability. Please open the port for the integration with Azure API Management by using [ApiManagement](../virtual-network/service-tags-overview.md) as Virtual Network Service Tag.
+* **SFX + Client API**. This port is used by Service Fabric Explorer to browse and manage your cluster. It's also used by the most common APIs, such as REST, PowerShell (Microsoft.ServiceFabric.PowerShell.Http), CLI, and .NET.
-* **Cluster**. Used for inter-node communication; should never be blocked.
+* **Cluster**. Used for inter-node communication.
* **Ephemeral**. Service Fabric uses a part of these ports as application ports, and the remaining are available for the OS. It also maps this range to the existing range present in the OS, so for all purposes, you can use the ranges given in the sample here. Make sure that the difference between the start and the end ports is at least 255. You might run into conflicts if this difference is too low, because this range is shared with the OS. To see the configured dynamic port range, run *netsh int ipv4 show dynamic port tcp*. These ports aren't needed for Linux clusters.
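The port-range constraint above can be checked with a short sketch (an illustration of the documented guidance, not part of any Service Fabric tooling):

```python
# Minimal sketch of the ephemeral-port guidance above: the configured range
# must span at least 255 ports, because the range is shared with the OS.
# 49152-65534 is the range used in the sample NSG rule.
def ephemeral_range_ok(start: int, end: int, min_span: int = 255) -> bool:
    return 0 < start <= end <= 65535 and (end - start) >= min_span

print(ephemeral_range_ok(49152, 65534))  # True
print(ephemeral_range_ok(50000, 50100))  # False: spans only 100 ports
```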
More information about the inbound security rules:
### Outbound
-|Priority |Name |Port |Protocol |Source |Destination |Action | Mandatory
-| | | | | | | |
-|4010 |Resource Provider |443 |TCP |Any |ServiceFabric |Allow | Yes
-|4020 |Download Binaries |443 |TCP |Any |AzureFrontDoor.FirstParty |Allow | Yes
+|Priority |Name |Port |Protocol |Source |Destination |Action |Mandatory
+| | | | | | | |
+|4010 |Resource Provider |443 |TCP |Any |ServiceFabric |Allow |Yes
+|4020 |Download Binaries |443 |TCP |Any |AzureFrontDoor.FirstParty |Allow |Yes
More information about the outbound security rules:
Use Azure Firewall with [NSG flow log](../network-watcher/network-watcher-nsg-fl
> [!NOTE]
> Please note that the default network security rules should not be overwritten, as they ensure the communication between the nodes. See [Network Security Group - How it works](../virtual-network/network-security-group-how-it-works.md). As another example, outbound connectivity on port 80 is needed to do the Certificate Revocation List check.
+### Common scenarios needing additional rules
+
+All additional scenarios can be covered with [Azure Service Tags](../virtual-network/service-tags-overview.md).
+
+#### Azure DevOps
+
+The classic PowerShell tasks in Azure DevOps (Service Tag: AzureCloud) need Client API access to the cluster; examples are application deployments and operational tasks. This does not apply to an ARM-templates-only approach, including [ARM application resources](service-fabric-application-arm-resource.md).
+
+|Priority |Name |Port |Protocol |Source |Destination |Action |Direction
+| | | | | | | |
+|3915 |Azure DevOps |19000 |TCP |AzureCloud |Any |Allow |Inbound
+
+#### Updating Windows
+
+The best practice for patching the Windows operating system is to replace the OS disk through [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md); no additional rule is required.
+The [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) manages in-VM upgrades, where Windows Update applies operating system patches; this needs access to the Download Center (Service Tag: AzureUpdateDelivery) to download the update binaries.
+
+|Priority |Name |Port |Protocol |Source |Destination |Action |Direction
+| | | | | | | |
+|4015 |Windows Updates |443 |TCP |Any |AzureUpdateDelivery |Allow |Outbound
+
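In an ARM template, the outbound rule above could be expressed as an NSG security rule along these lines (the rule name is illustrative; the priority and service tag come from the table):

```json
{
  "name": "Windows-Updates",
  "properties": {
    "priority": 4015,
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "*",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "AzureUpdateDelivery",
    "destinationPortRange": "443"
  }
}
```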
+#### API Management
+
+The integration of Azure API Management (Service Tag: ApiManagement) needs Client API access to query endpoint information from the cluster.
+
+|Priority |Name |Port |Protocol |Source |Destination |Action |Direction
+| | | | | | | |
+|3920 |API Management |19080 |TCP |ApiManagement |Any |Allow |Inbound
+
## Application Networking
* To run Windows container workloads, use [open networking mode](service-fabric-networking-modes.md#set-up-open-networking-mode) to make service-to-service communication easier.
* Use a reverse proxy such as [Traefik](https://docs.traefik.io/v1.6/configuration/backends/servicefabric/) or the [Service Fabric reverse proxy](service-fabric-reverseproxy.md) to expose common application ports such as 80 or 443.
-* For Windows Containers hosted on air-gapped machines that can't pull base layers from Azure cloud storage, override the foreign layer behavior, by using the [--allow-nondistributable-artifacts](/virtualization/windowscontainers/about/faq#how-do-i-make-my-container-images-available-on-air-gapped-machines) flag in the Docker daemon.
+* For Windows Containers hosted on air-gapped machines that can't pull base layers from Azure cloud storage, override the foreign layer behavior, by using the [--allow-nondistributable-artifacts](https://docs.microsoft.com/virtualization/windowscontainers/about/faq#how-do-i-make-my-container-images-available-on-air-gapped-machines) flag in the Docker daemon.
## Next steps
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| 16.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| |||
-18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
+18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | 18.04 LTS | 
[9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic | 18.04 LTS |[9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic | |||
-20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
+20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure |
20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
Debian 10 | [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.110-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure </br> 4.12.14-16.68-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.43](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.59.49-default:3
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/local-development.md
Open a terminal to the root folder of your existing Azure Static Web Apps site.
1. Install the CLI.

    ```console
    - npm install -g @azure/static-web-apps-cli
    + npm install -g @azure/static-web-apps-cli azure-functions-core-tools
    ```

1. Build your app if required by your application.
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
# Authorize access to blobs using Azure Active Directory
-Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.
+Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service. Note that this is only supported for API versions 2017-11-09 and later. For more information, see [Versioning for the Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
Authorizing requests against Azure Storage with Azure AD provides superior security and ease of use over Shared Key authorization. Microsoft recommends using Azure AD authorization with your blob applications when possible to assure access with minimum required privileges.
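As a minimal sketch of this flow in Python, assuming the `azure-identity` and `azure-storage-blob` packages and a placeholder account name (`mystorageaccount`), a client can attach the Azure AD OAuth 2.0 token to each Blob service request:

```python
# Minimal sketch: Azure AD authorization for blob data via the Python SDK.
# Assumes the azure-identity and azure-storage-blob packages are installed;
# the account name is a placeholder.

def account_url(account_name: str) -> str:
    """Blob service endpoint for a storage account."""
    return f"https://{account_name}.blob.core.windows.net"

def make_blob_client(account_name: str):
    # Imports are local so account_url stays usable without the SDK installed.
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # DefaultAzureCredential obtains an OAuth 2.0 token from Azure AD; the
    # client sends it as a bearer token to authorize each request.
    return BlobServiceClient(account_url(account_name),
                             credential=DefaultAzureCredential())
```

The security principal behind `DefaultAzureCredential` (user, group, or service principal) still needs an appropriate Azure RBAC role, such as Storage Blob Data Reader, on the account or container.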
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be inst
$resourceGroupName = "<your-resource-group-name>"
$storageAccountName = "<your-storage-account-name>"
-# This command requires you to be logged into your Azure account, run Login-AzAccount if you haven't
-# already logged in.
+# This command requires you to be logged into your Azure account and set the subscription your storage account is under, run:
+# Connect-AzAccount -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
+# if you haven't already logged in.
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
# The ComputerName, or host, is <storage-account>.file.core.windows.net for Azure Public Regions.
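As a hedged, cross-platform stand-in for `Test-NetConnection`, a short Python sketch can probe TCP port 445 on the file endpoint; the account name below is a placeholder:

```python
# Sketch: probe TCP port 445 (SMB) on an Azure file share endpoint, as a
# cross-platform alternative to Test-NetConnection. Account name is a placeholder.
import socket

def file_endpoint(account_name: str) -> str:
    # The host is <storage-account>.file.core.windows.net for Azure public regions.
    return f"{account_name}.file.core.windows.net"

def can_reach(host: str, port: int = 445, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (requires outbound TCP 445 to be open on your network):
# can_reach(file_endpoint("mystorageaccount"))
```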
az storage account keys renew \
## Need help? Contact support.
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
stream-analytics Stream Analytics Quick Create Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-vs.md
This quickstart shows you how to create and run a Stream Analytics job using Azure Stream Analytics tools for Visual Studio. The example job reads streaming data from an IoT Hub device. You define a job that calculates the average temperature when over 27° and writes the resulting output events to a new file in blob storage.

> [!NOTE]
-> - We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./stream-analytics-quick-create-vs.md) for best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0) and it won't be improved going forward.
+> - We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0) and it won't be improved going forward.
> - Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.

## Before you begin
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
You can also add IP firewall rules to a Synapse workspace after the workspace is
You can connect to your Synapse workspace using Synapse Studio. You can also use SQL Server Management Studio (SSMS) to connect to the SQL resources (dedicated SQL pools and serverless SQL pool) in your workspace.
-Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1443 for Synapse Studio.
+Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1433 for Synapse Studio.
Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433.
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
documentationcenter: traffic-manager - tags: azure-infrastructure- ms.assetid: ms.devlang: azurecli na Previously updated : 04/26/2018 Last updated : 02/28/2022
This script creates a resource group, two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. Traffic Manager directs traffic to the application in one region as the primary region, and to the secondary region when the application in the primary region is unavailable. Before executing the script, you must change the MyWebApp, MyWebAppL1 and MyWebAppL2 values to unique values across Azure. After running the script, you can access the app in the primary region with the URL mywebapp.trafficmanager.net.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/traffic-manager/direct-traffic-for-increased-application-availability/direct-traffic-for-increased-application-availability.sh "Route traffic for high availability")]
+
+### Run the script
-## Clean up deployment
+## Clean up resources
-After the script sample has been run, the follow command can be used to remove the resource group, App Service app, and all related resources.
```azurecli
-az group delete --name myResourceGroup1 --yes
-az group delete --name myResourceGroup2 --yes
+az group delete --name $resourceGroup1
+az group delete --name $resourceGroup2
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, web app, traffic manager profile, and all related resources. Each command in the table links to command specific documentation.
This script uses the following commands to create a resource group, web app, tra
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
virtual-machine-scale-sets Cli Sample Attach Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-attach-disks.md
This script creates a virtual machine scale set and attaches and prepares data d
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Create Scale Set From Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-create-scale-set-from-custom-image.md
This script creates a virtual machine scale set that uses a custom VM image as t
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Create Simple Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-create-simple-scale-set.md
This script creates an Azure virtual machine scale set with an Ubuntu operating
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Enable Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-enable-autoscale.md
This script creates a virtual machine scale set running Ubuntu and uses host-bas
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Install Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-install-apps.md
This script creates a virtual machine scale set running Ubuntu and uses the Cust
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Single Availability Zone Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-single-availability-zone-scale-set.md
This script creates a virtual machine scale set running Ubuntu in a single Avail
### Run the script

## Clean up resources
virtual-machine-scale-sets Cli Sample Zone Redundant Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-zone-redundant-scale-set.md
This script creates a virtual machine scale set running Ubuntu across multiple A
### Run the script

## Clean up resources
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
To protect your virtual machine scale sets from datacenter-level failures, you c
## Availability considerations

When you deploy a regional (non-zonal) scale set into one or more zones as of API version *2017-12-01*, you have the following availability options:
+
- Max spreading (platformFaultDomainCount = 1)
- Static fixed spreading (platformFaultDomainCount = 5)
- Spreading aligned with storage disk fault domains (platformFaultDomainCount = 2 or 3)
When you deploy a scale set, you also have the option to deploy with a single [p
Finally, for scale sets deployed across multiple zones, you also have the option of choosing "best effort zone balance" or "strict zone balance". A scale set is considered "balanced" if each zone has the same number of VMs, or +\\- 1 VM, as all other zones in the scale set. For example:
-- A scale set with 2 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced. There is only one zone with a different VM count and it is only 1 less than the other zones.
+- A scale set with 2 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced. There is only one zone with a different VM count and it is only 1 less than the other zones.
- A scale set with 1 VM in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered unbalanced. Zone 1 has 2 fewer VMs than zones 2 and 3. It's possible that VMs in the scale set are successfully created, but extensions on those VMs fail to deploy. These VMs with extension failures are still counted when determining if a scale set is balanced. For instance, a scale set with 3 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced even if all extensions failed in zone 1 and all extensions succeeded in zones 2 and 3.
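The balance rule above reduces to a one-line check: no zone's VM count may differ from another's by more than one. A small illustrative helper (not an Azure SDK function):

```python
# Illustrative helper for the "balanced" rule: a scale set is balanced when
# every zone's VM count is within +/- 1 of every other zone's. Extension
# failures don't change this: those VMs still count toward their zone's total.

def is_balanced(vms_per_zone: list[int]) -> bool:
    return max(vms_per_zone) - min(vms_per_zone) <= 1

assert is_balanced([2, 3, 3])      # one zone differs by only 1: balanced
assert not is_balanced([1, 3, 3])  # zone 1 has 2 fewer VMs: unbalanced
```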
When you create a scale set in a single zone, you control which zone all those V
To use Availability Zones, your scale set must be created in a [supported Azure region](../availability-zones/az-region.md). You can create a scale set that uses Availability Zones with one of the following methods:
- [Azure portal](#use-the-azure-portal)
-- Azure CLI
+- [Azure CLI](#use-the-azure-cli)
- [Azure PowerShell](#use-azure-powershell)
- [Azure Resource Manager templates](#use-azure-resource-manager-templates)
The scale set and supporting resources, such as the Azure load balancer and publ
The process to create a scale set that uses an Availability Zone is the same as detailed in the [getting started article](quick-create-cli.md). To use Availability Zones, you must create your scale set in a supported Azure region.
-Add the `--zones` parameter to the [az vmss create](/cli/azure/vmss) command and specify which zone to use (such as zone *1*, *2*, or *3*). The following example creates a single-zone scale set named *myScaleSet* in zone *1*:
+Add the `--zones` parameter to the [az vmss create](/cli/azure/vmss) command and specify which zone to use (such as zone *1*, *2*, or *3*).
+
+### Single-zone scale set
+
+The following example creates a single-zone scale set named *myScaleSet* in zone *1*:
```azurecli
az vmss create \
az vmss create \
  --zones 1
```
-For a complete example of a single-zone scale set and network resources, see [this sample CLI script](https://github.com/Azure/azure-docs-cli-python-samples/blob/master/virtual-machine-scale-sets/create-single-availability-zone/create-single-availability-zone.sh)
+For a complete example of a single-zone scale set and network resources, see [this sample CLI script](scripts/cli-sample-single-availability-zone-scale-set.md#sample-script)
### Zone-redundant scale set
az vmss create \
  --zones 1 2 3
```
-It takes a few minutes to create and configure all the scale set resources and VMs in the zone(s) that you specify. For a complete example of a zone-redundant scale set and network resources, see [this sample CLI script](https://github.com/Azure/azure-docs-cli-python-samples/blob/master/virtual-machine-scale-sets/create-zone-redundant-scale-set/create-zone-redundant-scale-set.sh)
+It takes a few minutes to create and configure all the scale set resources and VMs in the zone(s) that you specify. For a complete example of a zone-redundant scale set and network resources, see [this sample CLI script](scripts/cli-sample-zone-redundant-scale-set.md#sample-script)
## Use Azure PowerShell

To use Availability Zones, you must create your scale set in a supported Azure region. Add the `-Zone` parameter to the [New-AzVmssConfig](/powershell/module/az.compute/new-azvmssconfig) command and specify which zone to use (such as zone *1*, *2*, or *3*).
+### Single-zone scale set
+
The following example creates a single-zone scale set named *myScaleSet* in *East US 2* zone *1*. The Azure network resources for virtual network, public IP address, and load balancer are automatically created. When prompted, provide your own desired administrative credentials for the VM instances in the scale set:

```powershell
New-AzVmss `
The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started article for [Linux](quick-create-template-linux.md) or [Windows](quick-create-template-windows.md). To use Availability Zones, you must create your scale set in a supported Azure region. Add the `zones` property to the *Microsoft.Compute/virtualMachineScaleSets* resource type in your template and specify which zone to use (such as zone *1*, *2*, or *3*).
+### Single-zone scale set
+
The following example creates a Linux single-zone scale set named *myScaleSet* in *East US 2* zone *1*:

```json
virtual-machines Oracle Oci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-oci-overview.md
Cross-cloud connectivity is limited to the following regions:
* Azure Germany West Central & OCI Germany Central (Frankfurt)
* Azure West US 3 & OCI US West (Phoenix)
* Azure Korea Central region & OCI South Korea Central (Seoul)
+* Azure Southeast Asia region & OCI Singapore (Singapore)
## Networking
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
The source IP address and port of each flow is SNAT'd to the public IP address 6
#### Source (SNAT) port reuse
-Azure provides ~64,000 SNAT ports per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports for a virtual machine to use. Each time a new connection is made to the same destination endpoint over the internet, a new source port is used. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet. NAT gateway will continue to select a new source port at random to go to the same destination endpoint until no more SNAT ports are available for use. If NAT gateway doesn't find any available SNAT ports, only then will it reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint.
+Azure provides ~64,000 SNAT ports per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports to make new outbound connections. If NAT gateway doesn't find any available SNAT ports, then it will reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet.
The following flow illustrates this concept with a VM flowing to destination IP 65.52.0.2 after flows 1 - 3 from the above tables have already taken place.
The following flow illustrates this concept with a VM flowing to destination IP
|::|::|::|
| 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-A NAT gateway will translate flow 4 to a source port that may have been recently used for a different destination endpoint. See [Scale NAT](#scale-nat) for more discussion on correctly sizing your IP address provisioning.
+A NAT gateway will likely translate flow 4 to a source port that may be used for other destinations as well. See [Scale NAT](#scale-nat) for more discussion on correctly sizing your IP address provisioning.
| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
|::|::|::|::|
SNAT provided by NAT is different from SNAT provided by a [load balancer](../../
- NAT gateway selects source ports at random for outbound traffic flow whereas Load Balancer selects ports sequentially.
-- NAT gateway doesn't reuse a SNAT port until no other SNAT ports are available to make new connections, whereas Load Balancer looks to select the lowest available SNAT port in sequential order.
+- NAT gateway reuses SNAT ports for connections to different destination endpoints if no other source ports are available, whereas Load Balancer looks to select the lowest available SNAT port in sequential order.
### On-demand
SNAT maps private addresses to one or more public IP addresses, rewriting the so
NAT gateway opportunistically reuses source (SNAT) ports. When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP addresses quantities accordingly.
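Under the worst-case assumption above (one SNAT port per concurrent flow, ~64,000 ports per public IP), sizing is a simple ceiling; the numbers here are illustrative:

```python
# Rough sizing sketch: with ~64,000 SNAT ports per public IP address and the
# worst-case assumption of one SNAT port per concurrent outbound flow, the
# number of public IPs to provision on the NAT gateway is a simple ceiling.
import math

PORTS_PER_PUBLIC_IP = 64000  # approximate inventory per public IP

def public_ips_needed(peak_concurrent_flows: int) -> int:
    return math.ceil(peak_concurrent_flows / PORTS_PER_PUBLIC_IP)

# e.g. a workload peaking at 100,000 concurrent outbound flows:
print(public_ips_needed(100_000))  # -> 2
```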
-SNAT ports set to different destinations will most likely be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
+SNAT ports to different destinations are most likely to be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
For a SNAT example, see [SNAT fundamentals](#source-network-address-translation).
virtual-wan Disaster Recovery Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/disaster-recovery-design.md
# Disaster recovery design
-Virtual WAN allows you to aggregate, connect, centrally manage, and secure all of your global deployments. Your global deployments could include any combinations of different branches, Point-of-Presence (PoP), private users, offices, Azure virtual networks, and other multi-cloud deployments. You can use SD-WAN, site-to-site VPN, point-to-site VPN, and ExpressRoute to connect your different sites to a virtual hub. If you have multiple virtual hubs, all the hubs would be connected in full mesh in a standard Virtual WAN deployment.
+Azure Virtual WAN helps you aggregate, connect, centrally manage, and secure all of your global deployments. Your global deployments could include any combination of branches, point of presence (PoP), private users, offices, Azure virtual networks, and other multiple-cloud deployments.
-In this article, let's look into how to architect different types of backend connectivity that Virtual WAN supports for disaster recovery.
+You can use SD-WAN, site-to-site VPN, point-to-site VPN, and Azure ExpressRoute to connect your sites to a virtual hub. If you have multiple virtual hubs, all the hubs are connected in full mesh in a standard Virtual WAN deployment.
-## Backend connectivity options of Virtual WAN
+In this article, let's look at how to architect various types of back-end connectivity that Virtual WAN supports for disaster recovery.
-Virtual WAN supports following backend connectivity options:
+## Back-end connectivity options for Virtual WAN
+
+Virtual WAN supports the following back-end connectivity options:
* Point-to-site or user VPN
* Site-to-site VPN
* ExpressRoute private peering
-For each of these connectivity options, Virtual WAN deploys separate set of gateway instances within a virtual hub.
+For each of these connectivity options, Virtual WAN deploys a separate set of gateway instances within a virtual hub.
+
+Virtual WAN offers a carrier-grade, high-availability network aggregation solution. For high availability, Virtual WAN instantiates multiple instances when each type of gateway is deployed in a Virtual WAN hub. To learn more about ExpressRoute high availability, see [Designing for high availability with ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md).
-Inherently, Virtual WAN is designed to offer carrier-grade high-available network aggregation solution. For high availability, Virtual WAN instantiates multiple instances when each of these different types of gateways is deployed with in a Virtual WAN hub. To learn more about ExpressRoute high availability, see [Designing for high availability with ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md).
+With the point-to-site VPN gateway, the minimum number of instances deployed is two. You choose the number of instances via *gateway scale units*. You choose gateway scale units according to the number of clients or users you intend to connect to the virtual hub. From the perspective of client connectivity, the point-to-site VPN gateway instances are hidden behind the fully qualified domain name (FQDN) of the gateway.
-With the point-to-site VPN gateway, the minimum number of instances deployed is two. With the point-to-site VPN gateway, you choose the number of instances via 'Gateway scale units'. You choose 'Gateway scale units' according to the number of clients or users you intend to connect to the virtual hub. From the client connectivity perspective, the point-to-site VPN gateway instances are hidden behind the Fully Qualified Domain Name (FQDN) of the gateway.
+For the site-to-site VPN gateway, two instances of the gateway are deployed within a virtual hub. Each gateway instance is deployed with its own set of public and private IP addresses.
-For the site-to-site VPN gateway, two instances of the gateway are deployed within a virtual hub. Each of the gateway instance is deployed with its own set of public and private IP addresses. The following screen capture shows the IP addresses associated with the two instances of an example site-to-site VPN gateway configuration. In other words, the two instances provide two independent tunnel endpoints for establishing site-to-site VPN connectivity from your branches. To maximize high-availability, you can create two tunnels terminating on the two different instances of the VPN gateway for each link from your branch sites.
+The following screen capture shows the IP addresses associated with the two instances of an example site-to-site VPN gateway configuration. In other words, the two instances provide two independent tunnel endpoints for establishing site-to-site VPN connectivity from your branches. To maximize high availability, you can create two tunnels that end on the two instances of the VPN gateway for each link from your branch sites.
-Maximizing the high-availability of your network architecture is a key first step for Business Continuity and Disaster Recovery (BCDR). In the rest of this article, as stated previously, let's go beyond high-availability and discuss how to architect your Virtual WAN connectivity network for BCDR.
+Maximizing the high availability of your network architecture is a key first step for business continuity and disaster recovery (BCDR). In the rest of this article, let's go beyond high availability and discuss how to architect your Virtual WAN connectivity network for BCDR.
## Need for disaster recovery design
-Disaster may strike at any time, anywhere. Disaster may occur in a cloud provider regions or network, within a service provider network, or within an on-premises network. Regional impact of a cloud or network service due to certain factors such as natural calamity, human errors, war, terrorism, misconfiguration are hard to rule-out. So for the continuity of your business-critical applications you need to have a disaster recovery design. For a comprehensive disaster recovery design, you need to identify all the dependencies that may possibly fail in your end-to-end communication path, and create non-overlapping redundancy for each of the dependency.
+Disaster can strike at any time, anywhere. Disasters can occur in a cloud provider's regions or network, within a service provider's network, or within an on-premises network. Regional impact of a cloud or network service due to factors such as natural calamity, human errors, war, terrorism, or misconfiguration is hard to rule out.
+
+For the continuity of your business-critical applications, you need to have a disaster recovery design. For a comprehensive disaster recovery design, you need to identify all the dependencies that can possibly fail in your end-to-end communication path, and then create non-overlapping redundancy for each dependency.
-Irrespective of whether you run your mission-critical applications in an Azure region, on-premises or anywhere else, you can use another Azure region as your failover site. The following articles addresses disaster recovery from applications and frontend access perspectives:
+Whether you run your mission-critical applications in an Azure region, on-premises, or anywhere else, you can use another Azure region as your failover site. The following articles address disaster recovery from the perspectives of applications and front-end access:
- [Enterprise-scale disaster recovery](https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/)
- [Disaster recovery with Azure Site Recovery](https://azure.microsoft.com/solutions/architecture/disaster-recovery-smb-azure-site-recovery/)

## Challenges of using redundant connectivity
-When you interconnect the same set of networks using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT, firewall) in the path, asymmetrical routing could block traffic flow. Typically, over private connectivity you won't have or come across stateful entities such as NAT or Firewalls. Therefore, asymmetrical routing over private connectivity doesn't necessarily block traffic flow.
+When you interconnect the same set of networks by using more than one connection, you introduce parallel paths between the networks. Parallel paths, when not properly architected, could lead to asymmetrical routing. If you have stateful entities (for example, NAT or firewalls) in a path, asymmetrical routing could block traffic flow.
-However, if you load balance traffic across geo-redundant parallel paths, you would experience inconsistent network performance because of the difference in physical path of the parallel connections. So we need to consider network traffic performance both during the steady state (non-failure state), and a failure state as part of our disaster recovery design.
+Over private connectivity, you typically won't have or come across stateful entities such as NAT or firewalls. Asymmetrical routing over private connectivity doesn't necessarily block traffic flow.
-## Access network redundancy
+However, if you load balance traffic across geo-redundant parallel paths, you'll experience inconsistent network performance because of the difference in the physical path of the parallel connections. So you need to consider network traffic performance during both the steady state (non-failure state) and a failure state as part of your disaster recovery design.
-Most SD-WAN services (managed solutions or otherwise) provide you network connectivity via multiple transport type (for example, Internet broadband, MPLS, LTE). To safeguard against transport network failures, choose connectivity over more than one transport network. For a home user scenario, you can consider using mobile network as a back-up for broadband network connectivity.
+## Redundant access networks
-If network connectivity over different transport type isn't possible, then choose network connectivity via more than one service provider. If you're getting connectivity via more than one service providers, ensure that the service providers maintain non-overlapping independent access networks.
+Most SD-WAN services (managed solutions or otherwise) provide network connectivity via multiple transport types (for example, internet broadband, MPLS, LTE). To safeguard against transport network failures, choose connectivity over more than one transport network. For a home user scenario, consider using a mobile network as a backup for broadband network connectivity.
+
+If network connectivity over various transport types isn't possible, choose network connectivity via more than one service provider. If you're getting connectivity via more than one service provider, ensure that the service providers maintain independent access networks that don't overlap.
## Point-to-site VPN considerations
-Point-to-site VPN establishes private connectivity between an end-device to a network. Following a network failure, the end-device would drop and attempt to re-estabilish the VPN tunnel. Therefore, for point-to-site VPN, your disaster recovery design should aim to minimize the recovery time following a failure. The following network redundancy would help minimize the recovery time. Depending on how critical the connections are you can choose some or all of these options.
+Point-to-site VPN establishes private connectivity between an end device and a network. After a network failure, the end device drops and tries to re-establish the VPN tunnel.
+
+For point-to-site VPN, your disaster recovery design should aim to minimize the recovery time after a failure. The following options for network redundancy can help minimize the recovery time. Depending on how critical the connections are, you can choose either or both of these options.
+
+- Redundant access networks (as discussed earlier)
+- Managing a redundant virtual hub for point-to-site VPN termination
-- Access network redundancy (discussed above).-- Managing redundant virtual hub for point-to-site VPN termination. When you have multiple virtual hub with point-to-site gateways, VWAN provides global profile listing all the point-to-site endpoints. With the global profile, your end-devices could connect to the closest available virtual hub that offers the best network performance. If all your Azure deployments are in a single region and the end devices that connects are in close proximity to the region, you can have redundant virtual hubs within the region. If your deployment and end-devices are spread across multiple regions, you can deploy virtual hub with point-to-site gateway in each of your selected region.
+When you have multiple virtual hubs with point-to-site gateways, Virtual WAN provides a global profile that lists all the point-to-site endpoints. With the global profile, your end devices can connect to the closest available virtual hub that offers the best network performance.
-The following diagram shows the concept of managing redundant virtual hub with their respective point-to-site gateway within a region.
+If all your Azure deployments are in a single region and the end devices that connect are in close proximity to the region, you can have redundant virtual hubs within the region. If your deployment and end devices are spread across multiple regions, you can deploy a virtual hub with a point-to-site gateway in each of your selected regions.
+The following diagram shows the concept of managing redundant virtual hubs with their respective point-to-site gateways within a region.
-In the above diagram, the solid green lines show the primary site-to-site VPN connections and the dotted yellow lines show the stand-by back-up connections. The VWAN point-to-site global profile select primary and back-up connections based on the network performance. See [Download a global profile for User VPN clients](global-hub-profile.md) for further information regarding global profile.
+
+In the diagram, the solid green lines show the primary site-to-site VPN connections. The dotted yellow lines show the standby backup connections. The Virtual WAN point-to-site global profile selects primary and backup connections based on the network performance. For more information about global profiles, see [Download a global profile for User VPN clients](global-hub-profile.md).
## Site-to-site VPN considerations
-Let's consider the example site-to-site VPN connection shown in the following diagram for our discussion. To establish a site-to-site VPN connection with high-available active-active tunnels, see [Tutorial: Create a Site-to-Site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md).
+Let's consider the example site-to-site VPN connection shown in the following diagram for our discussion. To establish a site-to-site VPN connection with high-availability active/active tunnels, see [Tutorial: Create a site-to-site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md).
> [!NOTE]
-> For easy understanding of the concepts discussed in the section, we are not repeating the discussion of the high-availability feature of site-to-site VPN gateway that lets you create two tunnels to two different endpoints for each VPN link you configure. However, while deploying any of the suggested architecture in the section, remember to configure two tunnels for each of the link you establish.
+> For easy understanding of the concepts discussed in this section, we're not repeating the discussion of the high-availability feature of site-to-site VPN gateways that lets you create two tunnels to two different endpoints for each VPN link that you configure. While you're deploying any of the suggested architectures in this section, remember to configure two tunnels for each link that you establish.
>
-### Multi-link topology
+### Multiple-link topology
+
+To protect against failures of VPN customer premises equipment (CPE) at a branch site, you can configure parallel VPN links to a VPN gateway from parallel CPE devices at the branch site. To protect against network failures of a last-mile service provider to the branch office, you can configure different VPN links over different service provider networks.
+
+The following diagram shows multiple VPN links originating from two CPEs of a branch site and ending on the same VPN gateway.
-To protect against failures of VPN Customer Premises Equipment (CPE) at a branch site, you can configure parallel VPN links to a VPN gateway from parallel CPE devices at the branch site. Further to protect against network failures of a last-mile service provider to the branch office, you can configure different VPN links over different service provider network. The following diagram shows multiple VPN links originating from two different CPEs of a branch site terminating on the same VPN-gateway.
+You can configure up to four links to a branch site from a virtual hub's VPN gateway. While you're configuring a link to a branch site, you can identify the service provider and the throughput speed associated with the link. When you configure parallel links between a branch site and a virtual hub, the VPN gateway will load balance traffic across the parallel links by default. The gateway load balances traffic according to equal-cost multi-path (ECMP) on a per-flow basis.
-You can configure up to four links to a branch site from a virtual hub VPN gateway. While configuring a link to a branch site, you can identify the service provider and the throughput speed associated with the link. When you configure parallel links between a branch site and a virtual hub, the VPN gateway by default would load balance traffic across the parallel links. The load balancing of traffic would be according to Equal-Cost Multi-Path (ECMP) on per-flow basis.
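The per-flow ECMP behavior described above can be sketched conceptually. This is a minimal illustration (not Azure's actual implementation): hashing a flow's 5-tuple pins every packet of that flow to one of the parallel links, while different flows spread across links.

```python
import hashlib

def pick_link(links, src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the flow's 5-tuple so every packet of the flow uses the same link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

# Hypothetical parallel links from two CPE devices over two providers.
links = ["link-provider-a", "link-provider-b"]

# Packets of the same flow always map to the same link, which avoids
# reordering; load balancing happens across *different* flows.
first = pick_link(links, "10.0.0.4", "10.1.0.8", 50000, 443, "tcp")
again = pick_link(links, "10.0.0.4", "10.1.0.8", 50000, 443, "tcp")
assert first == again
```

Because the hash is per flow rather than per packet, a single long-lived flow never exceeds the bandwidth of one link, which is one reason parallel links don't simply add up for a single transfer.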
+### Multiple-hub, multiple-link topology
-### Multi-hub multi-link topology
+A multiple-hub, multiple-link topology helps protect against failures of CPE devices and a service provider's network at the on-premises branch location. It also helps protect against any downtime of a virtual hub's VPN gateway. The following diagram shows the topology.
-Multi-link topology protects against CPE device failures and a service provider network failure at the on-premises branch location. Additionally, to protect against any downtime of a virtual hub VPN-gateway, multi-hub multi-link topology would help. The following diagram shows the topology:
+In this topology, latency over the connection between the hubs within an Azure region is insignificant. So you can use all the site-to-site VPN connections between the on-premises branch location and the two virtual hubs in active/active state by spreading the spoke virtual networks across the hubs.
-In the above topology, because intra-Azure-region latency over the connection between the hubs is insignificant, you can use all the site-to-site VPN connections between the on-premises and the two virtual hubs in active-active state by spreading the spoke VNets across the hubs. In the topology, by default, traffic between on-premises and a spoke VNET would traverse directly through the virtual hub to which the spoke VNET is connected during the steady-state and use another virtual hub as a backup only during a failure state. Traffic would traverse through the directly connected hub in the steady state, because the BGP routes advertised by the directly connected hub would have shorter AS-path compared to the backup hub.
+By default, traffic between the on-premises branch location and a spoke virtual network traverses directly through the virtual hub to which the spoke virtual network is connected during the steady state. The traffic uses another virtual hub as a backup only during a failure state. Traffic traverses through the directly connected hub in the steady state because the BGP routes advertised by the directly connected hub have a shorter autonomous systems (AS) path compared to the backup hub.
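The AS-path preference described above can be shown with a small sketch. This is an illustrative simplification of the BGP decision process (ASNs and hub names are hypothetical): the directly connected hub advertises a shorter AS path, so it wins during the steady state.

```python
def best_route(routes):
    """Prefer the route advertisement with the shortest AS path."""
    return min(routes, key=lambda r: len(r["as_path"]))

routes = [
    {"next_hop": "hub-1", "as_path": [65515]},         # directly connected hub
    {"next_hop": "hub-2", "as_path": [65520, 65515]},  # backup hub, longer path
]

# Steady state: traffic goes through the directly connected hub.
assert best_route(routes)["next_hop"] == "hub-1"
```

If hub-1 fails and withdraws its route, only the hub-2 advertisement remains, so traffic fails over to the backup hub automatically.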
-The multi-hub multi-link topology would protect and provide business continuity against most of the failure scenarios. However, if a catastrophic failure takes down the entire Azure region, you need 'multi-region multi-link topology' to withstand the failure.
+The multiple-hub, multiple-link topology provides protection and business continuity in most failure scenarios. But it's not sufficient if a catastrophic failure takes down the entire Azure region.
-### Multi-region multi-link topology
+### Multiple-region, multiple-link topology
-Multi-region multi-link topology protects against even a catastrophic failure of an entire region, in addition to the protections offered by the multi-hub multi-link topology that we previously discussed. The following diagram shows the multi-region multi-link topology.
+A multiple-region, multiple-link topology helps protect against a catastrophic failure of an entire region, along with providing the protections of a multiple-hub, multiple-link topology. The following diagram shows the multiple-region, multiple-link topology.
-From a traffic engineering point of view, you need to take into consideration one substantial difference between having redundant hubs within a region vs having the backup hub in a different region. The difference is the latency resulting from the physical distance between the primary and secondary regions. Therefore, you may want to deploy your steady-state service resources in the region closest to your branch/end-users and use the remote region purely for backup.
+From a traffic engineering point of view, you need to consider one substantial difference between having redundant hubs within a region and having the backup hub in a different region. The difference is the latency that results from the physical distance between the primary and secondary regions. You might want to deploy your steady-state service resources in the region closest to your branch and end users, and then use the remote region purely for backup.
-If your on-premises branch locations are spread around two or more Azure regions, the multi-region multi-link topology would be more effective in spreading the load and in gaining better network experience during the steady state. The following diagram shows multi-region multi-link topology with branches in different regions. In such scenario, the topology would additionally provide effective Business Continuity Disaster Recovery (BCDR).
+If your on-premises branch locations are spread around two or more Azure regions, the multiple-region, multiple-link topology would be more effective in spreading the load and in gaining better network experience during the steady state. The following diagram shows the multiple-region, multiple-link topology with branches in different regions. In such a scenario, the topology would provide effective BCDR.
## ExpressRoute considerations
-Disaster recovery considerations for ExpressRoute private peering are discussed in [Designing for disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md#small-to-medium-on-premises-network-considerations). As noted in the article, the concepts described in that article equally apply to ExpressRoute gateways created within a virtual hub. Using a redundant virtual hub within the region, as shown in the following diagram, is the only topology enhancement recommended for [Small to medium on-premises network considerations](../expressroute/index.yml).
+Disaster recovery considerations for ExpressRoute private peering are discussed in [Designing for disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md#small-to-medium-on-premises-network-considerations). The concepts described in that article equally apply to ExpressRoute gateways created within a virtual hub. Using a redundant virtual hub within the region, as shown in the following diagram, is the only topology enhancement that we recommend for [small to medium on-premises networks](../expressroute/index.yml).
-In the above diagram, the ExpressRoute 2 is terminated on a separate ExpressRoute gateway within a second virtual hub within the region.
+In the preceding diagram, the second ExpressRoute instance ends on a separate ExpressRoute gateway within a second virtual hub in the region.
## Next steps
-In this article, we discussed about Virtual WAN disaster recovery design. The following articles addresses disaster recovery from applications and frontend access perspectives:
+This article discussed design considerations for Virtual WAN disaster recovery. The following articles address disaster recovery from the perspectives of applications and front-end access:
- [Enterprise-scale disaster recovery](https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/)
- [Disaster recovery with Azure Site Recovery](https://azure.microsoft.com/solutions/architecture/disaster-recovery-smb-azure-site-recovery/)
-To create a point-to-site connectivity to Virtual WAN, see [Tutorial: Create a User VPN connection using Azure Virtual WAN](virtual-wan-point-to-site-portal.md). To create a site-to-site connectivity to Virtual WAN see [Tutorial: Create a Site-to-Site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md). To associate an ExpressRoute circuit to Virtual WAN, see [Tutorial: Create an ExpressRoute association using Azure Virtual WAN](virtual-wan-expressroute-portal.md).
+To create point-to-site connectivity to Virtual WAN, see [Tutorial: Create a User VPN connection using Azure Virtual WAN](virtual-wan-point-to-site-portal.md).
+
+To create site-to-site connectivity to Virtual WAN, see [Tutorial: Create a site-to-site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md).
+
+To associate an ExpressRoute circuit with Virtual WAN, see [Tutorial: Create an ExpressRoute association using Azure Virtual WAN](virtual-wan-expressroute-portal.md).
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
# Download a global or hub-based profile for User VPN clients
-Azure Virtual WAN offers two types of connectivity for remote users: Global and Hub-based. Use the following sections to learn about and download a profile.
+Azure Virtual WAN offers two types of connectivity for remote users: global and hub-based. Use the following sections to learn about profile types and how to download them.
> [!IMPORTANT]
-> RADIUS authentication supports only the Hub-based profile.
+> RADIUS authentication supports only the hub-based profile.
## Global profile
-The global profile associated with a User VPN Configuration points to a load balancer that includes all active User VPN hubs using that User VPN Configuration. A user connected to the global profile is directed to the hub that is closest to the user's geographic location. This type of connectivity is useful when users travel to different locations frequently.
+The global profile associated with a User VPN configuration points to a load balancer that includes all active User VPN hubs that are using that User VPN configuration. A user connected to the global profile is directed to the hub that's closest to the user's geographic location. This type of connectivity is useful when users travel to different locations frequently.
-For example, you can associate a VPN configuration to 2 different Virtual WAN hubs, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they will connect to the closest Virtual WAN hub based on their location.
+For example, you can associate a VPN configuration with two Virtual WAN hubs, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they'll connect to the closest Virtual WAN hub based on their location.
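The closest-hub behavior of the global profile can be sketched conceptually. This is not the service's actual selection logic (which is internal to Azure); it's a minimal illustration with hypothetical hub names and latency figures, showing the idea of picking the lowest-latency hub among those attached to the global profile.

```python
def closest_hub(hubs, latency_ms):
    """Pick the lowest-latency hub among those attached to the global profile."""
    attached = [h for h in hubs if h["attached"]]
    return min(attached, key=lambda h: latency_ms[h["name"]])

# Hypothetical hubs using the same User VPN configuration.
hubs = [
    {"name": "westus-hub", "attached": True},
    {"name": "southeastasia-hub", "attached": True},
]

# Hypothetical measured latencies from one user's location.
latency_ms = {"westus-hub": 35, "southeastasia-hub": 180}
assert closest_hub(hubs, latency_ms)["name"] == "westus-hub"
```

A hub that's excluded from the global profile (`attached` set to false) simply drops out of the candidate list, which matches the include/exclude behavior described later in this article.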
-To download the **global** profile:
+To download the global profile:
-1. Navigate to the virtual WAN.
-2. Click **User VPN configuration**.
-3. Highlight the configuration for which you want to download the profile.
-4. Click **Download virtual WAN user VPN profile**.
+1. Go to the virtual WAN.
+2. Select **User VPN configurations**.
+3. Select the configuration for which you want to download the profile.
+4. Select **Download virtual WAN user VPN profile**.
- ![Global profile](./media/global-hub-profile/global1.png)
+![Screenshot that shows selections for downloading a global profile.](./media/global-hub-profile/global1.png)
-### Include or exclude hub from global profile
+### Include or exclude a hub from a global profile
-By default, every hub using a specific User VPN Configuration is included in the corresponding global VPN profile. You may choose to exclude a hub from the global VPN profile, meaning a user will not be load-balanced to connect to that hub's gateway if they are using the global VPN profile.
+By default, every hub that uses a specific User VPN configuration is included in the corresponding global VPN profile. You can choose to exclude a hub from the global VPN profile. If you do, a user won't be load balanced to connect to that hub's gateway if they're using the global VPN profile.
To check whether or not the hub is included in the global VPN profile:
-1. Navigate to the hub
-1. Navigate to **User VPN (Point to site)** under **Connectivity** on the left-hand panel
-1. See **Gateway Attachment State** to determine if this hub is included in the global VPN profile. If the state is **attached**, then the hub is included in the global VPN profile. If the state is **detached**, then the hub is not included in the global VPN profile.
+1. Go to the hub.
+1. On the left panel, go to **User VPN (Point to site)** under **Connectivity**.
+1. See **Gateway attachment state** to determine if this hub is included in the global VPN profile. If the state is **attached**, the hub is included. If the state is **detached**, the hub is not included.
- :::image type="content" source="./media/global-hub-profile/attachment-state.png" alt-text="Screenshot showing attachment state of gateway."lightbox="./media/global-hub-profile/attachment-state.png":::
+ :::image type="content" source="./media/global-hub-profile/attachment-state.png" alt-text="Screenshot that shows the attachment state of a gateway." lightbox="./media/global-hub-profile/attachment-state.png":::
To include or exclude a specific hub from the global VPN profile:
-1. Click **Include/Exclude Gateway from Global Profile**
+1. Select **Include/Exclude Gateway from Global Profile**.
- :::image type="content" source="./media/global-hub-profile/include-exclude-1.png" alt-text="Screenshot showing how to include or exclude hub from profile." lightbox="./media/global-hub-profile/include-exclude-1.png":::
+ :::image type="content" source="./media/global-hub-profile/include-exclude-1.png" alt-text="Screenshot that shows the button for including or excluding a hub from a profile." lightbox="./media/global-hub-profile/include-exclude-1.png":::
-1. Click **Exclude** if you wish to remove this hub's gateway from the WAN Global User VPN Profile. Users who are using the Hub-level User VPN profile will still be able to connect to this gateway. Users who are using the WAN-level profile will not be able to connect to this gateway.
+1. Make one of the following choices:
-1. Click **Include** if you wish to include this hub's gateway in the Virtual WAN Global User VPN Profile. Users who are using this WAN-level profile will be able to connect to this gateway.
+ - Select **Exclude** if you want to remove this hub's gateway from the Virtual WAN global User VPN profile. Users who are using the hub-level User VPN profile will still be able to connect to this gateway. Users who are using the WAN-level profile will not be able to connect to this gateway.
+ - Select **Include** if you want to include this hub's gateway in the Virtual WAN global User VPN profile. Users who are using this WAN-level profile will be able to connect to this gateway.
- ![Hub profile 4](./media/global-hub-profile/include-exclude.png)
+ :::image type="content" source="./media/global-hub-profile/include-exclude.png" alt-text="Screenshot that shows the Exclude and Include buttons." lightbox="./media/global-hub-profile/include-exclude.png":::
## Hub-based profile
-The profile points to a single hub. The user can only connect to the particular hub using this profile. To download the **hub-based** profile:
+The profile points to a single hub. The user can connect to only the particular hub by using this profile. To download the hub-based profile:
-1. Navigate to the virtual WAN.
-2. Click **Hub** in the Overview page.
+1. Go to the virtual WAN.
+2. On the **Overview** page, select the hub.
- ![Hub profile 1](./media/global-hub-profile/hub1.png)
-3. Click **User VPN (Point to site)**.
-4. Click **Download virtual Hub User VPN profile**.
- :::image type="content" source="./media/global-hub-profile/hub2.png" alt-text="Screenshot showing how to download hub profile."lightbox="./media/global-hub-profile/hub2.png":::
+ ![Screenshot that shows selecting a hub.](./media/global-hub-profile/hub1.png)
+
+3. Select **User VPN (Point to site)**.
+4. Select **Download virtual Hub User VPN profile**.
+
+ :::image type="content" source="./media/global-hub-profile/hub2.png" alt-text="Screenshot that shows how to download a hub profile." lightbox="./media/global-hub-profile/hub2.png":::
-5. Check **EAPTLS**.
-6. Click **Generate and download profile**.
-
- ![Hub profile 3](./media/global-hub-profile/download.png)
+5. Select **EAPTLS** as the authentication type.
+6. Select **Generate and download profile**.
+ ![Screenshot that shows the button for generating and downloading a profile.](./media/global-hub-profile/download.png)
## Next steps
-To learn more about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md) page.
+To learn more about Virtual WAN, see the [Virtual WAN overview](virtual-wan-about.md) article.
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Diagnostics and logging configuration must be done from there accessing the **Di
Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
+### Virtual Hub Router
+
+The following metric is available for the Virtual Hub Router within a virtual hub:
+
+#### Virtual Hub Router Metric
+
+| Metric | Description|
+| | |
+| **Virtual Hub Data Processed** | Data, in bytes per second, for traffic that traverses the Virtual Hub Router in a given time period. Note that only the following flows use the Virtual Hub Router: VNet-to-VNet traffic (same hub and inter-hub) and inter-hub branch-to-VNet traffic via VPN or ExpressRoute gateways.|
+
+##### PowerShell Commands
+
+To query via PowerShell, use the following commands:
+
+**Step 1:**
+```azurepowershell-interactive
+$MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Average
+```
+
+**Step 2:**
+```azurepowershell-interactive
+$MetricInformation.Data
+```
+
+**Resource ID** - You can find your virtual hub's resource ID in the Azure portal. Go to the virtual hub page within Virtual WAN and select **JSON View** under **Essentials**.
+
+**Metric Name** - Refers to the name of the metric that you're querying, which in this case is 'VirtualHubDataProcessed'. This metric shows all the data that the Virtual Hub Router has processed for the hub in the selected time period.
+
+**Time Grain** - Refers to the frequency at which you want to see the aggregation. In the current command, you'll see an aggregated unit per 5 minutes. You can select 5M, 15M, 30M, 1H, 6H, 12H, or 1D.
+
+**Start Time and End Time** - These times are based on UTC, so ensure that you enter UTC values for these parameters. If you don't use these parameters, the past one hour's worth of data is shown by default.
+
+**Aggregation Types** - Average/Minimum/Maximum/Total
+* Average - Total average of bytes per second for the selected time period.
+* Minimum - Minimum bytes that were sent during the selected time grain period.
+* Maximum - Maximum bytes that were sent during the selected time grain period.
+* Total - Total bytes per second that were sent during the selected time grain period.
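The four aggregation types can be sketched as follows. This is a conceptual illustration (with hypothetical sample values), not how Azure Monitor computes metrics internally: within one time grain, each aggregation type reduces the per-minute samples in a different way.

```python
from statistics import mean

def aggregate(samples, aggregation_type):
    """Apply one of the four aggregation types to the samples in a time grain."""
    ops = {"Average": mean, "Minimum": min, "Maximum": max, "Total": sum}
    return ops[aggregation_type](samples)

# Five hypothetical 1-minute bytes/sec samples inside one 5M time grain.
grain = [1200, 1500, 900, 1800, 1100]

assert aggregate(grain, "Average") == 1300
assert aggregate(grain, "Minimum") == 900
assert aggregate(grain, "Maximum") == 1800
```

Picking a larger time grain (for example, 1H instead of 5M) reduces how many aggregated points you get back, while the aggregation type controls what each point represents.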
+
### Site-to-site VPN gateways

The following metrics are available for Azure site-to-site VPN gateways:
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
A connection from a branch or VPN device into Azure Virtual WAN is a VPN connect
An Azure Virtual WAN connection is composed of two tunnels. A Virtual WAN VPN gateway is deployed in a virtual hub in active-active mode, which implies that there are separate tunnels from on-premises devices terminating on separate instances. This is the recommendation for all users. However, a user might choose to have only one tunnel to one of the Virtual WAN VPN gateway instances. In that case, if the gateway instance is taken offline for any reason (for example, maintenance or patching), the tunnel will be moved to the secondary active instance and the user might experience a reconnect. BGP sessions won't move across instances.
+### What happens during a Gateway Reset in a Virtual WAN VPN Gateway?
+
+The Gateway Reset button should be used if your on-premises devices are all working as expected but site-to-site VPN connections in Azure are in a disconnected state. Virtual WAN VPN gateways are always deployed in an active-active state for high availability, which means there's always more than one instance deployed in a VPN gateway at any point in time. When you use the Gateway Reset button, it reboots the instances in the VPN gateway sequentially, so your connections aren't disrupted. There will be a brief gap as connections move from one instance to the other, but this gap should be less than a minute. Note that resetting the gateways won't change your public IP addresses.
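The sequential reset behavior can be sketched conceptually. This is an illustrative model (with hypothetical instance names), not the service's actual implementation: instances are rebooted one at a time, so at every step at least one other instance remains available to carry connections.

```python
def reset_gateway(instances, reboot):
    """Reboot gateway instances one at a time so another instance stays active."""
    for inst in instances:
        others = [i for i in instances if i != inst]
        # Active-active deployment guarantees a surviving instance.
        assert others, "an active-active gateway has more than one instance"
        reboot(inst)  # connections briefly move to the remaining instance(s)

rebooted = []
reset_gateway(["instance-0", "instance-1"], rebooted.append)
assert rebooted == ["instance-0", "instance-1"]
```

The brief per-instance gap in the sketch corresponds to the sub-minute reconnect window described above, and nothing in the procedure touches the gateway's public IP addresses.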
+ ### Can the on-premises VPN device connect to multiple hubs?

Yes. Traffic flow, when commencing, is from the on-premises device to the closest Microsoft network edge, and then to the virtual hub.
When you choose to deploy a security partner provider to protect Internet access
For more information regarding the available options third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
-### Why am I seeing a message and button called "Update router to latest software version" in portal?
-
-The Virtual WAN team has been working on upgrading virtual routers from their current cloud service infrastructure to Virtual Machine Scale Sets (VMSS) based deployments. This will enable the virtual hub router to now be availability zone aware and have enhanced scaling out capabilities during high CPU usage. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button.
-
-Please note that you'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNET connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new VMSS based virtual hub routers, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating.
- ## Next steps

* For more information about Virtual WAN, see [About Virtual WAN](virtual-wan-about.md).
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
Title: Web application firewall request size limits and exclusion lists in Azure Application Gateway - Azure portal
-description: This article provides information on Web Application Firewall request size limits and exclusion lists configuration in Application Gateway with the Azure portal.
+ Title: Web application firewall exclusion lists in Azure Application Gateway - Azure portal
+description: This article provides information on Web Application Firewall exclusion lists configuration in Application Gateway with the Azure portal.
Previously updated : 02/10/2022 Last updated : 03/08/2022
-# Web Application Firewall request size limits and exclusion lists
+# Web Application Firewall exclusion lists
-The Azure Application Gateway Web Application Firewall (WAF) provides protection for web applications. This article describes WAF request size limits and exclusion lists configuration. These settings are located in the WAF Policy associated to your Application Gateway. To learn more about WAF Policies, see [Azure Web Application Firewall on Azure Application Gateway](ag-overview.md) and [Create Web Application Firewall policies for Application Gateway](create-waf-policy-ag.md)
-
-## WAF exclusion lists
-
-![Request size limits](../media/application-gateway-waf-configuration/waf-policy.png)
+The Azure Application Gateway Web Application Firewall (WAF) provides protection for web applications. This article describes the configuration for WAF exclusion lists. These settings are located in the WAF policy associated with your Application Gateway. To learn more about WAF policies, see [Azure Web Application Firewall on Azure Application Gateway](ag-overview.md) and [Create Web Application Firewall policies for Application Gateway](create-waf-policy-ag.md).
Sometimes Web Application Firewall (WAF) might block a request that you want to allow for your application. WAF exclusion lists allow you to omit certain request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
For example, Active Directory inserts tokens that are used for authentication. W
Exclusion lists are global in scope.
+To set exclusion lists in the Azure portal, configure **Exclusions** in the WAF policy resource's **Policy settings** page:
++
+## Attributes
+ The following attributes can be added to exclusion lists by name. The values of the chosen field aren't evaluated against WAF rules, but their names still are (see Example 1 below, the value of the User-Agent header is excluded from WAF evaluation). The exclusion lists remove inspection of the field's value. * Request Headers
In all cases, matching is case insensitive and regular expressions aren't allowed
> [!NOTE] > For more information and troubleshooting help, see [WAF troubleshooting](web-application-firewall-troubleshoot.md).
-### Examples
+## Examples
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] The following examples demonstrate the use of exclusions.
-#### Example 1
+### Example 1
In this example, you want to exclude the user-agent header. The user-agent request header contains a characteristic string that allows the network protocol peers to identify the application type, operating system, software vendor, or software version of the requesting software user agent. For more information, see [User-Agent](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent).
$exclusion1 = New-AzApplicationGatewayFirewallExclusionConfig `
-SelectorMatchOperator "Equals" ` -Selector "User-Agent" ```
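Assembled in full, the exclusion for Example 1 might look as follows. The `RequestHeaderNames` match variable is an assumption here (it's consistent with excluding a request header, but verify it against your Az.Network module version):

```powershell
# Exclude the value of the User-Agent request header from WAF evaluation.
# The header name itself is still evaluated; only its value is skipped.
$exclusion1 = New-AzApplicationGatewayFirewallExclusionConfig `
    -MatchVariable "RequestHeaderNames" `
    -SelectorMatchOperator "Equals" `
    -Selector "User-Agent"
```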
-#### Example 2
+### Example 2
+This example excludes the value in the *user* parameter that is passed in the request via the URL. For example, say it's common in your environment for the user field to contain a string that the WAF views as malicious content, so it blocks it. You can exclude the user parameter in this case so that the WAF doesn't evaluate anything in the field.
$exclusion2 = New-AzApplicationGatewayFirewallExclusionConfig `
``` So if the URL `http://www.contoso.com/?user%281%29=fdafdasfda` is passed to the WAF, it won't evaluate the string **fdafdasfda**, but it will still evaluate the parameter name **user%281%29**.
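The exclusion for this scenario might be created with Azure PowerShell along these lines. The `RequestArgNames` match variable and the `user` selector value are assumptions based on the scenario described above; confirm the parameter names against your Az.Network module version:

```powershell
# Exclude the value of the "user" query-string parameter from WAF evaluation.
# The parameter name itself is still evaluated; only its value is skipped.
$exclusion2 = New-AzApplicationGatewayFirewallExclusionConfig `
    -MatchVariable "RequestArgNames" `
    -SelectorMatchOperator "Equals" `
    -Selector "user"
```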
-## WAF request size limits
---
-Web Application Firewall allows you to configure request size limits within lower and upper bounds. The following two size limits configurations are available:
--- The maximum request body size field is specified in kilobytes and controls overall request size limit excluding any file uploads. This field has a minimum value of 8 KB and a maximum value of 128 KB. The default value for request body size is 128 KB.-- The file upload limit field is specified in MB and it governs the maximum allowed file upload size. This field can have a minimum value of 1 MB and the following maximums:-
- - 100 MB for v1 Medium WAF gateways
- - 500 MB for v1 Large WAF gateways
- - 750 MB for v2 WAF gateways
-
-The default value for file upload limit is 100 MB.
-
-For CRS 3.2 (on the WAF_v2 SKU) and newer, these limits are as follows when using a WAF Policy for Application Gateway:
-
- - 2MB request body size limit
- - 4GB file upload limit
-
-WAF also offers a configurable knob to turn the request body inspection on or off. By default, the request body inspection is enabled. If the request body inspection is turned off, WAF doesn't evaluate the contents of HTTP message body. In such cases, WAF continues to enforce WAF rules on headers, cookies, and URI. If the request body inspection is turned off, then maximum request body size field isn't applicable and can't be set. Turning off the request body inspection allows for messages larger than 128 KB to be sent to WAF, but the message body isn't inspected for vulnerabilities.
- ## Next steps After you configure your WAF settings, you can learn how to view your WAF logs. For more information, see [Application Gateway diagnostics](../../application-gateway/application-gateway-diagnostics.md#diagnostic-logging).
web-application-firewall Application Gateway Waf Request Size Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-request-size-limits.md
+
+ Title: Web application firewall request size limits in Azure Application Gateway - Azure portal
+description: This article provides information on Web Application Firewall request size limits in Application Gateway with the Azure portal.
+++ Last updated : 03/08/2022+++++
+# Web Application Firewall request size limits
+
+Web Application Firewall allows you to configure request size limits within lower and upper bounds.
+
+Request size limits are global in scope.
+
+## Limits
+
+The following two size limits configurations are available:
+
+- The maximum request body size field is specified in kilobytes and controls the overall request size limit, excluding any file uploads. This field has a minimum value of 8 KB and a maximum value of 128 KB. The default value for request body size is 128 KB.
+- The file upload limit field is specified in MB and governs the maximum allowed file upload size. This field can have a minimum value of 1 MB and the following maximums:
+
+ - 100 MB for v1 Medium WAF gateways
+ - 500 MB for v1 Large WAF gateways
+ - 750 MB for v2 WAF gateways
+
+The default value for file upload limit is 100 MB.
+
+For CRS 3.2 (on the WAF_v2 SKU) and newer, these limits are as follows when using a WAF policy for Application Gateway:
+
+ - 2 MB request body size limit
+ - 4 GB file upload limit
+
+To set request size limits in the Azure portal, configure **Global parameters** in the WAF policy resource's **Policy settings** page:
++
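The same settings can be sketched in Azure PowerShell. This is a minimal sketch using the default values; the `New-AzApplicationGatewayFirewallPolicySetting` parameter names shown are assumptions drawn from the Az.Network module and should be verified against your module version:

```powershell
# Create WAF policy settings with explicit size limits (values shown are the defaults).
$policySetting = New-AzApplicationGatewayFirewallPolicySetting `
    -Mode "Prevention" `
    -State "Enabled" `
    -MaxRequestBodySizeInKb 128 `
    -MaxFileUploadInMb 100 `
    -RequestBodyCheck $true
```

The resulting settings object is then attached to the WAF policy when you create or update it.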
+## Request body inspection
+
+WAF offers a configuration setting to enable or disable request body inspection. By default, request body inspection is enabled. If request body inspection is disabled, WAF doesn't evaluate the contents of an HTTP message's body. In such cases, WAF continues to enforce WAF rules on headers, cookies, and URI. If request body inspection is turned off, then the maximum request body size field isn't applicable and can't be set.
+
+Turning off the request body inspection allows for messages larger than 128 KB to be sent to WAF, but the message body isn't inspected for vulnerabilities.
+
+When your WAF receives a request that's over the size limit, the behavior depends on the mode of your WAF and the version of the managed ruleset you use.
+- When your WAF policy is in prevention mode, WAF blocks requests that are over the size limit.
+- When your WAF policy is in detection mode:
+ - If you use CRS 3.2 or newer, WAF inspects the body up to the limit specified and ignores the rest.
+ - If you use CRS 3.1 or earlier, WAF inspects the entire message.
+
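The over-limit behavior described above can be summarized as a small decision sketch. This is illustrative pseudologic only, not product code; the function name and return strings are invented for the example:

```python
def over_limit_behavior(mode: str, crs_version: tuple) -> str:
    """Model what the WAF does with a request body over the size limit.

    mode: "Prevention" or "Detection"
    crs_version: managed ruleset version, e.g. (3, 2) for CRS 3.2
    """
    if mode == "Prevention":
        # Prevention mode blocks over-limit requests regardless of CRS version.
        return "block request"
    # Detection mode: behavior depends on the managed ruleset version.
    if crs_version >= (3, 2):
        return "inspect body up to the limit, ignore the rest"
    return "inspect the entire message"
```

For example, a Detection-mode policy on CRS 3.2 inspects only up to the configured limit, while the same policy on CRS 3.1 inspects the whole message.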
+## Next steps
+
+After you configure your WAF settings, you can learn how to view your WAF logs. For more information, see [Application Gateway diagnostics](../../application-gateway/application-gateway-diagnostics.md#diagnostic-logging).
web-application-firewall Web Application Firewall Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-troubleshoot.md
With this information, and the knowledge that rule 942130 is the one that matche
- Use an Exclusion List
- See [WAF configuration](application-gateway-waf-configuration.md#waf-exclusion-lists) for more information about exclusion lists.
+ See [WAF configuration](application-gateway-waf-configuration.md) for more information about exclusion lists.
- Disable the rule. ### Using an exclusion list
In this example, you can see that the field where the *1=1* string was entered i
:::image type="content" source="../media/web-application-firewall-troubleshoot/fiddler-1.png" alt-text="Screenshot of the Progress Telerik Fiddler Web Debugger. In the Raw tab, 1 = 1 is visible after the name text1." border="false":::
-This is a field you can exclude. To learn more about exclusion lists, See [Web application firewall request size limits and exclusion lists](application-gateway-waf-configuration.md#waf-exclusion-lists). You can exclude the evaluation in this case by configuring the following exclusion:
+This is a field you can exclude. To learn more about exclusion lists, see [Web application firewall exclusion lists](application-gateway-waf-configuration.md). You can exclude the evaluation in this case by configuring the following exclusion:
![WAF exclusion](../media/web-application-firewall-troubleshoot/waf-exclusion-02.png)