Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
app-service | Configure Language Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md | Title: Configure Linux Python apps description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Previously updated : 11/16/2022 Last updated : 08/29/2024 adobe-target: true # Configure a Linux Python app for Azure App Service -This article describes how [Azure App Service](overview.md) runs Python apps, how you can migrate existing apps to Azure, and how you can customize the behavior of App Service when needed. Python apps must be deployed with all the required [pip](https://pypi.org/project/pip/) modules. +This article describes how [Azure App Service](overview.md) runs Python apps, how you can migrate existing apps to Azure, and how you can customize the behavior of App Service when you need to. Python apps must be deployed with all the required [pip](https://pypi.org/project/pip/) modules. -The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy). +The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or when you deploy a [zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy). This guide provides key concepts and instructions for Python developers who use a built-in Linux container in App Service. If you've never used Azure App Service, first follow the [Python quickstart](quickstart-python.md) and [Python with PostgreSQL tutorial](tutorial-python-postgresql-app.md). You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI for configuration: -- **Azure portal**, use the app's **Settings** > **Configuration** page as described on [Configure an App Service app in the Azure portal](configure-common.md).+- **Azure portal**: use the app's **Settings** > **Configuration** page as described in [Configure an App Service app in the Azure portal](configure-common.md). - **Azure CLI**: you have two options. You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI ## Configure Python version -- **Azure portal**: use the **General settings** tab on the **Configuration** page as described on [Configure general settings](configure-common.md#configure-general-settings) for Linux containers.+- **Azure portal**: use the **General settings** tab on the **Configuration** page as described in [Configure general settings](configure-common.md#configure-general-settings) for Linux containers. - **Azure CLI**: You can run an unsupported version of Python by building your own container image ## Customize build automation -App Service's build system, called Oryx, performs the following steps when you deploy your app if the app setting `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `1`: +App Service's build system, called Oryx, performs the following steps when you deploy your app, if the app setting `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `1`: -1. Run a custom pre-build script if specified by the `PRE_BUILD_COMMAND` setting.
(The script can itself run other Python and Node.js scripts, pip and npm commands, and Node-based tools like yarn, for example, `yarn install` and `yarn build`.) +1. Run a custom pre-build script, if that step is specified by the `PRE_BUILD_COMMAND` setting. (The script can itself run other Python and Node.js scripts, pip and npm commands, and Node-based tools like yarn, for example, `yarn install` and `yarn build`.) 1. Run `pip install -r requirements.txt`. The *requirements.txt* file must be present in the project's root folder. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install." 1. If *manage.py* is found in the root of the repository (indicating a Django app), run *manage.py collectstatic*. However, if the `DISABLE_COLLECTSTATIC` setting is `true`, this step is skipped. -1. Run custom post-build script if specified by the `POST_BUILD_COMMAND` setting. (Again, the script can run other Python and Node.js scripts, pip and npm commands, and Node-based tools.) +1. Run a custom post-build script, if that step is specified by the `POST_BUILD_COMMAND` setting. (Again, the script can run other Python and Node.js scripts, pip and npm commands, and Node-based tools.) By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty. - To disable running collectstatic when building Django apps, set the `DISABLE_COLLECTSTATIC` setting to `true`. -- To run pre-build commands, set the `PRE_BUILD_COMMAND` setting to contain either a command, such as `echo Pre-build command`, or a path to a script file relative to your project root, such as `scripts/prebuild.sh`. All commands must use relative paths to the project root folder.+- To run pre-build commands, set the `PRE_BUILD_COMMAND` setting to contain either a command, such as `echo Pre-build command`, or a path to a script file, relative to your project root, such as `scripts/prebuild.sh`. All commands must use relative paths to the project root folder. -- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder.+- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file, relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder. For other settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md). For more information on how App Service runs and builds Python apps in Linux, se > [!NOTE] > The `PRE_BUILD_SCRIPT_PATH` and `POST_BUILD_SCRIPT_PATH` settings are identical to `PRE_BUILD_COMMAND` and `POST_BUILD_COMMAND` and are supported for legacy purposes. >-> A setting named `SCM_DO_BUILD_DURING_DEPLOYMENT`, if it contains `true` or 1, triggers an Oryx build happens during deployment. The setting is true when deploying using git, the Azure CLI command `az webapp up`, and Visual Studio Code. +> A setting named `SCM_DO_BUILD_DURING_DEPLOYMENT`, if it contains `true` or `1`, triggers an Oryx build during deployment. The setting is `true` when you deploy by using Git, the Azure CLI command `az webapp up`, and Visual Studio Code.
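As a minimal sketch of wiring up these settings from the command line (the resource group and app names are placeholders, and the script paths assume the `scripts/` layout mentioned above), you could use the `az webapp config appsettings set` command:

```bash
# Hypothetical names; replace with your own resource group and app.
# Turns on the Oryx build and registers optional pre-/post-build scripts.
az webapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=1 \
        PRE_BUILD_COMMAND="scripts/prebuild.sh" \
        POST_BUILD_COMMAND="scripts/postbuild.sh"
```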
> [!NOTE] > Always use relative paths in all pre- and post-build scripts because the build container in which Oryx runs is different from the runtime container in which the app runs. Never rely on the exact placement of your app project folder within the container (for example, that it's placed under *site/wwwroot*). Existing web applications can be redeployed to Azure as follows: 1. **Database**: If your app depends on a database, create the necessary resources on Azure as well. -1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can do it easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application. +1. **App service resources**: Create a resource group, App Service plan, and App Service web app to host your application. You can do this easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp#az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service plan, and web app to be more suitable for your application. -1. **Environment variables**: If your application requires any environment variables, create equivalent [App Service application settings](configure-common.md#configure-app-settings). These App Service settings appear to your code as environment variables, as described on [Access environment variables](#access-app-settings-as-environment-variables). +1. **Environment variables**: If your application requires any environment variables, create equivalent [App Service application settings](configure-common.md#configure-app-settings). These App Service settings appear to your code as environment variables, as described in [Access environment variables](#access-app-settings-as-environment-variables). - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings). - See [Production settings for Django apps](#production-settings-for-django-apps) for specific settings for typical Django apps. -1. **App startup**: Review the section, [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If needed, you can [Customize the startup command](#customize-startup-command). +1. **App startup**: Review the section [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If you need to, you can [Customize the startup command](#customize-startup-command). 1. **Continuous deployment**: Set up continuous deployment from GitHub Actions, Bitbucket, or Azure Repos as described in the article [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). 
Or, set up continuous deployment from Local Git as described in the article [Local Git deployment to Azure App Service](deploy-local-git.md). With these steps completed, you should be able to commit changes to your source ### Production settings for Django apps -For a production environment like Azure App Service, Django apps should follow Django's [Deployment checklist](https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/) (djangoproject.com). +For a production environment like Azure App Service, Django apps should follow Django's [Deployment checklist](https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/). The following table describes the production settings that are relevant to Azure. These settings are defined in the app's *setting.py* file. | Django setting | Instructions for Azure | | | |-| `SECRET_KEY` | Store the value in an App Service setting as described on [Access app settings as environment variables](#access-app-settings-as-environment-variables). You can alternately [store the value as a "secret" in Azure Key Vault](/azure/key-vault/secrets/quick-create-python). | +| `SECRET_KEY` | Store the value in an App Service setting as described on [Access app settings as environment variables](#access-app-settings-as-environment-variables). You can alternatively [store the value as a secret in Azure Key Vault](/azure/key-vault/secrets/quick-create-python). | | `DEBUG` | Create a `DEBUG` setting on App Service with the value 0 (false), then load the value as an environment variable. In your development environment, create a `DEBUG` environment variable with the value 1 (true). |-| `ALLOWED_HOSTS` | In production, Django requires that you include app's URL in the `ALLOWED_HOSTS` array of *settings.py*. You can retrieve this URL at runtime with the code, `os.environ['WEBSITE_HOSTNAME']`. App Service automatically sets the `WEBSITE_HOSTNAME` environment variable to the app's URL. | -| `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/4.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternately store the values (especially the username and password) as [Azure Key Vault secrets](/azure/key-vault/secrets/quick-create-python). | +| `ALLOWED_HOSTS` | In production, Django requires that you include the app's URL in the `ALLOWED_HOSTS` array of *settings.py*. You can retrieve this URL at runtime with the code `os.environ['WEBSITE_HOSTNAME']`. App Service automatically sets the `WEBSITE_HOSTNAME` environment variable to the app's URL. | +| `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/4.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternatively store the values (especially the username and password) as [Azure Key Vault secrets](/azure/key-vault/secrets/quick-create-python). | ## Serve static files for Django apps -If your Django web app includes static front-end files, first follow the instructions on [Managing static files](https://docs.djangoproject.com/en/4.1/howto/static-files/) in the Django documentation. +If your Django web app includes static front-end files, first follow the instructions on [managing static files](https://docs.djangoproject.com/en/4.1/howto/static-files/) in the Django documentation. 
For App Service, you then make the following modifications: For App Service, you then make the following modifications: STATICFILES_DIRS = [os.path.join(FRONTEND_DIR, 'build', 'static')] ``` - Here, `FRONTEND_DIR`, to build a path to where a build tool like yarn is run. You can again use an environment variable and App Setting as desired. + Here, `FRONTEND_DIR` is used to build a path to where a build tool like yarn is run. You can again use an environment variable and App Setting as desired. -1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve its own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable. +1. Add `whitenoise` to your *requirements.txt* file. [WhiteNoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve its own static files. WhiteNoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable. -1. In your *settings.py* file, add the following line for Whitenoise: +1. In your *settings.py* file, add the following line for WhiteNoise: ```python STATICFILES_STORAGE = ('whitenoise.storage.CompressedManifestStaticFilesStorage') ``` -1. Also modify the `MIDDLEWARE` and `INSTALLED_APPS` lists to include Whitenoise: +1. Also modify the `MIDDLEWARE` and `INSTALLED_APPS` lists to include WhiteNoise: ```python MIDDLEWARE = [ For App Service, you then make the following modifications: ## Serve static files for Flask apps -If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.2.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub. +If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.2.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub. To serve static files directly from a route on your application, you can use the [`send_from_directory`](https://flask.palletsprojects.com/en/2.2.x/api/#flask.send_from_directory) method: This container has the following characteristics: - Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the extra arguments `--bind=0.0.0.0 --timeout 600`. - You can provide configuration settings for Gunicorn by [customizing the startup command](#customize-startup-command). - - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org). + - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described in [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html). 
- By default, the base container image includes only the Flask web framework, but the container supports other frameworks that are WSGI-compliant and compatible with Python 3.6+, such as Django. This container has the following characteristics: During startup, the App Service on Linux container runs the following steps: -1. Use a [custom startup command](#customize-startup-command), if provided. -2. Check for the existence of a [Django app](#django-app), and launch Gunicorn for it if detected. -3. Check for the existence of a [Flask app](#flask-app), and launch Gunicorn for it if detected. -4. If no other app is found, start a default app that's built into the container. +1. Use a [custom startup command](#customize-startup-command), if one is provided. +1. Check for the existence of a [Django app](#django-app), and launch Gunicorn for it if one is detected. +1. Check for the existence of a [Flask app](#flask-app), and launch Gunicorn for it if one is detected. +1. If no other app is found, start a default app that's built into the container. The following sections provide extra details for each option. gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi If you want more specific control over the startup command, use a [custom startup command](#customize-startup-command), replace `<module>` with the name of folder that contains *wsgi.py*, and add a `--chdir` argument if that module isn't in the project root. For example, if your *wsgi.py* is located under *knboard/backend/config* from your project root, use the arguments `--chdir knboard/backend config.wsgi`. -To enable production logging, add the `--access-logfile` and `--error-logfile` parameters as shown in the examples for [custom startup commands](#customize-startup-command). +To enable production logging, add the `--access-logfile` and `--error-logfile` parameters as shown in the examples for [custom startup commands](#example-startup-commands). ### Flask app gunicorn --bind=0.0.0.0 --timeout 600 application:app gunicorn --bind=0.0.0.0 --timeout 600 app:app ``` -If your main app module is contained in a different file, use a different name for the app object, or you want to provide other arguments to Gunicorn, use a [custom startup command](#customize-startup-command). +If your main app module is contained in a different file, use a different name for the app object. If you want to provide other arguments to Gunicorn, use a [custom startup command](#customize-startup-command). ### Default behavior If you deployed code and still see the default app, see [Troubleshooting - App d :::image type="content" source="media/configure-language-python/default-python-app.png" alt-text="Screenshot of the default App Service on Linux web page." link="#app-doesnt-appear"::: -Again, if you expect to see a deployed app instead of the default app, see [Troubleshooting - App doesn't appear](#app-doesnt-appear). - ## Customize startup command You can control the container's startup behavior by providing either a custom startup command or multiple commands in a startup command file. A startup command file can use whatever name you choose, such as *startup.sh*, *startup.cmd*, *startup.txt*, and so on. To specify a startup command or command file: Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file. -App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. 
If you don't see the behavior you expect, check that your startup command or file is error-free, and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com). +App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free, and that a startup command file is deployed to App Service along with your app code. You can also check the [diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com). ### Example startup commands -- **Added Gunicorn arguments**: The following example adds the `--workers=4` to a Gunicorn command line for starting a Django app:+- **Added Gunicorn arguments**: The following example adds the `--workers=4` argument to a Gunicorn command line for starting a Django app: ```bash # <module-path> is the relative path to the folder that contains the module App Service ignores any errors that occur when processing a custom startup comma gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi ``` - For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you're using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers) + For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html). If you're using auto-scale rules to scale your web app up and down, you should also dynamically set the number of Gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of Gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers). - **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line: App Service ignores any errors that occur when processing a custom startup comma These logs will appear in the [App Service log stream](#access-diagnostic-logs). - For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging) (docs.gunicorn.org). + For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging). - **Custom Flask main module**: By default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. 
For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows: App Service ignores any errors that occur when processing a custom startup comma ## Access app settings as environment variables -App settings are values stored in the cloud specifically for your app as described on [Configure app settings](configure-common.md#configure-app-settings). These settings are available to your app code as environment variables and accessed using the standard [os.environ](https://docs.python.org/3/library/os.html#os.environ) pattern. +App settings are values stored in the cloud specifically for your app, as described in [Configure app settings](configure-common.md#configure-app-settings). These settings are available to your app code as environment variables and accessed using the standard [os.environ](https://docs.python.org/3/library/os.html#os.environ) pattern. -For example, if you've created app setting called `DATABASE_SERVER`, the following code retrieves that setting's value: +For example, if you've created an app setting called `DATABASE_SERVER`, the following code retrieves that setting's value: ```python db_server = os.environ['DATABASE_SERVER'] db_server = os.environ['DATABASE_SERVER'] ## Detect HTTPS session -In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) (wikipedia.org) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header. +In App Service, [TLS/SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header. ```python if 'X-Forwarded-Proto' in request.headers and request.headers['X-Forwarded-Proto'] == 'https': # Do something when HTTPS is used ``` -Popular web frameworks let you access the `X-Forwarded-*` information in your standard app pattern. For example in Django, you can use the [SECURE_PROXY_SSL_HEADER](https://docs.djangoproject.com/en/4.1/ref/settings/#secure-proxy-ssl-header) to tell Django to use the `X-Forwarded-Proto` header. +Popular web frameworks let you access the `X-Forwarded-*` information in your standard app pattern. For example, in Django you can use the [SECURE_PROXY_SSL_HEADER](https://docs.djangoproject.com/en/4.1/ref/settings/#secure-proxy-ssl-header) to tell Django to use the `X-Forwarded-Proto` header. ## Access diagnostic logs Use the following steps to access the deployment logs: 1. On the Azure portal for your web app, select **Deployment** > **Deployment Center** on the left menu. 1. On the **Logs** tab, select the **Commit ID** for the most recent commit.-1. On the **Log details** page that appears, select the **Show Logs...** link that appears next to "Running oryx build...". +1. On the **Log details** page that appears, select the **Show Logs** link that appears next to "Running oryx build...". -Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file isn't exactly named *requirements.txt* or doesn't appear in the root folder of your project. 
+Build issues such as incorrect dependencies in *requirements.txt* and errors in pre- or post-build scripts will appear in these logs. Errors also appear if your requirements file isn't named *requirements.txt* or doesn't appear in the root folder of your project. ## Open SSH session in browser [!INCLUDE [Open SSH session in browser](../../includes/app-service-web-ssh-connect-builtin-no-h.md)] -When you're successfully connected to the SSH session, you should see the message "SSH CONNECTION ESTABLISHED" at the bottom of the window. If you see errors such as "SSH_CONNECTION_CLOSED" or a message that the container is restarting, an error may be preventing the app container from starting. See [Troubleshooting](#troubleshooting) for steps to investigate possible issues. +When you're successfully connected to the SSH session, you should see the message "SSH CONNECTION ESTABLISHED" at the bottom of the window. If you see errors such as "SSH_CONNECTION_CLOSED" or a message that the container is restarting, an error might be preventing the app container from starting. See [Troubleshooting](#other-issues) for steps to investigate possible issues. ## URL rewrites -When deploying Python applications on Azure App Service for Linux, you may need to handle URL rewrites within your application. This is particularly useful for ensuring specific URL patterns are redirected to the correct endpoints without relying on external web server configurations. For Flask applications, [URL processors](https://flask.palletsprojects.com/patterns/urlprocessors/) and custom middleware can be used to achieve this. In Django applications, the robust [URL dispatcher](https://docs.djangoproject.com/en/5.0/topics/http/urls/) allows for efficient management of URL rewrites. +When deploying Python applications on Azure App Service for Linux, you might need to handle URL rewrites within your application. This is particularly useful for ensuring specific URL patterns are redirected to the correct endpoints without relying on external web server configurations. For Flask applications, [URL processors](https://flask.palletsprojects.com/patterns/urlprocessors/) and custom middleware can be used to achieve this. In Django applications, the robust [URL dispatcher](https://docs.djangoproject.com/en/5.0/topics/http/urls/) allows for efficient management of URL rewrites. ## Troubleshooting -In general, the first step in troubleshooting is to use App Service Diagnostics: +In general, the first step in troubleshooting is to use App Service diagnostics: 1. In the Azure portal for your web app, select **Diagnose and solve problems** from the left menu. 1. Select **Availability and Performance**. The following sections provide guidance for specific issues. - <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself didn't start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code. - - Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser. + - Refresh the browser, especially if you're using the lowest pricing tiers in your App Service plan. 
The app might take longer to start up when you use free tiers, for example, and becomes responsive after you refresh the browser. - Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command). The following sections provide guidance for specific issues. - **The log stream shows "Could not find setup.py or requirements.txt; Not running pip install."**: The Oryx build process failed to find your *requirements.txt* file. - - Connect to the web app's container via [SSH](#open-ssh-session-in-browser) and verify that *requirements.txt* is named correctly and exists directly under *site/wwwroot*. If it doesn't exist, make site the file exists in your repository and is included in your deployment. If it exists in a separate folder, move it to the root. + - Connect to the web app's container via [SSH](#open-ssh-session-in-browser) and verify that *requirements.txt* is named correctly and exists directly under *site/wwwroot*. If it doesn't exist, make sure the file exists in your repository and is included in your deployment. If it exists in a separate folder, move it to the root. #### ModuleNotFoundError when app starts -If you see an error like `ModuleNotFoundError: No module named 'example'`, then Python couldn't find one or more of your modules when the application started. This error most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This setting will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html). +If you see an error like `ModuleNotFoundError: No module named 'example'`, then Python couldn't find one or more of your modules when the application started. This error most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This setting will force Oryx to install your packages whenever you deploy to App Service. For more information, see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html). ### Database is locked -When attempting to run database migrations with a Django app, you may see "sqlite3. OperationalError: database is locked." The error indicates that your application is using a SQLite database for which Django is configured by default rather than using a cloud database such as PostgreSQL for Azure. +When attempting to run database migrations with a Django app, you might see "sqlite3.OperationalError: database is locked." The error indicates that your application is using a SQLite database, for which Django is configured by default, rather than using a cloud database such as Azure Database for PostgreSQL. Check the `DATABASES` variable in the app's *settings.py* file to ensure that your app is using a cloud database instead of SQLite.
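As an illustration, here's a minimal sketch of a *settings.py* `DATABASES` block that pulls PostgreSQL connection values from App Service app settings (the setting names `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` are hypothetical placeholders, and a driver such as `psycopg2-binary` is assumed to be listed in *requirements.txt*):

```python
import os

# App Service app settings surface as environment variables at runtime,
# so the connection details never need to live in source control.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ['DBHOST'],
        'NAME': os.environ['DBNAME'],
        'USER': os.environ['DBUSER'],
        'PASSWORD': os.environ['DBPASS'],
    }
}
```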
If you're encountering this error with the sample in [Tutorial: Deploy a Django #### Other issues -- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden when you type. The characters are being recorded, however, so type your password as usual and press **Enter** when done.+- **Passwords don't appear in the SSH session when typed**: For security reasons, the SSH session keeps your password hidden when you type. The characters are being recorded, however, so type your password as usual and select **Enter** when done. -- **Commands in the SSH session appear to be cut off**: The editor may not be word-wrapping commands, but they should still run correctly.+- **Commands in the SSH session appear to be cut off**: The editor might not be word-wrapping commands, but they should still run correctly. -- **Static assets don't appear in a Django app**: Ensure that you've enabled the [whitenoise module](http://whitenoise.evans.io/en/stable/django.html)+- **Static assets don't appear in a Django app**: Ensure that you've enabled the [WhiteNoise module](http://whitenoise.evans.io/en/stable/django.html). - **You see the message, "Fatal SSL Connection is Required"**: Check any usernames and passwords used to access resources (such as databases) from within the app. -## More resources +## Related content - [Tutorial: Python app with PostgreSQL](tutorial-python-postgresql-app.md) - [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux)-- [App Service Linux FAQ](faq-app-service-linux.yml)+- [App Service on Linux FAQ](faq-app-service-linux.yml) - [Environment variables and app settings reference](reference-app-settings.md) |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | An App Service Environment is an Azure App Service feature that provides a fully > [!NOTE] > This article covers the features, benefits, and use cases of App Service Environment v3, which is used with App Service Isolated v2 plans.-> +> An App Service Environment can host your: - Windows web apps App Service Environments have many use cases, including: - Network-isolated application hosting. - Multi-tier applications. -There are many networking features that enable apps in a multi-tenant App Service to reach network-isolated resources or become network-isolated themselves. These features are enabled at the application level. With an App Service Environment, no added configuration is required for the apps to be on a virtual network. The apps are deployed into a network-isolated environment that's already on a virtual network. If you really need a complete isolation story, you can also deploy your App Service Environment onto dedicated hardware. +There are many networking features that enable apps in a multitenant App Service to reach network-isolated resources or become network-isolated themselves. These features are enabled at the application level. With an App Service Environment, no added configuration is required for the apps to be on a virtual network. The apps are deployed into a network-isolated environment that's already on a virtual network. If you really need a complete isolation story, you can also deploy your App Service Environment onto dedicated hardware. ## Dedicated environment -An App Service Environment is a single-tenant deployment of Azure App Service that runs on your virtual network. +An App Service Environment is a single-tenant deployment of Azure App Service that runs on your virtual network. Applications are hosted in App Service plans, which are created in an App Service Environment. An App Service plan is essentially a provisioning profile for an application host. As you scale out your App Service plan, you create more application hosts with all the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined. A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself. The number of addresses that are used by an App Service Environment v3 in its su The apps in an App Service Environment don't need any features enabled to access resources on the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network. -The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. With those networking features, your apps can act as though they're deployed on a virtual network. The apps in an App Service Environment v3 don't need any added configuration to be on the virtual network. +The multitenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. With those networking features, your apps can act as though they're deployed on a virtual network. The apps in an App Service Environment v3 don't need any added configuration to be on the virtual network. 
-A benefit of using an App Service Environment instead of a multi-tenant service is that any network access controls for the App Service Environment-hosted apps are external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use role-based access control or a policy to prevent any configuration changes. +A benefit of using an App Service Environment instead of a multitenant service is that any network access controls for the App Service Environment-hosted apps are external to the application configuration. With the apps in the multitenant service, you must enable the features on an app-by-app basis and use role-based access control or a policy to prevent any configuration changes. ## Feature differences App Service Environment v3 differs from earlier versions in the following ways: -- There are no networking dependencies on the customer's virtual network. You can secure all inbound and outbound traffic and route outbound traffic as you want. +- There are no networking dependencies on the customer's virtual network. You can secure all inbound and outbound traffic and route outbound traffic as you want. - You can deploy an App Service Environment v3 that's enabled for zone redundancy. You set zone redundancy only during creation and only in regions where all App Service Environment v3 dependencies are zone redundant. In this case, each App Service Plan on the App Service Environment will need to have a minimum of three instances so that they can be spread across zones. For more information, see [Migrate App Service Environment to availability zone support](../../availability-zones/migrate-app-service-environment.md).-- You can deploy an App Service Environment v3 on a dedicated host group. Host group deployments aren't zone redundant. -- Scaling is much faster than with an App Service Environment v2. Although scaling still isn't immediate, as in the multi-tenant service, it's a lot faster.+- You can deploy an App Service Environment v3 on a dedicated host group. Host group deployments aren't zone redundant. +- Scaling is much faster than with an App Service Environment v2. Although scaling still isn't immediate, as in the multitenant service, it's a lot faster. - Front-end scaling adjustments are no longer required. App Service Environment v3 front ends automatically scale to meet your needs and are deployed on better hosts.-- Scaling no longer blocks other scale operations within the App Service Environment v3. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan is scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small. +- Scaling no longer blocks other scale operations within the App Service Environment v3. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan is scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small. - You can reach apps in an internal-VIP App Service Environment v3 across global peering. Such access wasn't possible in earlier versions. A few features that were available in earlier versions of App Service Environment aren't available in App Service Environment v3. 
For example, you can no longer do the following: With App Service Environment v3, the pricing model varies depending on the type > [!NOTE] > Sample calculations for zone redundant App Service Environment v3 pricing:-> +> > 1. Your zone redundant App Service Environment v3 has 3 Linux I1v2 instances in a single App Service plan. +> > - An I1v2 instance has 2 cores. > - In total, across your instances, you have 6 cores. > - 18 cores - 6 cores = 12 cores With App Service Environment v3, the pricing model varies depending on the type > - You'll be charged for your 3 Linux I1v2 instances plus 6 additional Windows I1v2 instances. > > 2. Your zone redundant App Service Environment v3 has 3 Linux I2v2 instances in a single App Service plan. +> > - An I2v2 instance has 4 cores. > - In total, across your instances, you have 12 cores. > - 18 cores - 12 cores = 6 cores With App Service Environment v3, the pricing model varies depending on the type > - You'll be charged for your 3 Linux I2v2 instances plus 3 additional Windows I1v2 instances. > > 3. Your zone redundant App Service Environment v3 has 4 Linux I3v2 instances in a single App Service plan. +> > - An I3v2 instance has 8 cores. > - In total, across your instances, you have 32 cores. > - 32 cores is greater than 18 cores Reserved Instance pricing for Isolated v2 is available and is described in [How App Service Environment v3 is available in the following regions: -### Azure Public: --| Region | Single zone support | Availability zone support | Single zone support | -| -- | :--: | :-: | :-: | -| | App Service Environment v3 | App Service Environment v3 | App Service Environment v1/v2 | -| Australia Central | ✅ | | ✅ | -| Australia Central 2 | ✅* | | ✅ | -| Australia East | ✅ | ✅ | ✅ | -| Australia Southeast | ✅ | | ✅ | -| Brazil South | ✅ | ✅ | ✅ | -| Brazil Southeast | ✅ | | ✅ | -| Canada Central | ✅ | ✅ | ✅ | -| Canada East | ✅ | | ✅ | -| Central India | ✅ | ✅ | ✅ | -| Central US | ✅ | ✅ | ✅ | -| East Asia | ✅ | ✅ | ✅ | -| East US | ✅ | ✅ | ✅ | -| East US 2 | ✅ | ✅ | ✅ | -| France Central | ✅ | ✅ | ✅ | -| France South | ✅ | | ✅ | -| Germany North | ✅ | | ✅ | -| Germany West Central | ✅ | ✅ | ✅ | -| Israel Central | ✅ | ✅ | | -| Italy North | ✅ | ✅** | | -| Japan East | ✅ | ✅ | ✅ | -| Japan West | ✅ | | ✅ | -| Jio India Central | ✅** | | | -| Jio India West | ✅** | | ✅ | -| Korea Central | ✅ | ✅ | ✅ | -| Korea South | ✅ | | ✅ | -| Mexico Central | ✅ | ✅** | | -| North Central US | ✅ | | ✅ | -| North Europe | ✅ | ✅ | ✅ | -| Norway East | ✅ | ✅ | ✅ | -| Norway West | ✅ | | ✅ | -| Poland Central | ✅ | ✅ | | -| Qatar Central | ✅** | ✅** | | -| South Africa North | ✅ | ✅ | ✅ | -| South Africa West | ✅ | | ✅ | -| South Central US | ✅ | ✅ | ✅ | -| South India | ✅ | | ✅ | -| Southeast Asia | ✅ | ✅ | ✅ | -| Spain Central | ✅ | ✅** | | -| Sweden Central | ✅ | ✅ | | -| Switzerland North | ✅ | ✅ | ✅ | -| Switzerland West | ✅ | | ✅ | -| UAE Central | ✅ | | ✅ | -| UAE North | ✅ | ✅ | ✅ | -| UK South | ✅ | ✅ | ✅ | -| UK West | ✅ | | ✅ | -| West Central US | ✅ | | ✅ | -| West Europe | ✅ | ✅ | ✅ | -| West India | ✅* | | ✅ | -| West US | ✅ | | ✅ | -| West US 2 | ✅ | ✅ | ✅ | -| West US 3 | ✅ | ✅ | ✅ | +### Azure Public ++| Region | Single zone support | Availability zone support | +| -- | :--: | :-: | +| | App Service Environment v3 | App Service Environment v3 | +| Australia Central | ✅ | | +| Australia Central 2 | ✅* | | +| Australia East | ✅ | ✅ | +| Australia Southeast | ✅ | | +| Brazil South | ✅ | ✅ | +| Brazil Southeast | ✅ | | +| Canada Central | ✅ | ✅ | 
+| Canada East | ✅ | | +| Central India | ✅ | ✅ | +| Central US | ✅ | ✅ | +| East Asia | ✅ | ✅ | +| East US | ✅ | ✅ | +| East US 2 | ✅ | ✅ | +| France Central | ✅ | ✅ | +| France South | ✅ | | +| Germany North | ✅ | | +| Germany West Central | ✅ | ✅ | +| Israel Central | ✅ | ✅ | +| Italy North | ✅ | ✅** | +| Japan East | ✅ | ✅ | +| Japan West | ✅ | | +| Jio India Central | ✅** | | +| Jio India West | ✅** | | +| Korea Central | ✅ | ✅ | +| Korea South | ✅ | | +| Mexico Central | ✅ | ✅** | +| North Central US | ✅ | | +| North Europe | ✅ | ✅ | +| Norway East | ✅ | ✅ | +| Norway West | ✅ | | +| Poland Central | ✅ | ✅ | +| Qatar Central | ✅** | ✅** | +| South Africa North | ✅ | ✅ | +| South Africa West | ✅ | | +| South Central US | ✅ | ✅ | +| South India | ✅ | | +| Southeast Asia | ✅ | ✅ | +| Spain Central | ✅ | ✅** | +| Sweden Central | ✅ | ✅ | +| Switzerland North | ✅ | ✅ | +| Switzerland West | ✅ | | +| UAE Central | ✅ | | +| UAE North | ✅ | ✅ | +| UK South | ✅ | ✅ | +| UK West | ✅ | | +| West Central US | ✅ | | +| West Europe | ✅ | ✅ | +| West India | ✅* | | +| West US | ✅ | | +| West US 2 | ✅ | ✅ | +| West US 3 | ✅ | ✅ | \* Limited availability and no support for dedicated host deployments. \** To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative. -### Azure Government: --| Region | Single zone support | Availability zone support | Single zone support | -| -- | :--: | :-: | :-: | -| | App Service Environment v3 | App Service Environment v3 | App Service Environment v1/v2 | -| US DoD Central | ✅ | | ✅ | -| US DoD East | ✅ | | ✅ | -| US Gov Arizona | ✅ | | ✅ | -| US Gov Iowa | | | | -| US Gov Texas | ✅ | | ✅ | -| US Gov Virginia | ✅ |✅ | ✅ | --### Microsoft Azure operated by 21Vianet: --| Region | Single zone support | Availability zone support | Single zone support | -| -- | :--: | :-: | :-: | -| | App Service Environment v3 | App Service Environment v3 | App Service Environment v1/v2 | -| China East 2 | | | ✅ | -| China East 3 | ✅ | | | -| China North 2 | | | ✅ | -| China North 3 | ✅ | ✅ | | +### Azure Government ++| Region | Single zone support | Availability zone support | +| -- | :--: | :-: | +| | App Service Environment v3 | App Service Environment v3 | +| US DoD Central | ✅ | | +| US DoD East | ✅ | | +| US Gov Arizona | ✅ | | +| US Gov Iowa | | | +| US Gov Texas | ✅ | | +| US Gov Virginia | ✅ |✅ | ++### Microsoft Azure operated by 21Vianet ++| Region | Single zone support | Availability zone support | +| -- | :--: | :-: | +| | App Service Environment v3 | App Service Environment v3 | +| China East 2 | | | +| China East 3 | ✅ | | +| China North 2 | | | +| China North 3 | ✅ | ✅ | ### In-region data residency |
app-service | Using | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md | When you scale an App Service plan, the needed infrastructure is added automatic A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 15 minutes but can take up to 45 minutes. -In a multi-tenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need. +In a multitenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need. ## App access If you have multiple App Service Environments, you might want some of them to be Select the value you want, and then select **Save**. -![Screenshot that shows the App Service Environment configuration portal.][5] +![Screenshot that shows the App Service Environment upgrade preference setting.][7] This feature makes the most sense when you have multiple App Service Environments, and you might benefit from sequencing the upgrades. For example, you might set your development and test App Service Environments to be early, and your production App Service Environments to be late. To delete: [4]: ./media/using/using-logs.png [5]: ./media/using/using-configuration.png [6]: ./media/using/using-ip-addresses.png+[7]: ./media/using/using-upgrade-preference.png <!--Links--> |
app-service | Manage Scale Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md | Title: Scale up features and capacities description: Learn how to scale up an app in Azure App Service. Get more CPU, memory, disk space, and extra features. ms.assetid: f7091b25-b2b6-48da-8d4a-dcf9b7baccab- Previously updated : 05/08/2023+ Last updated : 08/26/2024 -* [Scale up](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Get more CPU, memory, disk space, and extra features +* [Scale up](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Get more CPU, memory, or disk space, or extra features like dedicated virtual machines (VMs), custom domains and certificates, staging slots, autoscaling, and more. You scale up by changing the pricing tier of the App Service plan that your app belongs to. * [Scale out](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Increase the number of VM instances that run your app.- Basic, Standard and Premium service plans scale out to as many as 3, 10 and 30 instances respectively. [App Service Environments](environment/intro.md) - in **Isolated** tier further increases your scale-out count to 100 instances. For more information about scaling out, see + Basic, Standard, and Premium service plans scale out to as many as 3, 10, and 30 instances, respectively. [App Service Environments](environment/intro.md) + in the Isolated tier further increase your scale-out count to 100 instances. For more information about scaling out, see [Scale instance count manually or automatically](/azure/azure-monitor/autoscale/autoscale-get-started). There, you find out how to use autoscaling, which is to scale instance count automatically based on predefined rules and schedules. >[!IMPORTANT]-> [App Service now offers an automatic scale-out option to handle varying incoming HTTP requests.](./manage-automatic-scaling.md) +> [App Service offers an automatic scale-out option to handle varying incoming HTTP requests.](./manage-automatic-scaling.md) > The scale settings take only seconds to apply and affect all apps in your [App Service plan](../app-service/overview-hosting-plans.md). They don't require you to change your code or redeploy your application. For information about the pricing and features of individual App Service plans, see [App Service Pricing Details](https://azure.microsoft.com/pricing/details/web-sites/). > [!NOTE]-> Before you switch an App Service plan from the **Free** tier, you must first remove the [spending limits](https://azure.microsoft.com/pricing/spending-limits/) in place for your Azure subscription. To view or change options for your Microsoft Azure App Service subscription, see [Microsoft Azure Subscriptions][azuresubscriptions]. +> Before you switch an App Service plan from the Free tier, you must first remove the [spending limits](https://azure.microsoft.com/pricing/spending-limits/) in place for your Azure subscription. To view or change options for your App Service subscription, see [Cost Management + Billing][azuresubscriptions] in the Azure portal. > > If your app depends on other services, such as Azure SQL Database or Azure Stora 1. In the **Overview** page for your app, select the **Resource group** link. - ![Scale up your Azure app's related resources](./media/web-sites-scale/RGEssentialsLink.png) + ![Scale up your Azure app's related resources.](./media/web-sites-scale/RGEssentialsLink.png) -2. 
In the **Summary** part of the **Resource group** page, select a resource that you want to scale. The following screenshot +2. On the **Overview** page for the resource group, select a resource that you want to scale. The following screenshot shows a SQL Database resource. ![Navigate to resource group page to scale up your Azure app](./media/web-sites-scale/ResourceGroup.png) - To scale up the related resource, see the documentation for the specific resource type. For example, to scale up a single SQL Database, see [Scale single database resources in Azure SQL Database](/azure/azure-sql/database/single-database-scale). To scale up an Azure Database for MySQL resource, see [Scale MySQL resources](/azure/mysql/concepts-pricing-tiers#scale-resources). + To scale up the related resource, see the documentation for the specific resource type. For example, to scale up a single SQL database, see [Scale single database resources in Azure SQL Database](/azure/azure-sql/database/single-database-scale). To scale up an Azure Database for MySQL resource, see [Scale Azure Database for MySQL resources](/azure/mysql/concepts-pricing-tiers#scale-resources). <a name="OtherFeatures"></a> <a name="devfeatures"></a> ## Compare pricing tiers -For detailed information, such as VM sizes for each pricing tier, see [App Service Pricing Details](https://azure.microsoft.com/pricing/details/app-service). +For detailed information, such as VM sizes for each pricing tier, see [App Service Pricing Details](https://azure.microsoft.com/pricing/details/app-service/windows/). For a table of service limits, quotas, and constraints, and supported features in each tier, see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits). <a name="Next Steps"></a> -## More resources +## Related content -* [Scale instance count manually or automatically](/azure/azure-monitor/autoscale/autoscale-get-started) +* [Get started with autoscale in Azure](/azure/azure-monitor/autoscale/autoscale-get-started) * [Configure Premium V3 tier for App Service](app-service-configure-premium-tier.md) * [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md) <!-- LINKS --> [vmsizes]:https://azure.microsoft.com/pricing/details/app-service/ [SQLaccountsbilling]:https://go.microsoft.com/fwlink/?LinkId=234930-[azuresubscriptions]:https://account.windowsazure.com/subscriptions +[azuresubscriptions]:https://ms.portal.azure.com/#view/Microsoft_Azure_Billing/BillingMenuBlade/~/Overview <!-- IMAGES --> [ChooseWHP]: ./media/web-sites-scale/scale1ChooseWHP.png |
app-service | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md | Title: 'Quickstart: Create a Node.js web app' -description: Deploy your first Node.js Hello World to Azure App Service in minutes. +description: Deploy your first Node.js Hello World app to Azure App Service in minutes. ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 07/17/2023 Last updated : 08/28/2024 ms.devlang: javascript zone_pivot_groups: app-service-vscode-cli-portal ai-usage: ai-assisted In this quickstart, you'll learn how to create and deploy your first Node.js ([Express](https://www.expressjs.com)) web app to [Azure App Service](overview.md). App Service supports various versions of Node.js on both Linux and Windows. -This quickstart configures an App Service app in the **Free** tier and incurs no cost for your Azure subscription. +This quickstart configures an App Service app in the Free tier and incurs no cost for your Azure subscription. This video shows you how to deploy a Node.js web app in Azure. > [!VIDEO c66346dd-9fde-4cef-b135-47d3051d5db5] The steps in the video are also described in the following sections. - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?utm_source=campaign&utm_campaign=vscode-tutorial-app-service-extension&mktingSource=vscode-tutorial-app-service-extension). - Install [Node.js and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed. - Install [Visual Studio Code](https://code.visualstudio.com/).-- The [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) for Visual Studio Code.+- Install the [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) for Visual Studio Code. <!- ::: zone-end The steps in the video are also described in the following sections. - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?utm_source=campaign&utm_campaign=vscode-tutorial-app-service-extension&mktingSource=vscode-tutorial-app-service-extension). - Install [Node.js LTS and npm](https://nodejs.org). Run the command `node --version` to verify that Node.js is installed.-- Install <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a>, with which you run commands in any shell to create and configure Azure resources.+- Install <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a>, with which you run commands in a shell to create and configure Azure resources. ::: zone-end In this step, you create a basic Node.js application and ensure it runs on your > [!TIP] > If you have already completed the [Node.js tutorial](https://code.visualstudio.com/docs/nodejs/nodejs-tutorial), you can skip ahead to [Deploy to Azure](#deploy-to-azure). -1. Create a Node.js application using the [Express Generator](https://expressjs.com/starter/generator.html), which is installed by default with Node.js and NPM. +1. Create a Node.js application using the [Express Generator](https://expressjs.com/starter/generator.html), which is installed by default with Node.js and npm. ```bash npx express-generator myExpressApp --view ejs ``` -1. Change to the application's directory and install the NPM packages. +1. Change to the application's directory and install the npm packages. 
```bash cd myExpressApp && npm install In this step, you create a basic Node.js application and ensure it runs on your 1. In a browser, navigate to `http://localhost:3000`. You should see something like this: - ![Running Express Application](./media/quickstart-nodejs/express.png) + ![Screenshot of a running Express application.](./media/quickstart-nodejs/express.png) :::zone target="docs" pivot="development-environment-vscode" > [!div class="nextstepaction"] In this step, you create a basic Node.js application and ensure it runs on your Before you continue, ensure that you have all the prerequisites installed and configured. > [!NOTE]-> For your Node.js application to run in Azure, it needs to listen on the port provided by the `PORT` environment variable. In your generated Express app, this environment variable is already used in the startup script *bin/www* (search for `process.env.PORT`). +> For your Node.js application to run in Azure, it needs to listen on the port provided by the `PORT` environment variable. In your generated Express app, this environment variable is already used in the startup script *bin/www*. (Search for `process.env.PORT`.) > :::zone target="docs" pivot="development-environment-vscode" #### Sign in to Azure -1. In the terminal, ensure you're in the *myExpressApp* directory, then start Visual Studio Code with the following command: +1. In the terminal, ensure you're in the *myExpressApp* directory, and then start Visual Studio Code with the following command: ```bash code . ``` -1. In Visual Studio Code, in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface), select the **Azure** logo. +1. In Visual Studio Code, in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface), select the Azure logo. -1. In the **App Service** explorer, select **Sign in to Azure...** and follow the instructions. +1. In the **App Service** explorer, select **Sign in to Azure** and follow the instructions. - In Visual Studio Code, you should see your Azure email address in the Status Bar and your subscription in the **AZURE APP SERVICE** explorer. + In Visual Studio Code, you should see your Azure email address in the Status Bar and your subscription in the **App Service** explorer. - ![sign in to Azure](./media/quickstart-nodejs/sign-in.png) + ![Screenshot of the Sign in to Azure option.](./media/quickstart-nodejs/sign-in.png) > [!div class="nextstepaction"] > [I ran into an issue](https://www.research.net/r/PWZWZ52?tutorial=node-deployment-azure-app-service&step=getting-started) Before you continue, ensure that you have all the prerequisites installed and co # [Deploy to Linux](#tab/linux) -2. Right-click on App Services and select **Create new Web App**. A Linux container is used by default. -1. Type a globally unique name for your web app and press **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). See [note at top](#dnl-note). -1. In Select a runtime stack, select the Node.js version you want. An **LTS** version is recommended. -1. In Select a pricing tier, select **Free (F1)** and wait for the resources to be created in Azure. -1. In the popup **Always deploy the workspace "myExpressApp" to \<app-name>"**, select **Yes**. This way, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time. +2. Right-click **App Services** and select **Create new Web App**. A Linux container is used by default. +1. 
Type a globally unique name for your web app and select **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). See [the note at the start of this article](#dnl-note). +1. In **Select a runtime stack**, select the Node.js version you want. An LTS version is recommended. +1. In **Select a pricing tier**, select **Free (F1)** and wait for the resources to be created in Azure. +1. In the popup **Always deploy the workspace "myExpressApp" to \<app-name>"**, select **Yes**. Doing so ensures that, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time. While Visual Studio Code creates the Azure resources and deploys the code, it shows [progress notifications](https://code.visualstudio.com/api/references/extension-guidelines#notifications). Before you continue, ensure that you have all the prerequisites installed and co # [Deploy to Windows](#tab/windows) -2. Right-click on App Services and select **Create new Web App... Advanced**. -1. Type a globally unique name for your web app and press **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). See [note at top](#dnl-note). -1. Select **Create a new resource group**, then enter a name for the resource group, such as *AppServiceQS-rg*. -1. Select the Node.js version you want. An **LTS** version is recommended. +2. Right-click **App Services** and select **Create new Web App... Advanced**. +1. Type a globally unique name for your web app and select **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). See [the note at the start of this article](#dnl-note). +1. Select **Create a new resource group**, and then enter a name for the resource group, such as *AppServiceQS-rg*. +1. Select the Node.js version you want. An LTS version is recommended. 1. Select **Windows** for the operating system.-1. Select the location you want to serve your app from. For example, *West Europe*. -1. Select **Create new App Service plan**, then enter a name for the plan (such as *AppServiceQS-plan*), then select **F1 Free** for the pricing tier. -1. For **Select an Application Insights resource for your app**, select **Skip for now** and wait the resources to be created in Azure. -1. In the popup **Always deploy the workspace "myExpressApp" to \<app-name>"**, select **Yes**. This way, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time. +1. Select the location you want to serve your app from. For example, **West Europe**. +1. Select **Create new App Service plan**, enter a name for the plan (such as *AppServiceQS-plan*), and then select **F1 Free** for the pricing tier. +1. For **Select an Application Insights resource for your app**, select **Skip for now** and wait for the resources to be created in Azure. +1. In the popup **Always deploy the workspace "myExpressApp" to \<app-name>"**, select **Yes**. Doing so ensures that, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time. While Visual Studio Code creates the Azure resources and deploys the code, it shows [progress notifications](https://code.visualstudio.com/api/references/extension-guidelines#notifications). 
> [!NOTE]- > When deployment completes, your Azure app doesn't run yet because your project root doesn't have a *web.config*. Follow the remaining steps to generate it automatically. For more information, see [You do not have permission to view this directory or page](configure-language-nodejs.md#you-do-not-have-permission-to-view-this-directory-or-page). + > When deployment completes, your Azure app doesn't run yet because your project root doesn't have a *web.config*. Follow the remaining steps to generate it automatically. For more information, see **You do not have permission to view this directory or page** in [Configure a Node.js app](configure-language-nodejs.md?view=platform-windows&preserve-view=true). -1. In the **App Service** explorer in Visual Studio code, expand the node for the new app, right-click **Application Settings**, and select **Add New Setting**: +1. In the **App Service** explorer in Visual Studio Code, expand the node for the new app, right-click **Application Settings**, and select **Add New Setting**: - ![Add app setting command](media/quickstart-nodejs/add-setting.png) + ![Screenshot of the Add New Setting command.](media/quickstart-nodejs/add-setting.png) 1. Enter `SCM_DO_BUILD_DURING_DEPLOYMENT` for the setting key. 1. Enter `true` for the setting value. This app setting enables build automation at deploy time, which automatically detects the start script and generates the *web.config* with it. -1. In the **App Service** explorer, select the **Deploy to Web App** icon again, confirm by clicking **Deploy** again. -1. Wait for deployment to complete, then select **Browse Website** in the notification popup. The browser should display the Express default page. +1. In the **App Service** explorer, select the **Deploy to Web App** icon again, and confirm by selecting **Deploy** again. +1. Wait for deployment to complete, and then select **Browse Website** in the notification popup. The browser should display the Express default page. -- az webapp up --sku F1 --name <app-name> --os-type Windows -- - If the `az` command isn't recognized, ensure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).-- Replace `<app_name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). See [note at top](#dnl-note). A good pattern is to use a combination of your company name and an app identifier.-- The `--sku F1` argument creates the web app on the Free pricing tier, which incurs a no cost.+- Replace `<app_name>` with a name that's unique across all of Azure. (*Valid characters are `a-z`, `0-9`, and `-`*.) See [the note at the start of this article](#dnl-note). A good pattern is to use a combination of your company name and an app identifier. +- The `--sku F1` argument creates the web app on the Free pricing tier, which incurs no cost. - You can optionally include the argument `--location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az appservice list-locations`](/cli/azure/appservice#az-appservice-list-locations) command. - The command creates a Linux app for Node.js by default. To create a Windows app instead, use the `--os-type` argument. 
-- If you see the error, "Could not auto-detect the runtime stack of your app," ensure you're running the command in the *myExpressApp* directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)).+- If you see the error, "Could not auto-detect the runtime stack of your app," ensure you're running the command in the *myExpressApp* directory. (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md).) -The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://<app-name>.azurewebsites.net", which is the app's URL on Azure (see [note at top](#dnl-note)). +The command might take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing Zip deployment. It then gives the message, "You can launch the app at http://<app-name>.azurewebsites.net", which is the app's URL on Azure. (See [the note at the start of this article](#dnl-note).) <pre> The webapp '<app-name>' doesn't exist Sign in to the [Azure portal](https://portal.azure.com). 1. To start creating a Node.js app, browse to [https://portal.azure.com/#create/Microsoft.WebSite](https://portal.azure.com/#create/Microsoft.WebSite). -1. In the **Basics** tab, under **Project details**, ensure the correct subscription is selected and then select to **Create new** resource group. Type *myResourceGroup* for the name. +1. In the **Basics** tab, under **Project Details**, ensure the correct subscription is selected and then select **Create new** to create a resource group. Type *myResourceGroup* for the name. - :::image type="content" source="./media/quickstart-nodejs/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the web app"::: + :::image type="content" source="./media/quickstart-nodejs/project-details.png" alt-text="Screenshot of the Project Details section showing where you select the Azure subscription and the resource group for the web app."::: -1. Under **Instance details**, type a globally unique name for your web app and select **Code** (see [note at top](#dnl-note)). Select *Node 18 LTS* **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from. +1. Under **Instance details**, type a globally unique name for your web app and select **Code**. (See [the note at the start of this article](#dnl-note).) Select **Node 18 LTS** in **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from. - :::image type="content" source="./media/quickstart-nodejs/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size"::: + :::image type="content" source="./media/quickstart-nodejs/instance-details.png" alt-text="Screenshot of the Instance Details section."::: -1. Under **App Service Plan**, select **Create new** App Service Plan. Type *myAppServicePlan* for the name. 
To change to the Free tier, select **Change size**, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page. +1. Under **App Service Plan**, select **Create new** to create an App Service plan. Type **myAppServicePlan** for the name. To change to the Free tier, select **Change size**, select the **Dev/Test** tab, select **F1**, and then select the **Apply** button at the bottom of the page. - :::image type="content" source="./media/quickstart-nodejs/app-service-plan-details.png" alt-text="Screenshot of the Administrator account section where you provide the administrator username and password"::: + :::image type="content" source="./media/quickstart-nodejs/app-service-plan-details.png" alt-text="Screenshot of the App Service Plan section."::: 1. Select the **Review + create** button at the bottom of the page. Sign in to the [Azure portal](https://portal.azure.com). 1. After deployment is complete, select **Go to resource**. - :::image type="content" source="./media/quickstart-nodejs/next-steps.png" alt-text="Screenshot showing the next step of going to the resource"::: + :::image type="content" source="./media/quickstart-nodejs/next-steps.png" alt-text="Screenshot showing the Go to resource button."::: ### Get FTPS credentials -Azure App Service supports [**two types of credentials**](deploy-configure-credentials.md) for FTP/S deployment. These credentials aren't the same as your Azure subscription credentials. In this section, you get the *application-scope credentials* to use with FileZilla. +Azure App Service supports [two types of credentials](deploy-configure-credentials.md) for FTP/S deployment. These credentials aren't the same as your Azure subscription credentials. In this section, you get the application-scope credentials to use with FileZilla. -1. From the App Service app page, select **Deployment Center** in the left-hand menu and select **FTPS credentials** tab. +1. From the App Service app page, select **Deployment Center** in the left-hand menu and then select the **FTPS credentials** tab. - :::image type="content" source="./media/quickstart-nodejs/ftps-deployment-credentials.png" alt-text="FTPS deployment credentials"::: + :::image type="content" source="./media/quickstart-nodejs/ftps-deployment-credentials.png" alt-text="Screenshot that shows the FTPS deployment credentials tab."::: -1. Open **FileZilla** and create a new site. +1. Open FileZilla and create a new site. -1. From the **FTPS credentials** tab, under **Application scope**, copy **FTPS endpoint**, **FTPS Username**, and **Password** into FileZilla. +1. From the **FTPS credentials** tab, copy the **FTPS endpoint**, **Username**, and **Password** into FileZilla. - :::image type="content" source="./media/quickstart-nodejs/filezilla-ftps-connection.png" alt-text="FTPS connection details"::: + :::image type="content" source="./media/quickstart-nodejs/filezilla-ftps-connection.png" alt-text="Screenshot of the FTPS connection details."::: 1. Select **Connect** in FileZilla. ### Deploy files with FTPS -1. Copy all files and directories files to the [**/site/wwwroot** directory](https://github.com/projectkudu/kudu/wiki/File-structure-on-azure) in Azure. +1. Copy all files and directories to the [/site/wwwroot directory](https://github.com/projectkudu/kudu/wiki/File-structure-on-azure) in Azure. 
- :::image type="content" source="./media/quickstart-nodejs/filezilla-deploy-files.png" alt-text="FileZilla deploy files"::: + :::image type="content" source="./media/quickstart-nodejs/filezilla-deploy-files.png" alt-text="Screenshot of the /site/wwwroot directory."::: 1. Browse to your app's URL to verify the app is running properly. ::: zone-end ## Redeploy updates -You can deploy changes to this app by making edits in Visual Studio Code, saving your files, and then redeploy to your Azure app. For example: +You can deploy changes to this app by making edits in Visual Studio Code, saving your files, and then redeploying to your Azure app. For example: 1. From the sample project, open *views/index.ejs* and change You can deploy changes to this app by making edits in Visual Studio Code, saving :::zone target="docs" pivot="development-environment-vscode" -2. In the **App Service** explorer, select the **Deploy to Web App** icon again, confirm by clicking **Deploy** again. +2. In the **App Service** explorer, select the **Deploy to Web App** icon again, and confirm by selecting **Deploy** again. -1. Wait for deployment to complete, then select **Browse Website** in the notification popup. You should see that the `Welcome to Express` message has been changed to `Welcome to Azure!`. +1. Wait for deployment to complete, and then select **Browse Website** in the notification popup. You should see that the `Welcome to Express` message has been changed to `Welcome to Azure`. ::: zone-end You can deploy changes to this app by making edits in Visual Studio Code, saving This command uses values that are cached locally in the *.azure/config* file, such as the app name, resource group, and App Service plan. -1. Once deployment is complete, refresh the webpage `http://<app-name>.azurewebsites.net` (see [note at top](#dnl-note)). You should see that the `Welcome to Express` message has been changed to `Welcome to Azure!`. +1. Once deployment is complete, refresh the webpage `http://<app-name>.azurewebsites.net`. (See [the note at the start of this article](#dnl-note).) You should see that the `Welcome to Express` message has been changed to `Welcome to Azure`. ::: zone-end You can deploy changes to this app by making edits in Visual Studio Code, saving 2. Save your changes, then redeploy the app using your FTP client. -1. Once deployment is complete, refresh the webpage `http://<app-name>.azurewebsites.net` (see [note at top](#dnl-note)). You should see that the `Welcome to Express` message has been changed to `Welcome to Azure!`. +1. Once deployment is complete, refresh the webpage `http://<app-name>.azurewebsites.net`. (See [the note at the start of this article](#dnl-note).) You should see that the `Welcome to Express` message has been changed to `Welcome to Azure`. ::: zone-end -## Stream Logs +## Stream logs :::zone target="docs" pivot="development-environment-vscode" az webapp log tail The command uses the resource group name cached in the *.azure/config* file. -You can also include the `--logs` parameter with then [az webapp up](/cli/azure/webapp#az-webapp-up) command to automatically open the log stream on deployment. +You can also include the `--logs` parameter with the [az webapp up](/cli/azure/webapp#az-webapp-up) command to automatically open the log stream on deployment. Refresh the app in the browser to generate console logs, which include messages describing HTTP requests to the app. If no output appears immediately, try again in 30 seconds. 
-To stop log streaming at any time, press **Ctrl**+**C** in the terminal. +To stop log streaming at any time, select **Ctrl**+**C** in the terminal. ::: zone-end To stop log streaming at any time, press **Ctrl**+**C** in the terminal. You can access the console logs generated from inside the app and the container in which it runs. You can stream log output (calls to `console.log()`) from the Node.js app directly in the Azure portal. -1. In the same **App Service** page for your app, use the left menu to scroll to the *Monitoring* section and select **Log stream**. +1. In the same **App Service** page for your app, use the left menu to scroll to the **Monitoring** section and select **Log stream**. :::image type="content" source="./media/quickstart-nodejs/log-stream.png" alt-text="Screenshot of Log stream in Azure App service."::: You can access the console logs generated from inside the app and the container :::zone target="docs" pivot="development-environment-vscode" -In the preceding steps, you created Azure resources in a resource group. The create steps in this quickstart put all the resources in this resource group. To clean up, you just need to remove the resource group. +In the preceding steps, you created Azure resources in a resource group. The steps in this quickstart put all the resources in this resource group. To clean up, you just need to remove the resource group. 1. In the Azure extension of Visual Studio, expand the **Resource Groups** explorer. In the preceding steps, you created Azure resources in a resource group. The cre :::zone target="docs" pivot="development-environment-cli" -In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. +In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS," depending on your location. If you don't expect to need these resources in the future, delete the resource group by running the following command: The `--no-wait` argument allows the command to return before the operation is co :::zone target="docs" pivot="development-environment-azure-portal" -When no longer needed, you can delete the resource group, App service, and all related resources. +You can delete the resource group, App Service app, and all related resources when they're no longer needed. 1. From your App Service *overview* page, select the *resource group* you created in the [Create Azure resources](#create-azure-resources) step. When no longer needed, you can delete the resource group, App service, and all r 1. From the *resource group* page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources. - :::image type="content" source="./media/quickstart-nodejs/delete-resource-group.png" alt-text="Delete resource group"::: + :::image type="content" source="./media/quickstart-nodejs/delete-resource-group.png" alt-text="Delete resource group."::: ::: zone-end When no longer needed, you can delete the resource group, App service, and all r Congratulations, you've successfully completed this quickstart! 
> [!div class="nextstepaction"]-> [Tutorial: Node.js app with MongoDB](tutorial-nodejs-mongodb-app.md) +> [Deploy a Node.js + MongoDB web app to Azure](tutorial-nodejs-mongodb-app.md) > [!div class="nextstepaction"]-> [Configure Node.js app](configure-language-nodejs.md) +> [Configure a Node.js app](configure-language-nodejs.md) > [!div class="nextstepaction"]-> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) +> [Secure your Azure App Service app with a custom domain and a managed certificate](tutorial-secure-domain-certificate.md) Check out the other Azure extensions. |
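The build-automation setting and log streaming that this quickstart walks through in the portal and Visual Studio Code can also be set with the Azure CLI. A short sketch, assuming placeholder app and resource group names:

```bash
# Enable build automation at deploy time; on Windows this also generates web.config.
az webapp config appsettings set \
    --name <app-name> \
    --resource-group <resource-group> \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true

# Stream console logs from the running app; stop with Ctrl+C.
az webapp log tail --name <app-name> --resource-group <resource-group>
```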
application-gateway | Overview V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md | The following table displays a comparison between Basic and Standard_v2. | Feature | Capabilities | Basic SKU (preview)| Standard SKU | | :: | : | :: | :: | | Reliability | SLA | 99.9 | 99.95 |-| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>Zone<br>Header rewrite | ✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓ | ✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓| +| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>Zone<br>Header rewrite | ✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓ | ✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓<br>✓| | Functionality - advanced | AKS (via AGIC)<br>URL rewrite<br>mTLS<br>Private Link<br>Private-only<sup>1</sup><br>TCP/TLS Proxy | | ✓<br>✓<br>✓<br>✓<br>✓<br>✓ | | Scale | Max. connections per second<br>Number of listeners<br>Number of backend pools<br>Number of backend servers per pool<br>Number of rules | 200<sup>1</sup><br>5<br>5<br>5<br>5 | 62500<sup>1</sup><br>100<br>100<br>1200<br>400 | | Capacity Unit | Connections per second per compute unit<br>Throughput<br>Persistent new connections | 10<br>2.22 Mbps<br>2500 | 50<br>2.22 Mbps<br>2500 | |
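For reference, the SKU compared in this table is chosen at deployment time. The following is a hedged Azure CLI sketch that creates a Standard_v2 gateway; every resource name is a placeholder, and the virtual network, subnet, and public IP are assumed to exist already.

```bash
# Create a Standard_v2 application gateway with two instances.
# All names are placeholders; the vnet, subnet, and public IP must already exist.
az network application-gateway create \
    --name myAppGateway \
    --resource-group myResourceGroup \
    --sku Standard_v2 \
    --capacity 2 \
    --vnet-name myVNet \
    --subnet appGatewaySubnet \
    --public-ip-address myPublicIP \
    --priority 100
```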
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | +## September 9, 2024 ++**Image tag**: `v1.33.0_2024-09-10` ++For complete release version information, review [Version log](version-log.md#september-9-2024). + ## August 13, 2024 **Image tag**: `v1.32.0_2024-08-13` |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | +## September 9, 2024 ++|Component|Value| +|--|--| +|Container images tag |`v1.33.0_2024-09-10`| +|**CRD names and version:**| | +|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| +|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| +|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| +|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| +|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| +|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| +|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| +|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| +|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| +|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| +|Azure Resource Manager (ARM) API version|2023-11-01-preview| +|`arcdata` Azure CLI extension version|1.5.18 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.33.0| +|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| +|SQL Database version | 972 | ++ ## August 13, 2024 |Component|Value| |
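The version log pins the `arcdata` Azure CLI extension at 1.5.18 for this release. One way to check and update a local installation, as a short sketch:

```bash
# Install the arcdata extension if it's missing, or upgrade it if it's present.
az extension add --name arcdata --upgrade

# Show the installed extension version to compare against the version log.
az extension show --name arcdata --query version --output tsv
```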
azure-arc | Run Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/run-command.md | Run Command on Azure Arc-enabled servers supports the following operations: |Operation |Description | |||-|[Create](/rest/api/hybridcompute/machine-run-commands/create-or-update?tabs=HTTP) |The operation to create a run command. This runs the run command. | -|[Delete](/rest/api/hybridcompute/machine-run-commands/delete?tabs=HTTP) |The operation to delete a run command. If it's running, delete will also stop the run command. | -|[Get](/rest/api/hybridcompute/machine-run-commands/get?tabs=HTTP) |The operation to get a run command. | -|[List](/rest/api/hybridcompute/machine-run-commands/list?tabs=HTTP) |The operation to get all the run commands of an Azure Arc-enabled server. | -|[Update](/rest/api/hybridcompute/machine-run-commands/update?tabs=HTTP) |The operation to update the run command. This stops the previous run command. | +|Create |The operation to create a run command. This runs the run command. | +|Delete |The operation to delete a run command. If it's running, delete will also stop the run command. | +|Get |The operation to get a run command. | +|List |The operation to get all the run commands of an Azure Arc-enabled server. | +|Update |The operation to update the run command. This stops the previous run command. | > [!NOTE] > Output and error blobs are overwritten each time the run command script executes. |
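The Create, Get, List, Update, and Delete operations in this table are also surfaced through the Azure CLI. A hedged sketch, assuming the `connectedmachine` CLI extension is installed and using placeholder machine, resource group, and script values:

```bash
# Create (which also runs) a run command on an Arc-enabled server.
# Machine name, resource group, and script are placeholders.
az connectedmachine run-command create \
    --name myRunCommand \
    --machine-name myArcServer \
    --resource-group myResourceGroup \
    --script "echo Hello from Azure Arc"

# List all run commands on the machine.
az connectedmachine run-command list \
    --machine-name myArcServer \
    --resource-group myResourceGroup
```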
azure-linux | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/faq.md | Title: Frequently asked questions about the Azure Linux Container Host for AKS description: Find answers to some of the common questions about the Azure Linux Container Host for AKS. - - + + |
azure-linux | Intro Azure Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/intro-azure-linux.md | Title: Introduction to the Azure Linux Container Host for AKS description: Learn about the Azure Linux Container Host to use the container-optimized OS in your AKS clusters.--++ |
azure-linux | Quickstart Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-cli.md | Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using the Azure CLI' description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using the Azure CLI.--++ |
azure-linux | Quickstart Azure Resource Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-resource-manager-template.md | Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using an ARM template' description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using an Azure Resource Manager template.--++ |
azure-linux | Quickstart Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-terraform.md | Title: 'Quickstart: Deploy an Azure Linux Container Host for AKS cluster by using Terraform' description: Learn how to quickly create an Azure Linux Container Host for AKS cluster using Terraform.--++ ms.editor: schaffererin |
azure-linux | Support Cycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-cycle.md | Title: Azure Linux Container Host for AKS support lifecycle description: Learn about the support lifecycle for the Azure Linux Container Host for AKS. --++ Last updated 09/29/2023 |
azure-linux | Support Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/support-help.md | Title: Azure Linux Container Host for AKS support and help options description: How to obtain help and support for questions or problems when you create solutions using the Azure Linux Container Host. --++ |
azure-linux | Troubleshoot Kernel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/troubleshoot-kernel.md | Title: Troubleshooting Azure Linux Container Host for AKS kernel version issues description: How to troubleshoot Azure Linux Container Host for AKS kernel version issues.--++ |
azure-linux | Tutorial Azure Linux Add Nodepool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-add-nodepool.md | Title: Azure Linux Container Host for AKS tutorial - Add an Azure Linux node pool to your existing AKS cluster description: In this Azure Linux Container Host for AKS tutorial, you learn how to add an Azure Linux node pool to your existing cluster.--++ |
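The tutorial tracked by this entry centers on a single CLI step. A sketch of that step, with placeholder cluster, resource group, and node pool names:

```bash
# Add an Azure Linux node pool to an existing AKS cluster.
# Cluster, resource group, and node pool names are placeholders.
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name azlinuxpool \
    --os-sku AzureLinux \
    --node-count 2
```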
azure-linux | Tutorial Azure Linux Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-create-cluster.md | Title: Azure Linux Container Host for AKS tutorial - Create a cluster description: In this Azure Linux Container Host for AKS tutorial, you will learn how to create an AKS cluster with Azure Linux.--++ |
azure-linux | Tutorial Azure Linux Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-migration.md | Title: Azure Linux Container Host for AKS tutorial - Migrating to Azure Linux description: In this Azure Linux Container Host for AKS tutorial, you learn how to migrate your nodes to Azure Linux nodes.--++ |
azure-maps | Routing Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md | For more coverage tables, see: [Azure Maps routing coverage tables]: #azure-maps-routing-coverage-tables <!-- TODO: Update with link to route v2 docs when available -->-[Azure Maps Route Service]: https://github.com/Azure/azure-rest-api-specs/blob/koyasu221b-maps-Route-2023-10-01-preview/specification/maps/data-plane/Route/preview/2023-10-01-preview/route.json +[Azure Maps Route Service]: /rest/api/maps/route/ [Geocoding]: geocoding-coverage.md [Get Route Directions]: /rest/api/maps/route/get-route-directions [Get Route Range]: /rest/api/maps/route/get-route-range |
azure-netapp-files | Cross Region Replication Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md | This article describes requirements and considerations about [using the volume c ## Requirements and considerations * Azure NetApp Files replication is only available in certain fixed region pairs. See [Supported region pairs](cross-region-replication-introduction.md#supported-region-pairs). -* SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination region. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). +* SMB volumes are supported along with NFS volumes. Replicating SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination region. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). * The destination account must be in a different region from the source volume region. You can also select an existing NetApp account in a different region. * The replication destination volume is read-only until you [fail over to the destination region](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume) to enable the destination volume for read and write. >[!IMPORTANT] This article describes requirements and considerations about [using the volume c * If you are copying large data sets into a volume that has cross-region replication enabled and you have spare capacity in the capacity pool, you should set the replication interval to 10 minutes, increase the volume size to allow for the changes to be stored, and temporarily disable replication. * If you use the cool access feature, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md#considerations) for more considerations. * [Large volumes](large-volumes-requirements-considerations.md) are supported with cross-region replication only with an hourly or daily replication schedule.+* If the volume exceeds 95% utilization, there's a risk that replication to the destination volume can fail, depending on the rate of data changes. ## Large volumes configuration |
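The guidance above about temporarily disabling replication during a large copy maps to the volume replication commands in the Azure CLI. A hedged sketch, run against the destination volume, with placeholder account, pool, and volume names:

```bash
# Suspend replication before a bulk copy, then resume it afterward.
# All names are placeholders for the destination volume.
az netappfiles volume replication suspend \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myCapacityPool \
    --name myDestinationVolume

az netappfiles volume replication resume \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myCapacityPool \
    --name myDestinationVolume
```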
azure-signalr | Signalr Quickstart Azure Signalr Service Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-bicep.md | Title: 'Quickstart: Create an Azure SignalR Service - Bicep' description: In this quickstart, learn how to create an Azure SignalR Service using Bicep.--++ Last updated 05/18/2022 |
azure-vmware | Deploy Zerto Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md | In this scenario, the primary site is an Azure VMware Solution private cloud in ## Install Zerto on Azure VMware Solution -To deploy Zerto on Azure VMware Solution, follow these [instructions]( -/azure/azure-vmware/deploy-zerto-disaster-recovery#install-zerto-on-azure-vmware-solution +To deploy Zerto on Azure VMware Solution, follow these [instructions](https://help.zerto.com/bundle/Install.AVS.HTML.10.0_U5/page/zerto_deployment_and_configuration.html ). ## FAQs |
batch | Batch Aad Auth Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md | Title: Use Microsoft Entra ID to authenticate Batch Management solutions description: Explore using Microsoft Entra ID to authenticate from applications that use the Batch Management .NET library. Previously updated : 04/27/2017 Last updated : 06/24/2024 |
batch | Manage Private Endpoint Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/manage-private-endpoint-connections.md | Title: Manage private endpoint connections with Azure Batch accounts description: Learn how to manage private endpoint connections with Azure Batch accounts, including list, approve, reject and remove. Previously updated : 05/26/2022 Last updated : 06/24/2024 # Manage private endpoint connections with Azure Batch accounts |
batch | Batch Cli Sample Add Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-add-application.md | Title: Azure CLI Script Example - Add an Application in Batch | Microsoft Docs description: Learn how to add an application for use with an Azure Batch pool or a task using the Azure CLI. Previously updated : 05/24/2022 Last updated : 06/24/2024 keywords: batch, azure cli samples, azure cli code samples, azure cli script samples |
batch | Batch Cli Sample Create Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-account.md | Title: Azure CLI Script Example - Create Batch account - Batch service | Microsoft Docs description: Learn how to create a Batch account in Batch service mode with this Azure CLI script example. This script also shows how to query or update various properties of the account. Previously updated : 05/24/2022 Last updated : 06/24/2024 keywords: batch, azure cli samples, azure cli code samples, azure cli script samples |
batch | Batch Cli Sample Create User Subscription Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md | Title: Azure CLI Script Example - Create Batch account - user subscription | Microsoft Docs description: Learn how to create an Azure Batch account in user subscription mode. This account allocates compute nodes into your subscription. Previously updated : 05/24/2022 Last updated : 06/24/2024 keywords: batch, azure cli samples, azure cli examples, azure cli code samples |
batch | Batch Cli Sample Manage Linux Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-linux-pool.md | Title: Azure CLI Script Example - Linux Pool in Batch | Microsoft Docs description: Learn the commands available in the Azure CLI to create and manage a pool of Linux compute nodes in Azure Batch. Previously updated : 05/24/2022 Last updated : 06/24/2024 keywords: linux, azure cli samples, azure cli code samples, azure cli script samples |
batch | Batch Cli Sample Manage Windows Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-windows-pool.md | Title: Azure CLI Script Example - Windows Pool in Batch | Microsoft Docs description: Learn some of the commands available in the Azure CLI to create and manage a pool of Windows compute nodes in Azure Batch. Previously updated : 05/24/2022 Last updated : 06/24/2024 keywords: windows pool, azure cli samples, azure cli code samples, azure cli script samples |
communication-services | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md | If you wish to call Azure Communication Services' APIs manually using an access <a name='azure-ad-authentication'></a> -### Microsoft Entra authentication -The Azure platform provides role-based access (Azure RBAC) to control access to the resources. Azure RBAC security principal represents a user, group, service principal, or managed identity that is requesting access to Azure resources. Microsoft Entra authentication provides superior security and ease of use over other authorization options. For example, by using managed identity, you avoid having to store your account access key within your code, as you do with Access Key authorization. While you can continue to use Access Key authorization with communication services applications, Microsoft recommends moving to Microsoft Entra ID where possible. To set up a service principal, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal.md?pivots=platform-azcli). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [service principal](../quickstarts/identity/service-principal.md) is used. -Communication services supports Microsoft Entra authentication for Communication services resources. You can find more details, about the managed identity support in the [Microsoft Entra documentation](/entra/identity/managed-identities-azure-resources/managed-identities-status). +Communication Services supports Microsoft Entra ID authentication for Communication Services resources. You can find more details about managed identity support in [How to use Managed Identity with Azure Communication Services](/azure/communication-services/how-tos/managed-identity). +++### Microsoft Entra ID authentication ++The Azure platform provides role-based access control (Azure RBAC) to manage access to resources. An Azure RBAC security principal represents a user, group, service principal, or managed identity that is requesting access to Azure resources. Microsoft Entra ID authentication provides superior security and ease of use over other authorization options. ++- **Managed Identity:** + - By using managed identity, you avoid having to store your account access key within your code, as you do with Access Key authorization. Managed identity credentials are fully managed, rotated, and protected by the platform, reducing the risk of credential exposure. + - Managed identities can authenticate to Azure services and resources that support Microsoft Entra ID authentication. This method provides a seamless and secure way to manage credentials. + - For more information on how to use Managed Identity with Azure Communication Services, see [How to use Managed Identity with Azure Communication Services](/azure/communication-services/how-tos/managed-identity). + + ++- **Service Principal:** + - To set up a service principal, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal.md?pivots=platform-azcli). Then, the endpoint and credentials can be used to authenticate the SDKs. + - See examples of how a [service principal](../quickstarts/identity/service-principal.md) is used. ++Communication Services supports Microsoft Entra ID authentication for Communication Services resources. While you can continue to use Access Key authorization with Communication Services applications, Microsoft recommends moving to Microsoft Entra ID where possible. 
++ Use our [Trusted authentication service hero sample](../samples/trusted-auth-sample.md) to map Azure Communication Services access tokens to your Microsoft Entra ID. ### User Access Tokens |
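The service principal path described above begins with an app registration, as the linked quickstart shows. A minimal sketch of that first step; the display name is a placeholder, and the exported variables are the standard ones read by Azure Identity credential types such as `DefaultAzureCredential`:

```bash
# Register an application and create a service principal for it.
# The display name is a placeholder.
az ad sp create-for-rbac --name myCommunicationApp

# Environment variables that the Azure Identity libraries read.
export AZURE_CLIENT_ID="<appId from the output>"
export AZURE_CLIENT_SECRET="<password from the output>"
export AZURE_TENANT_ID="<tenant from the output>"
```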
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | The Call Automation events are sent to the web hook callback URI specified when | -- | | | CallConnected | The call has successfully started (when using Answer or Create action) or your application has successfully connected to an ongoing call (when using Connect action)| | CallDisconnected | Your application has been disconnected from the call |+| CreateCallFailed (in preview)| The call that your application has initiated could not be created | | ConnectFailed | Your application failed to connect to a call (for connect call action only)| | CallTransferAccepted | Transfer action has successfully completed and the transferee is connected to the target participant | | CallTransferFailed | The transfer action has failed | |
communication-services | Actions For Call Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md | The response provides you with CallConnection object that you can use to take fu 2. `ParticipantsUpdated` event that contains the latest list of participants in the call. ![Sequence diagram for placing an outbound call.](media/make-call-flow.png) +If the call fails, you receive `CallDisconnected` and `CreateCallFailed` events with error codes for further troubleshooting. + ## Connect to a call (in preview) Connect action enables your service to establish a connection with an ongoing call and take actions on it. This is useful to manage a Rooms call or when client applications started a 1:1 or group call that Call automation isn't part of. Connection is established using the CallLocator property and can be of types: ServerCallLocator, GroupCallLocator, and RoomCallLocator. These IDs can be found when the call is originally established or a Room is created, and also published as part of [CallStarted](./../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationcallstarted) event. |
communication-services | Handle Events With Event Processor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/handle-events-with-event-processor.md | CreateCallEventResult eventResult = await createCallResult.WaitForEventProcessor CallConnected returnedEvent = eventResult.SuccessResult; ``` -With EventProcessor, we can easily wait CallConnected event until the call is established. If the call was never established (that is, callee never picked up the phone), it throws Timeout Exception. +With EventProcessor, we can easily wait for the CallConnected event until the call is established. If the call was never established (that is, the callee never picked up the phone), it throws a timeout exception. If the creation of the call otherwise fails, you receive the `CallDisconnected` and `CreateCallFailed` events with error codes to further troubleshoot. > [!NOTE] > If specific timeout was not given when waiting on EventProcessor, it will wait until its default timeout happens. The default timeout is 4 minutes. |
communication-services | Breakoutrooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/breakoutrooms.md | The following tables show support of individual APIs in calling SDK to individua ### SDKs The following tables show support of breakout rooms feature in individual Azure Communication Services SDKs. -| | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | -|-|--|--|--|--|-|--|| -|Is Supported | ✔️ | | | | | | | +| Support status | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | +|-|--|--|--|--|-|--|| +| Is Supported | ✔️ | | | | | | | ## Breakout rooms |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/capabilities.md | Title: Get local user capabilities -description: Use Azure Communication Services SDKs to get capabilities of the local user in a call. +description: Use Azure Communication Services SDK to get capabilities of the local user in a call. -Do I have permission to turn on video, do I have permission to turn on mic, do I have permission to share screen? Those are some examples of participant capabilities that you can learn from the capabilities API. Learning the capabilities can help build a user interface that only shows the buttons related to the actions the local user has permissions to. +Do I have permission to turn on video, do I have permission to turn on mic, do I have permission to share screen? Those permissions are examples of participant capabilities that you can learn from the capabilities API. Learning the capabilities can help build a user interface that only shows the buttons related to the actions the local user has permissions to. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Do I have permission to turn on video, do I have permission to turn on mic, do I [!INCLUDE [Capabilities iOS](./includes/capabilities/capabilities-ios.md)] ::: zone-end -## Supported Calltype +## Supported call types The feature is currently supported only for the Azure Communication Services Rooms call type and the Teams meeting call type. +## Reasons ++The following table provides additional information about why an action isn't available and provides tips on how to make the action available. ++| Reason | Description | Resolution | +|||| +| Capable | Action is allowed. | | +| CapabilityNotApplicableForTheCallType | Call type blocks the action. | Consider another type of call if you need this action. The call types are: 1:1 call, group call, 1:1 Teams interop call, 1:1 Teams interop group call, Room, and Meeting. | +| ClientRestricted | The runtime environment is blocking this action. | Unblock the action on your device by changing the operating system, browser, platform, or hardware. You can find supported environments in our documentation. | +| UserPolicyRestricted | Microsoft 365 user's policy blocks the action. | Enable this action by changing the policy that is assigned to the organizer of the meeting, the initiator of the call, or the Microsoft 365 user using the ACS SDK. The target user depends on the type of action. Learn more about Teams policies in Teams. A Teams administrator can change policies. | +| RoleRestricted | Assigned role blocks the action. | Promote the user to a different role to make the action available. | +| FeatureNotSupported | The capabilities feature isn't supported in this call type. | Let us know in the Azure Feedback channel that you would like to have this feature available for this call type. | +| MeetingRestricted | Teams meeting option blocks the action. | The Teams meeting organizer or co-organizer needs to change the meeting option to enable this action. | +| NotInitialized | The capabilities feature isn't initialized yet. | Subscribe to the `capabilitiesChanged` event on `this.call.feature(Features.Capabilities)` to know when the capability is initialized. | +| NotCapable | User type blocks the action. | The action is only allowed to a specific type of identity. Enable this action by using a Microsoft 365 identity. 
| +| TeamsPremiumLicenseRestricted | The Microsoft 365 user needs to have a Teams Premium license assigned. | Enable this action by assigning a Teams Premium license to the Teams meeting organizer or the Microsoft 365 user using the SDK. The target user depends on the type of action. A Microsoft 365 admin can assign the required license. | + ## Next steps - [Learn how to manage video](./manage-video.md) - [Learn how to manage calls](./manage-calls.md) |
communication-services | Button Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/button-injection.md | This functionality provides a high degree of customization, and ensures that the ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps |
communication-services | Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/events.md | + + Title: Handle events in the UI Library ++description: Handle events in the Azure Communication Services UI Library. ++++++ Last updated : 09/01/2024++zone_pivot_groups: acs-plat-ios-android ++#Customer intent: As a developer, I want to handle events in the UI Library +++# Subscribe to events in the UI Library +++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A user access token to enable the call client. [Get a user access token](../../quickstarts/access-tokens.md). +- Optional: Completion of the [quickstart for getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md). ++## Set up the feature ++++## Next steps ++- [Learn more about the UI Library](../../concepts/ui-library/ui-library-overview.md) |
communication-services | Setup Title Subtitle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/setup-title-subtitle.md | + + Title: Customize the title and subtitle of the call bar in the UI Library ++description: Customize the title and subtitle of the call in the Azure Communication Services UI Library. ++++++ Last updated : 09/01/2024++zone_pivot_groups: acs-plat-ios-android ++#Customer intent: As a developer, I want to customize the title and subtitle of the call in the UI Library +++# Customize the title and subtitle +++Developers can now customize the title and subtitle of a call, both during setup and while the call is in progress. This feature allows for greater flexibility in aligning the call experience with specific use cases. ++For instance, in a customer support scenario, the title could display the issue being addressed, while the subtitle could show the customer's name or ticket number. ++Additionally, if tracking time spent in various segments of the call is crucial, the subtitle could dynamically update to display the elapsed call duration, helping to manage the meeting or session effectively. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A user access token to enable the call client. [Get a user access token](../../quickstarts/access-tokens.md). +- Optional: Completion of the [quickstart for getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md). ++## Set up the feature ++++## Next steps ++- [Learn more about the UI Library](../../concepts/ui-library/ui-library-overview.md) |
container-apps | Dapr Keda Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-keda-scaling.md | resource orders 'Microsoft.App/containerApps@2022-03-01' = { name: 'topic-based-scaling' custom: { type: 'azure-servicebus'+ identity: 'system' metadata: { topicName: 'orders' subscriptionName: 'membership-orders' messageCount: '30' }- auth: [ - { - secretRef: 'sb-root-connectionstring' - triggerParameter: 'connection' - } - ] } } ] Notice the `messageCount` property on the scaler's configuration in the subscrib This property tells the scaler how many messages each instance of the application can process at the same time. In this example, the value is set to `30`, indicating that there should be one instance of the application created for each group of 30 messages waiting in the topic. -For example, if 150 messages are waiting, KEDA scales the app out to five instances. The `maxReplicas` property is set to `10`, meaning even with a large number of messages in the topic, the scaler never creates more than `10` instances of this application. This setting ensures you don't scale up too much and accrue too much cost. +For example, if 150 messages are waiting, KEDA scales the app out to five instances. The `maxReplicas` property is set to `10`. Even with a large number of messages in the topic, the scaler never creates more than `10` instances of this application. This setting ensures you don't scale up too much and accrue too much cost. ## Next steps |
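As a rough illustration of the scaling arithmetic described above, the following TypeScript sketch reproduces the replica math: one replica per `messageCount` waiting messages, capped at `maxReplicas`. It illustrates the rule's behavior under the values from the Bicep sample; it is not KEDA's actual implementation.

```typescript
// One replica per `messageCount` waiting messages, capped at `maxReplicas`.
// Defaults mirror the Bicep sample above (messageCount: 30, maxReplicas: 10).
function desiredReplicas(
  waitingMessages: number,
  messageCount = 30,
  maxReplicas = 10
): number {
  return Math.min(Math.ceil(waitingMessages / messageCount), maxReplicas);
}

console.log(desiredReplicas(150)); // 5  -> matches the example in the text
console.log(desiredReplicas(900)); // 10 -> capped, even though 30 replicas' worth of messages are waiting
```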
cost-management-billing | Migrate Consumption Usage Details Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-consumption-usage-details-api.md | -Work is underway to retire Enterprise Agreement (EA) reporting APIs. We recommend that EA customers migrate to the Cost Management [Cost Details](/rest/api/cost-management/generate-cost-details-report) API. The older EA reporting APIs are only available to customers with an Enterprise Agreement. +The Enterprise Agreement (EA) reporting APIs, which use an API key for authentication and are accessed through the consumption.azure.com URI endpoint, are retired. EA customers using these APIs should migrate to the Cost Management [Cost Details](/rest/api/cost-management/generate-cost-details-report) API. These older EA reporting APIs are only available to customers with an Enterprise Agreement. If you use the [Consumption Usage Details API](/rest/api/consumption/usage-details/list), we *recommend*, but don't require, that you migrate to the Cost Management [Cost Details](/rest/api/cost-management/generate-cost-details-report) API. |
cost-management-billing | Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/azure-openai.md | The Azure OpenAI reservation size should be based on the total provisioned throu For example, assume that your total consumption of provisioned throughput units is 100 units. You want to purchase a reservation for all of it, so you should purchase a reservation quantity of 100. +> [!CAUTION] +> Capacity availability for model deployments is dynamic and changes frequently across regions and models. To prevent buying a reservation for more PTUs than you can use, create deployments first. Then buy the reservation to cover the PTUs you deployed. This best practice ensures that you maximize the reservation discount and helps to prevent you from purchasing a term commitment that you can’t fully use. + ## Buy a Microsoft Azure OpenAI reservation To buy an Azure OpenAI reservation, follow these steps: |
data-factory | Source Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md | For more info about connecting Azure Repos to your organization's Active Directo ## Author with GitHub integration -Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, versioning. A single GitHub account can have multiple repositories, but a GitHub repository can be associated with only one data factory. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources. +Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, and versioning. A single GitHub account can host multiple repositories, and each repository can be associated with multiple data factories. By configuring each data factory to use a different branch within the same repository, you can maintain separate environments (such as development, staging, and production) while managing their configurations independently. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources. The GitHub integration with Data Factory supports public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud, and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. To connect with a public repository, select the **Use Link Repository** option, because public repositories aren't visible in the dropdown menu of **Repository name**. ADF’s GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases). |
governance | Query Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md | This query first uses the shared query, and then uses `limit` to further restric Resource Graph supports a subset of KQL [data types](/azure/data-explorer/kusto/query/scalar-data-types/), [scalar functions](/azure/data-explorer/kusto/query/scalarfunctions), [scalar operators](/azure/data-explorer/kusto/query/binoperators), and-[aggregation functions](/azure/data-explorer/kusto/query/any-aggfunction). Specific +[aggregation functions](/kusto/query/aggregation-functions). Specific [tabular operators](/azure/data-explorer/kusto/query/queries) are supported by Resource Graph, some of which have different behaviors. ### Supported tabular/top level operators |
hdinsight | Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-monitor-agent.md | Title: Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters description: Learn how to migrate to Azure Monitor Agent (AMA) in Azure HDInsight clusters. Previously updated : 09/06/2024 Last updated : 09/12/2024 # Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters Activate the new integration by going to your cluster's portal page and scrollin $hdinsightClusterResourceId = "/subscriptions/{subscription}/resourceGroups/{resourceGroup}/providers/Microsoft.HDInsight/clusters/{clusterName}" - $dcrAssociationName = "dcrAssociationName {yourDcrAssociation} " + $dcrAssociationName = "{yourDcrAssociation}" New-AzDataCollectionRuleAssociation -AssociationName $dcrAssociationName -ResourceUri $hdinsightClusterResourceId -DataCollectionRuleId $dcr.Id ``` |
healthcare-apis | Migration Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-faq.md | Last updated 9/27/2023 ## When will Azure API for FHIR be retired? -Azure API for FHIR will be retired on September 30, 2026. +Azure API for FHIR® will be retired on September 30, 2026. ## Are new deployments of Azure API for FHIR allowed? Azure API for FHIR is a service that was purpose built for protected health info Azure Health Data Services FHIR service offers a rich set of capabilities such as: -- Consumption-based pricing model where customers pay only for used storage and throughput-- Support for transaction bundles-- Chained search improvements-- Improved ingress and egress of data with \$import, \$export including new features such as incremental import-- Events to trigger new workflows when FHIR resources are created, updated or deleted-- Connectors to Azure Synapse Analytics, Power BI and Azure Machine Learning for enhanced analytics+- Consumption-based pricing model where customers pay only for used storage and throughput. +- Support for transaction bundles. +- Chained search improvements. +- Improved ingress and egress of data with `$import` and `$export`, including new features such as incremental import. +- Events to trigger new workflows when FHIR resources are created, updated, or deleted. +- Connectors to Azure Synapse Analytics, Power BI, and Azure Machine Learning for enhanced analytics. ## What are the steps to enable SMART on FHIR in Azure Health Data Services FHIR service? -SMART on FHIR proxy is retiring. Organizations need to transition to the SMART on FHIR (Enhanced), which uses Azure Health Data and AI OSS samples by **September 21, 2026**. After September 21, 2026, applications relying on SMART on FHIR proxy will report errors when accessing the FHIR service. +The SMART on FHIR proxy is retiring. Organizations need to transition to the SMART on FHIR (Enhanced), which uses Azure Health Data and AI OSS samples, by **September 21, 2026**. After September 21, 2026, applications relying on SMART on FHIR proxy will report errors when accessing the FHIR service. -SMART on FHIR (Enhanced) provides more capabilities than SMART on FHIR proxy and meets requirements in the SMART on FHIR Implementation Guide (v 1.0.0) and §170.315(g)(10) Standardized API for patient and population services criterion. +SMART on FHIR (Enhanced) provides more capabilities than SMART on FHIR proxy, and meets requirements in the SMART on FHIR Implementation Guide (v 1.0.0) and §170.315(g)(10) Standardized API for patient and population services criterion. ## What will happen after the service is retired on September 30, 2026? After September 30, 2026, customers won't be able to: -- Create or manage Azure API for FHIR accounts-- Access the data through the Azure portal or APIs/SDKs/client tools-- Receive service updates to Azure API for FHIR or APIs/SDKs/client tools-- Access customer support (phone, email, web)-- Where can customers go to learn more about migrating to Azure Health Data Services FHIR service?+- Create or manage Azure API for FHIR accounts. +- Access the data through the Azure portal or APIs/SDKs/client tools. +- Receive service updates to Azure API for FHIR or APIs/SDKs/client tools. +- Access customer support (phone, email, web). ++## Where can customers go to learn more about migrating to Azure Health Data Services FHIR service? 
Start with [migration strategies](migration-strategies.md) to learn more about Azure API for FHIR to Azure Health Data Services FHIR service migration. The migration from Azure API for FHIR to Azure Health Data Services FHIR service involves data migration and updating the applications to use Azure Health Data Services FHIR service. Find more documentation on the step-by-step approach to migrating your data and applications in the [migration tool](https://github.com/Azure/apiforfhir-migration-tool/blob/main/lift-and-shift-resources/Liftandshiftresources_README.md). Check out these resources if you need further assistance: - Get answers from community experts in [Microsoft Q&A](/answers/questions/1377356/retirement-announcement-azure-api-for-fhir). - If you have a support plan and require technical support, [contact us](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). - |
healthcare-apis | Migration Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-strategies.md | Last updated 9/27/2023 [!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)] -Azure Health Data Services FHIR service is the next-generation platform for health data integration. It offers managed, enterprise-grade FHIR, DICOM, and MedTech services for diverse health data exchange. +Azure Health Data Services FHIR® service is the next-generation platform for health data integration. It offers managed, enterprise-grade FHIR, DICOM, and MedTech services for diverse health data exchange. When you migrate your FHIR data from Azure API for FHIR to Azure Health Data Services FHIR service, your organization can benefit from improved performance, scalability, security, and compliance. Organizations can also access new features and capabilities that aren't available in Azure API for FHIR. Azure API for FHIR will be retired on September 30, 2026, so you need to migrate ## Recommended approach -To migrate your data, follow these steps: +To migrate your data, follow these steps. - Step 1: Assess readiness - Step 2: Prepare to migrate Compare the differences between Azure API for FHIR and Azure Health Data Service |Capabilities|Azure API for FHIR|Azure Health Data Services| |||--| |**Settings**|Supported: <br> • Local RBAC <br> • SMART on FHIR Proxy|Planned deprecation: <br> • Local RBAC (9/6/23) <br> • SMART on FHIR Proxy (9/21/26)|-|**Data storage Volume**|More than 4 TB|Current support is 4 TB (Open an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you need more than 4 TB)| -|**Data ingress**|Tools available in OSS|$import operation| +|**Data storage Volume**|More than 4 TB|Current support is 4 TB. Open an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you need more than 4 TB.| +|**Data ingress**|Tools available in OSS|`$import` operation| |**Autoscaling**|Supported on request and incurs charge|Enabled by default at no extra charge|-|**Search parameters**|Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, last name, birthdate and clinical date|Bundle type supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier is supported <br>• Sorting supported by string and dateTime fields| +|**Search parameters**|Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, family name, birthdate, and clinical date|Bundle type supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier is supported <br>• Sorting supported by string and dateTime fields| |**Events**|Not Supported|Supported| |**Infrastructure**|Supported: <br> • Customer managed keys <br> • Cross region DR (disaster recovery) <br>|Supported: <br> • PITR (point in time recovery) <br> • [Customer managed keys](configure-customer-managed-keys.md) <br> Upcoming: <br> • Availability zone support| Compare the differences between Azure API for FHIR and Azure Health Data Service - **Azure Health Data Services FHIR service does not support local RBAC and custom authority**. 
The token issuer authority needs to be the authentication endpoint for the tenant that the FHIR Service is running in. -- **The IoT connector is only supported using an Azure API for FHIR service**. The IoT connector is succeeded by the MedTech service. You need to deploy a MedTech service and corresponding FHIR service within an existing or new Azure Health Data Services workspace and point your devices to the new Azure Events Hubs device event hub. Use the existing IoT connector device and destination mapping files with the MedTech service deployment.+- **The IoT connector is only supported using an Azure API for FHIR service**. The IoT connector is succeeded by the MedTech service. You need to deploy a MedTech service and corresponding FHIR service within an existing or new Azure Health Data Services workspace, and point your devices to the new Azure Event Hubs device event hub. Use the existing IoT connector device and destination mapping files with the MedTech service deployment. If you want to migrate existing IoT connector device FHIR data from your Azure API for FHIR service to the Azure Health Data Services FHIR service, use the bulk export and import functionality in the migration tool. Another migration path would be to deploy a new MedTech service and replay the IoT device messages through the MedTech service. ## Step 2: Prepare to migrate -First, create a migration plan. We recommend the migration patterns described in the table. Depending on your organization’s tolerance for downtime, you may decide to use certain patterns and tools to help facilitate your migration. +First, create a migration plan. We recommend the migration patterns described in the following table. Depending on your organization’s tolerance for downtime, you may decide to use certain patterns and tools to help facilitate your migration. |Migration pattern|Details|How?| |--|-|-|-|**Lift and shift**|The simplest pattern. Ideal if your data pipeline can afford longer downtime.|Choose the option that works best for your organization: <br> • Configure a workflow to [\$export](../azure-api-for-fhir/export-data.md) your data on Azure API for FHIR, and then [\$import](configure-import-data.md) into Azure Health Data Services FHIR service. <br> • The [GitHub repo](https://github.com/Azure/apiforfhir-migration-tool/blob/main/lift-and-shift-resources/Liftandshiftresources_README.md) provides tips on running these commands, and a script to help automate creating the \$import payload. <br> • Or create your own tool to migrate the data using \$export and \$import.| -|**Incremental copy**|Continuous version of lift and shift, with less downtime. Ideal for large amounts of data that take longer to copy, or if you want to continue running Azure API for FHIR during the migration.|Choose the option that works best for your organization. <br> • We created an [OSS migration tool](https://github.com/Azure/apiforfhir-migration-tool/tree/main/FHIR-data-migration-tool-docs) to help with this migration pattern. <br> • Or create your own tool to migrate the data incrementally.| +|**Lift and shift**|The simplest pattern. Ideal if your data pipeline can afford longer downtime.|Choose the option that works best for your organization: <br> • Configure a workflow to [$export](../azure-api-for-fhir/export-data.md) your data on Azure API for FHIR, and then [$import](configure-import-data.md) into Azure Health Data Services FHIR service. 
<br> • The [GitHub repo](https://github.com/Azure/apiforfhir-migration-tool/blob/main/lift-and-shift-resources/Liftandshiftresources_README.md) provides tips on running these commands, and a script to help automate creating the `$import` payload. <br> • Create your own tool to migrate the data using `$export` and `$import`.| +|**Incremental copy**|Continuous version of lift and shift, with less downtime. Ideal for large amounts of data that take longer to copy, or if you want to continue running Azure API for FHIR during the migration.|Choose the option that works best for your organization. <br> • We created an [OSS migration tool](https://github.com/Azure/apiforfhir-migration-tool/tree/main/FHIR-data-migration-tool-docs) to help with this migration pattern. <br> • Create your own tool to migrate the data incrementally.| ### OSS migration tool considerations Identify data to migrate. Deploy a new Azure Health Data Services FHIR Service server. - First, deploy an Azure Health Data Services workspace. -- Then deploy an Azure Health Data Services FHIR Service server. More information: [Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md)+- Then deploy an Azure Health Data Services FHIR Service server. For more information, see [Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md). - Configure your new Azure Health Data Services FHIR Service server. If you need to use the same configurations as you have in Azure API for FHIR for your new server, see the recommended list of what to check for in the [migration tool documentation](https://github.com/Azure/apiforfhir-migration-tool/blob/main/FHIR-data-migration-tool-docs/Appendix.md). Configure the settings before you migrate. Migrate applications that were pointing to the old FHIR server. - Set up permissions again for [these apps](/azure/storage/blobs/assign-azure-role-data-access). -- Reconfigure any remaining settings in the new Azure Health Data Services FHIR Service server after migration.+- After migration, reconfigure any remaining settings in the new Azure Health Data Services FHIR service server. -- If you’d like to double check to make sure that the Azure Health Data Services FHIR Service and Azure API for FHIR servers have the same configurations, you can check both [metadata endpoints](use-postman.md#get-the-capability-statement) to compare and contrast the two servers.+- If you’d like to double check that the Azure Health Data Services FHIR service and Azure API for FHIR servers have the same configurations, you can check both [metadata endpoints](use-postman.md#get-the-capability-statement) to compare the two servers. -- Set up any jobs that were previously running in your old Azure API for FHIR server (for example, \$export jobs)+- Set up any jobs that were previously running in your old Azure API for FHIR server (for example, `$export` jobs). ## Step 5: Cut over to Azure Health Data Services FHIR services -After you’re confident that your Azure Health Data Services FHIR Service server is stable, you can begin using Azure Health Data Services FHIR service to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure API for FHIR, delete data from the intermediate storage account that was used in the migration tool if necessary, delete data from your Azure API for FHIR server, and decommission your Azure API for FHIR account. 
+After you’re confident that your Azure Health Data Services FHIR Service server is stable, you can begin using Azure Health Data Services FHIR service to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure API for FHIR. If necessary, delete data from the intermediate storage account that was used in the migration tool. Delete data from your Azure API for FHIR server, and decommission your Azure API for FHIR account. + |
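For the lift-and-shift pattern in the migration table above, the bulk export call is asynchronous. The following TypeScript sketch shows the general shape of kicking off a system-level `$export` and polling the status endpoint, per the FHIR bulk data pattern. The token handling is assumed, and the linked GitHub repo remains the authoritative walkthrough, including building the `$import` payload.

```typescript
// Kick off a system-level $export and poll until the manifest is ready.
// `fhirUrl` and `token` are assumed to be configured elsewhere, and $export
// must already be set up on the server (storage account, identity, and so on).
async function exportAllData(fhirUrl: string, token: string): Promise<unknown> {
  const kickoff = await fetch(`${fhirUrl}/$export`, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/fhir+json",
      Prefer: "respond-async", // bulk export runs asynchronously
    },
  });
  if (kickoff.status !== 202) {
    throw new Error(`Export not accepted: ${kickoff.status}`);
  }

  // The Content-Location header carries the status URL to poll.
  const statusUrl = kickoff.headers.get("content-location");
  if (!statusUrl) throw new Error("No status URL returned");

  for (;;) {
    const status = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (status.status === 200) return status.json(); // manifest of exported files
    if (status.status !== 202) throw new Error(`Export failed: ${status.status}`);
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // still running
  }
}
```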
healthcare-apis | Overview Of Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md | -The Fast Healthcare Interoperability Resources (FHIR®) specification defines an API for querying resources in a FHIR server database. This article guides you through some key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation. +The Fast Healthcare Interoperability Resources (FHIR®) specification defines an API for querying resources in a FHIR server database. This article guides you through key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation. -Throughout this article, we'll demonstrate FHIR search syntax in example API calls with the `{{FHIR_URL}}` placeholder to represent the FHIR server URL. In the case of the FHIR service in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`. +Throughout this article, we demonstrate FHIR search syntax in example API calls with the `{{FHIR_URL}}` placeholder to represent the FHIR server URL. If the FHIR service is in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`. -FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources in the FHIR server database. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all `Patient` resources in the database, you could use the following request: +FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources in the FHIR server database. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all `Patient` resources in the database, you could use the following request. ```rest GET {{FHIR_URL}}/Patient GET {{FHIR_URL}}/Patient You can also search using `POST`. To search using `POST`, the search parameters are delivered in the body of the request. This makes it easier to send queries with longer, more complex series of parameters. -With either `POST` or `GET`, if the search request is successful, you'll receive a FHIR `searchset` bundle containing the resource instance(s) returned from the search. If the search fails, you’ll find the error details in an `OperationOutcome` response. +With either `POST` or `GET`, if the search request is successful, you receive a FHIR `searchset` bundle containing the resource instances returned from the search. If the search fails, you’ll find the error details in an `OperationOutcome` response. -In the following sections, we'll cover the various aspects of querying resources in FHIR. Once you’ve reviewed these topics, refer to the [FHIR search samples page](search-samples.md), which features examples of different FHIR search methods. +In the following sections, we cover the various aspects of querying resources in FHIR. Once you’ve reviewed these topics, refer to the [FHIR search samples page](search-samples.md), which features examples of different FHIR search methods. 
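Before moving on to search parameters, here's a minimal TypeScript sketch of the two request styles just described, using `fetch`. The `token` parameter is an assumption; per the FHIR specification, a `POST` search sends the same parameters form-encoded to the `_search` endpoint.

```typescript
// Assumed placeholder; substitute your own FHIR server URL.
const FHIR_URL =
  "https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com";

// GET search: parameters travel in the query string.
async function searchPatientsByGet(token: string): Promise<unknown> {
  const res = await fetch(`${FHIR_URL}/Patient?name=Jane`, {
    headers: { Authorization: `Bearer ${token}`, Accept: "application/fhir+json" },
  });
  return res.json(); // a searchset Bundle, or an OperationOutcome on failure
}

// POST search: the same parameters, form-encoded in the body of _search.
async function searchPatientsByPost(token: string): Promise<unknown> {
  const res = await fetch(`${FHIR_URL}/Patient/_search`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/fhir+json",
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ name: "Jane" }),
  });
  return res.json();
}
```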
## Search parameters -When you do a search in FHIR, you're searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and the corresponding element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria. +When you do a search in FHIR, you're searching the database for resources that match certain criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and corresponding element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instances whose elements satisfied the search criteria. -For each search parameter, the FHIR specification defines the [data type(s)](https://www.hl7.org/fhir/search.html#ptypes) that can be used. Support in the FHIR service for the various data types is outlined below. +For each search parameter, the FHIR specification defines the [data type](https://www.hl7.org/fhir/search.html#ptypes) that can be used. Support in the FHIR service for the various data types is outlined below. | **Search parameter type** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**| | - | -- | - | |-| number | Yes | Yes | -| date | Yes | Yes | -| string | Yes | Yes | -| token | Yes | Yes | -| reference | Yes | Yes | -| composite | Partial | Partial | The list of supported composite types is given later in this article. | -| quantity | Yes | Yes | -| uri | Yes | Yes | -| special | No | No | +| number | Yes | Yes | | +| date | Yes | Yes | | +| string | Yes | Yes | | +| token | Yes | Yes | | +| reference | Yes | Yes | | +| composite | Partial | Partial | The list of supported composite types follows in this article. | +| quantity | Yes | Yes | | +| uri | Yes | Yes | | +| special | No | No | | ### Common search parameters -There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources in FHIR. These are listed below, along with their support in the FHIR service: +There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources in FHIR. These are listed as follows, along with their support in the FHIR service. 
| **Common search parameter** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**| | - | -- | - | |-| `_id ` | Yes | Yes -| `_lastUpdated` | Yes | Yes | -| `_tag` | Yes | Yes | -| `_type` | Yes | Yes | -| `_security` | Yes | Yes | -| `_profile` | Yes | Yes | -| `_has` | Yes | Yes | -| `_query` | No | No | -| `_filter` | No | No | -| `_list` | No | No | -| `_text` | No | No | -| `_content` | No | No | +| `_id ` | Yes | Yes | | +| `_lastUpdated` | Yes | Yes | | +| `_tag` | Yes | Yes | | +| `_type` | Yes | Yes | | +| `_security` | Yes | Yes | | +| `_profile` | Yes | Yes | | +| `_has` | Yes | Yes | | +| `_query` | No | No | | +| `_filter` | No | No | | +| `_list` | No | No | | +| `_text` | No | No | | +| `_content` | No | No | | ### Resource-specific parameters GET {{FHIR_URL}}/metadata To view the supported search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` for the resource-specific search parameters and `CapabilityStatement.rest.searchParam` for search parameters that apply to all resources. > [!NOTE]-> The FHIR service in Azure Health Data Services does not automatically index search parameters that aren't defined in the base FHIR specification. However, the FHIR service does support [custom search parameters](how-to-do-custom-search.md). +> The FHIR service in Azure Health Data Services does not automatically index search parameters that aren't defined in the base FHIR specification. The FHIR service does support [custom search parameters](how-to-do-custom-search.md). ### Composite search parameters-Composite searches in FHIR allow you to search against element pairs as logically connected units. For example, if you were searching for observations where the height of the patient was over 60 inches, you would want to make sure that a single property of the observation contained the height code *and* a value greater than 60 inches (the value should only pertain to height). You wouldn't want to return a positive match on an observation with the height code *and* an arm to arm length over 60 inches, for example. Composite search parameters prevent this problem by searching against pre-specified pairs of elements whose values must both meet the search criteria for a positive match to occur. +Composite searches in FHIR allow you to search against element pairs as logically connected units. For example, if you were searching for observations where the height of the patient was over 60 inches, you would want to make sure that a single property of the observation contained the height code *and* a value greater than 60 inches (the value should only pertain to height). For example, you wouldn't want a positive match on an observation with the height code *and* an arm length code over 60 inches. Composite search parameters prevent this problem by searching against pre-specified pairs of elements whose values must both meet the search criteria for a positive match to occur. -The FHIR service in Azure Health Data Services supports the following search parameter type pairings for composite searches: +The FHIR service in Azure Health Data Services supports the following search parameter type pairings for composite searches. * Reference, Token * Token, Date For more information, see the HL7 [Composite Search Parameters](https://www.hl7. ### Modifiers & prefixes -[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to qualify search parameters with additional conditions. 
Below is a list of FHIR modifiers and their support in the FHIR service: +[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to qualify search parameters with additional conditions. Below is a table of FHIR modifiers and their support in the FHIR service. | **Modifiers** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**| | - | -- | - | |-| `:missing` | Yes | Yes | -| `:exact` | Yes | Yes | -| `:contains` | Yes | Yes | -| `:text` | Yes | Yes | -| `:type` (reference) | Yes | Yes | -| `:not` | Yes | Yes | -| `:below` (uri) | Yes | Yes | -| `:above` (uri) | Yes | Yes | -| `:in` (token) | No | No | -| `:below` (token) | No | No | -| `:above` (token) | No | No | -| `:not-in` (token) | No | No | -| `:identifier` |No | No | +| `:missing` | Yes | Yes | | +| `:exact` | Yes | Yes | | +| `:contains` | Yes | Yes | | +| `:text` | Yes | Yes | | +| `:type` (reference) | Yes | Yes | | +| `:not` | Yes | Yes | | +| `:below` (uri) | Yes | Yes | | +| `:above` (uri) | Yes | Yes | | +| `:in` (token) | No | No | | +| `:below` (token) | No | No | | +| `:above` (token) | No | No | | +| `:not-in` (token) | No | No | | +| `:identifier` |No | No | | For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) before the parameter value to refine the search criteria (for example, `Patient?_lastUpdated=gt2022-08-01` where the prefix `gt` means "greater than"). The FHIR service in Azure Health Data Services supports all prefixes defined in the FHIR standard. ### Search result parameters-FHIR specifies a set of search result parameters to help manage the information returned from a search. For detailed information on how to use search result parameters in FHIR, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website. Below is a list of FHIR search result parameters and their support in the FHIR service. +FHIR specifies a set of search result parameters to help manage the information returned from a search. For details on how to use search result parameters in FHIR, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website. Following is a table of FHIR search result parameters and their support in the FHIR service. | **Search result parameters** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**| | - | -- | - | |-| `_elements` | Yes | Yes | +| `_elements` | Yes | Yes | | | `_count` | Yes | Yes | `_count` is limited to 1000 resources. If it's set higher than 1000, only 1000 are returned and a warning will be included in the bundle. | | `_include` | Yes | Yes | Items retrieved with `_include` are limited to 100. `_include` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |-| `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319). | -| `_summary` | Yes | Yes | +| `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). 
There's also an incorrect status code for a bad request: [#1319](https://github.com/microsoft/fhir-server/issues/1319). | +| `_summary` | Yes | Yes | | | `_total` | Partial | Partial | `_total=none` and `_total=accurate` | | `_sort` | Partial | Partial | `sort=_lastUpdated` is supported on the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields is supported. For Azure API for FHIR and OSS Azure Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |-| `_contained` | No | No | -| `_containedType` | No | No | -| `_score` | No | No | +| `_contained` | No | No | | +| `_containedType` | No | No | | +| `_score` | No | No | | Note: 1. By default, `_sort` arranges records in ascending order. You can also use the prefix `-` to sort in descending order. The FHIR service only allows you to sort on a single field at a time.-1. FHIR service supports wild card searches with revinclude. Adding "*.*" query parameter in revinclude query, it directs FHIR service to reference all the resources mapped to the source resource. +1. FHIR service supports wildcard searches with revinclude. Adding a "*.*" query parameter in a revinclude query directs the FHIR service to reference all the resources mapped to the source resource. By default, the FHIR service in Azure Health Data Services is set to lenient handling. This means that the server ignores any unknown or unsupported parameters. If you want to use strict handling, you can include the `Prefer` header and set `handling=strict`. A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to `GET {{FHIR_URL}}/Encounter?subject:Patient.name=Jane` -The `.` in the above request steers the path of the chained search to the target parameter (`name` in this case). +The `.` in the preceding request steers the path of the chained search to the target parameter (`name` in this case). Similarly, you can do a reverse chained search with the `_has` parameter. This allows you to retrieve resource instances by specifying criteria on other resources that reference the resources of interest. For examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page. ## Pagination -As mentioned above, the results from a FHIR search is available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service displays 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle includes a `next` link. Repeatedly fetching from the `next` link yields the subsequent pages of results. Note that the `_count` parameter value can't exceed 1000. +As previously mentioned, the results from a FHIR search are available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service displays 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle includes a `next` link. Repeatedly fetching from the `next` link yields the subsequent pages of results. Note that the `_count` parameter value can't exceed 1000. Currently, the FHIR service in Azure Health Data Services only supports the `next` link and doesn’t support `first`, `last`, or `previous` links in bundles returned from a search. 
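To tie the pagination behavior to code, here's a minimal TypeScript sketch that drains a paginated search by following each bundle's `next` link. The simplified `Bundle` interface and the token are assumptions; a real client would use full FHIR type definitions.

```typescript
// Simplified Bundle shape; a real client would use full FHIR type definitions.
interface Bundle {
  entry?: unknown[];
  link?: { relation: string; url: string }[];
}

// Follow `next` links until the server stops returning one.
async function fetchAllPages(firstPageUrl: string, token: string): Promise<unknown[]> {
  const entries: unknown[] = [];
  let url: string | undefined = firstPageUrl;
  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${token}`, Accept: "application/fhir+json" },
    });
    const bundle: Bundle = await res.json();
    entries.push(...(bundle.entry ?? []));
    url = bundle.link?.find((l) => l.relation === "next")?.url;
  }
  return entries;
}

// Example: request up to 100 results per page instead of the default 10.
// await fetchAllPages(`${FHIR_URL}/Patient?_count=100`, token);
```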
Now that you've learned about the basics of FHIR search, see the search samples >[!div class="nextstepaction"] >[FHIR search examples](search-samples.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md | -The FHIR® service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud. +The FHIR® service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud. The FHIR service offers: The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as th ### Securely manage health data in the cloud -The FHIR service in Azure Health Data Services makes FHIR data available to clients through a RESTful API. This API is an implementation of the HL7 FHIR API specification. As a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format. +The FHIR service in Azure Health Data Services makes FHIR data available to clients through a RESTful API. This API is an implementation of the HL7 FHIR API specification. As a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of PHI in the native FHIR format. ### Free up resources to innovate -Although you can build and maintain your own FHIR server, with the FHIR service in Azure Health Data Services Microsoft handles setting up server components, ensuring all compliance requirements are met so you can focus on building innovative solutions. +You can build and maintain your own FHIR server. However, with the FHIR service in Azure Health Data Services, Microsoft handles setting up server components and ensures all compliance requirements are met, so you can focus on building innovative solutions. ### Enable interoperability Because it belongs to the Azure family of services, the FHIR service protects yo ## Use cases for the FHIR service -FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are: +FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are as follows. - **Startup app development:** Customers developing a patient- or provider-centric app (mobile or web) can use the FHIR service as a fully managed backend for health data transactions. The FHIR service enables secure transfer of PHI. With SMART on FHIR, app developers can take advantage of the robust identity management in Microsoft Entra ID for authorization of FHIR RESTful API actions. 
-- **Healthcare ecosystems:** Although EHRs are the primary source of truth in many clinical settings, it's common for providers to have multiple databases that aren’t connected to each other (often because the data is stored in different formats). By using the FHIR service as a conversion layer between these systems, organizations can standardize data in the FHIR format. Ingesting and persisting in FHIR format enables health data querying and exchange across multiple disparate systems.+- **Healthcare ecosystems:** Although EHRs are the primary source of truth in many clinical settings, it's common for providers to have multiple databases that aren’t connected to each other (often because the data is stored in different formats). Using the FHIR service as a conversion layer between these systems, organizations can standardize data in the FHIR format. Ingesting and persisting in FHIR format enables health data querying and exchange across multiple disparate systems. -- **Research:** Health researchers use the FHIR standard because it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the data conversion and PHI deidentification capabilities in the FHIR service, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure Machine Learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.+- **Research:** Health researchers use the FHIR standard because it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the data conversion and PHI de-identification capabilities in the FHIR service, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure Machine Learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows. ## FHIR platforms from Microsoft FHIR capabilities from Microsoft are available in three configurations: -- The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.+- The **FHIR service** is a managed PaaS that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data, and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace. - **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure and is easily deployed in the Azure portal. Azure API for FHIR isn't part of Azure Health Data Services and lacks some of the features of the FHIR service. |
healthcare-apis | Patient Everything | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/patient-everything.md | -The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients' access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the Fast Healthcare Interoperability Resources (FHIR®) specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in Azure Health Data Services(hereby called FHIR service), Patient-everything is available to pull data related to a specific patient. +The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record, or for a provider or other user to perform a bulk data download related to a patient. According to the Fast Healthcare Interoperability Resources (FHIR®) specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in Azure Health Data Services, Patient-everything is available to pull data related to a specific patient. ## Use Patient-everything To call Patient-everything, use the following command: GET {FHIRURL}/Patient/{ID}/$everything > [!Note] > You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md). -FHIR service validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information: +FHIR service validates that it can find the patient matching the provided patient ID. If a result is found, the response is a bundle of type `searchset` with the following information: * [Patient resource](https://www.hl7.org/fhir/patient.html). * Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of [see also](https://www.hl7.org/fhir/codesystem-link-type.html#content) type, or if the `seealso` link references a `RelatedPerson`.-* If there are `seealso` link reference(s) to other patient(s), the results will include Patient-everything operation against the `seealso` patient(s) listed. +* If there are `seealso` link references to other patients, the results include the Patient-everything operation run against the `seealso` patients listed. * Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html). * [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource. > [!Note]-> If the patient has more than 100 devices linked to them, only 100 will be returned. +> Up to the first 100 devices linked to a patient will be returned. ## Patient-everything parameters-FHIR service supports the following query parameters. All of these parameters are optional: +FHIR service supports the following query parameters. All of these parameters are optional. 
|Query parameter | Description| |--|| | \_type | Allows you to specify which types of resources will be included in the response. For example, \_type=Encounter would return only `Encounter` resources associated with the patient. | | \_since | Will return only resources that have been modified since the time provided. | | start | Specifying the start date will pull in resources where their clinical date is after the specified start date. If no start date is provided, all records before the end date are in scope. |-| end | Specifying the end date will pull in resources where their clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. | +| end | Specifying the end date pulls in resources where their clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. | > [!Note] > This implementation of Patient-everything does not support the _count parameter. FHIR service supports the following query parameters. All of these parameters ar On a patient resource, there's an element called link, which links a patient to other patients or related persons. These linked patients help give a holistic view of the original patient. The link reference can be used when a patient is replacing another patient or when two patient resources have complementary information. One use case for links is when an ADT 38 or 39 HL7v2 message comes. It describes an update to a patient. This update can be stored as a reference between two patients in the link element. -The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but we've include a high-level summary: +The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but here we include a high-level summary: * [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) - The Patient resource replaces a different Patient. * [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) - Patient is valid, but it's not considered the main source of information. Points to another patient to retrieve additional information. The Patient-everything operation in the FHIR service processes patient links in Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient isn't returned in the bundle. -As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients. +As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. 
This means if a patient links to five other patients with a type `seealso` link, we run Patient-everything on each of those five patients. > [!Note] > This is set up to only follow `seealso` links one layer deep. It doesn't process a `seealso` link's `seealso` links. [![See also flow diagram.](media/patient-everything/see-also-flow.png)](media/patient-everything/see-also-flow.png#lightbox) -The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` will include by default an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`. +The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` by default includes an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`. -In addition, you can set the `Prefer` header to `handling=strict` to throw an error instead. In this case, a return of error code 301 `MovedPermanently` indicates that the current patient is out of date and returns the ID for the correct patient that's included in the link. The `ContentLocation` header of the returned error will point to the correct and up-to-date request. +In addition, you can set the `Prefer` header to `handling=strict` to throw an error instead. In this case, a return of error code 301 `MovedPermanently` indicates that the current patient is out of date and returns the ID for the correct patient that's included in the link. The `ContentLocation` header of the returned error points to the correct and up-to-date request. > [!Note] > If a `replaced-by` link is present, `Prefer: handling=lenient` and results are returned asynchronously in multiple bundles, only an operation outcome is returned in one bundle. In addition, you can set the `Prefer` header to `handling=strict` to throw an er The Patient-everything operation returns results in phases: -1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources ir references. +1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources it references. 1. Phases 2 and 3 both return resources in the patient compartment. If the `start` or `end` query parameters are specified, Phase 2 returns resources from the compartment that can be filtered by their clinical date, and Phase 3 returns resources from the compartment that can't be filtered by their clinical date. If neither of these parameters is specified, Phase 2 is skipped and Phase 3 returns all patient-compartment resources.-1. Phase 4 will return any devices that reference the patient. +1. Phase 4 returns any devices that reference the patient. -Each phase will return results in a bundle. If the results span multiple pages, the next link in the bundle will point to the next page of results for that phase. 
After all results from a phase are returned, the next link in the bundle will point to the call to initiate the next phase. +Each phase returns results in a bundle. If the results span multiple pages, the next link in the bundle will point to the next page of results for that phase. After all results from a phase are returned, the next link in the bundle will point to the call to initiate the next phase. If the original patient has any `seealso` links, phases 1 through 4 will be repeated for each of those patients. ## Examples of Patient-everything -Here are some examples of using the Patient-everything operation. In addition to these examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works. +Following are some examples of using the Patient-everything operation. In addition to these examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works. -To use Patient-everything to query a patient's "everything" between 2010 and 2020, use the following call: +To use Patient-everything to query a patient's "everything" between 2010 and 2020, use the following call. ```json GET {FHIRURL}/Patient/{ID}/$everything?start=2010&end=2020 ``` -To use Patient-everything to query a patient's Observation and Encounter, use the following call: +To use Patient-everything to query a patient's Observation and Encounter, use the following call. + ```json GET {FHIRURL}/Patient/{ID}/$everything?_type=Observation,Encounter ``` -To use Patient-everything to query a patient's "everything" since 2021-05-27T05:00:00Z, use the following call: +To use Patient-everything to query a patient's "everything" since 2021-05-27T05:00:00Z, use the following call. ```json GET {FHIRURL}/Patient/{ID}/$everything?_since=2021-05-27T05:00:00Z ``` -If a patient is found for each of these calls, you'll get back a 200 response with a `Bundle` of the corresponding resources. +If a patient is found for each of these calls, you'll get a 200 response with a `Bundle` of the corresponding resources. ## Next steps Now that you know how to use the Patient-everything operation, you can learn abo >[!div class="nextstepaction"] >[Overview of FHIR search](overview-of-search.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
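As a complement to the calls above, here's a minimal curl sketch that also sets the `Prefer` header discussed earlier. The FHIR endpoint and patient ID are hypothetical placeholders, and the token is fetched via the Azure CLI:

```bash
# A minimal sketch, assuming hypothetical FHIR_URL and PATIENT_ID values.
FHIR_URL="https://myworkspace-myfhir.fhir.azurehealthcareapis.com"
PATIENT_ID="example-patient-id"
TOKEN="$(az account get-access-token --resource "$FHIR_URL" --query accessToken --output tsv)"

# With handling=strict, a patient that carries a replaced-by link returns a
# 301 MovedPermanently error instead of a warning operation outcome.
curl -s \
     -H "Authorization: Bearer $TOKEN" \
     -H "Prefer: handling=strict" \
     "$FHIR_URL/Patient/$PATIENT_ID/\$everything?_since=2021-05-27T05:00:00Z"
```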
healthcare-apis | Purge History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/purge-history.md | -Since `$purge-history` is a resource level operation versus a type level or system level operation, you'll need to run the operation for every resource that you want remove the history from. +Since `$purge-history` is a resource level operation (versus a type level or system level operation), you need to run the operation for every resource from which you want to remove the history. ## Examples of purge history |
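A minimal sketch of such a call, assuming a hypothetical FHIR endpoint and resource ID, with a bearer token obtained through the Azure CLI:

```bash
# A minimal sketch, assuming hypothetical FHIR_URL and resource ID values.
FHIR_URL="https://myworkspace-myfhir.fhir.azurehealthcareapis.com"
TOKEN="$(az account get-access-token --resource "$FHIR_URL" --query accessToken --output tsv)"

# Remove the version history of a single Patient resource; the current version is kept.
curl -s -X DELETE \
     -H "Authorization: Bearer $TOKEN" \
     "$FHIR_URL/Patient/example-patient-id/\$purge-history"
```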
iot-hub | Tutorial Message Enrichments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md | The values for these variables should be for the same resources you used in the Create a second endpoint and route for the enriched messages. ++ # [Azure portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. |
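For reference, the message enrichment itself can also be added with the Azure CLI. This is a minimal sketch with hypothetical hub, endpoint, and key/value names, not the tutorial's own script:

```bash
# A minimal sketch, assuming a hub named ContosoTestHub and an already-created
# endpoint named ContosoStorageEndpoint; the key and value are placeholders.
az iot hub message-enrichment create \
    --name ContosoTestHub \
    --key myEnrichmentKey \
    --value myEnrichmentValue \
    --endpoints ContosoStorageEndpoint
```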
iot-hub | Tutorial Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md | There are no other prerequisites for the Azure portal. Register a new device in your IoT hub. + # [Azure portal](#tab/portal) 1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub. Now set up the routing for the storage account. In this section you define a new [!INCLUDE [iot-hub-include-blob-storage-format](../../includes/iot-hub-include-blob-storage-format.md)] ++ # [Azure portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. |
iot-hub | Tutorial Use Metrics And Diags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md | For this tutorial, we've provided a CLI script that performs the following steps 4. Register a device identity for the simulated device that sends messages to your IoT hub. Save the device connection string to use to configure the simulated device. + ### Set up resources using Azure CLI Copy and paste the following commands into Cloud Shell or a local command line instance that has the Azure CLI installed. Some of the commands may take some time to execute. The new resources are created in the resource group *ContosoResources*. |
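A minimal sketch of those scripted steps, assuming hypothetical hub and device names (the tutorial's own script remains the authoritative version):

```bash
# A minimal sketch, assuming hypothetical names; resources land in ContosoResources.
az group create --name ContosoResources --location westus2
az iot hub create --resource-group ContosoResources --name ContosoTestHub --sku S1

# Register a device identity for the simulated device.
az iot hub device-identity create --hub-name ContosoTestHub --device-id Contoso-Test-Device

# Save this connection string to configure the simulated device.
az iot hub device-identity connection-string show \
    --hub-name ContosoTestHub --device-id Contoso-Test-Device --output tsv
```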
load-balancer | Configure Vm Scale Set Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md | In this article, you'll learn how to configure a Virtual Machine Scale Set with ## Prerequisites -- An Azure subscription.-- An existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed.-- An Azure Virtual Network for the Virtual Machine Scale Set.+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- An existing [standard sku load balancer](quickstart-load-balancer-standard-internal-portal.md) in the subscription where the Virtual Machine Scale Set will be deployed. Ensure the load balancer has a backend pool. + ## Sign in to the Azure portal In this section, you'll create a Virtual Machine Scale Set in the Azure portal w | **Orchestration** | | | Orchestration mode | Select **Uniform** | | Security type | Select **Standard** |+ | **Scaling** | | + | Scaling mode | Select **Manual** | + | Instance count | Enter **2** | | **Instance details** | |- | Image | Select **Ubuntu Server 18.04 LTS** | + | Image | Select **Ubuntu Server 22.04 LTS** | | Azure Spot instance | Select **No** | | Size | Leave at default | | **Administrator account** | |- | Authentication type | Select **Password** | - | Username | Enter your admin username | - | Password | Enter your admin password | - | Confirm password | Reenter your admin password | + | Authentication type | Select **SSH public key** | + | Username | Enter a username for the SSH public key. | + | SSH public key source | Select **Generate new key pair**. | + | SSH key type | Select **RSA SSH Format**. | + | Key pair name | Enter a name for the key pair. | - :::image type="content" source="media/vm-scale-sets/create-virtual-machine-scale-set-thumb.png" alt-text="Screenshot of Create a Virtual Machine Scale Set page." lightbox="media/vm-scale-sets/create-virtual-machine-scale-set.png"::: +2. Select the **Networking** tab or select **Next: Spot > Next: Disks > Next: Networking**. -4. Select the **Networking** tab. --5. Enter or select this information in the **Networking** tab: +3. Enter or select this information in the **Networking** tab: | Setting | Value | |--|-| | **Virtual Network Configuration** | | | Virtual network | Select **myVNet** or your existing virtual network. |- | **Load balancing** | | - | Use a load balancer | Select **Yes** | - | **Load balancing settings** | | + | **Load balancing** | | | Load balancing options | Select **Azure load balancer** | | Select a load balancer | Select **myLoadBalancer** or your existing load balancer |- | Select a backend pool | Select **myBackendPool** or your existing backend pool. | -- :::image type="content" source="media/vm-scale-sets/create-virtual-machine-scale-set-network-thumb.png" alt-text="Screenshot shows the Create Virtual Machine Scale Set Networking tab." lightbox="media/vm-scale-sets/create-virtual-machine-scale-set-network.png"::: + | Select a backend pool | Select **myBackendPool** or your existing backend pool. | -6. Select the **Management** tab. +4. Select the **Management** tab. -7. In the **Management** tab, set **Boot diagnostics** to **Off**. +5. In the **Management** tab, set **Boot diagnostics** to **Off**. -8. Select the blue **Review + create** button. +6. Select the blue **Review + create** button. -9. Review the settings and select the **Create** button. +7. 
Review the settings and select the **Create** button. # [Azure CLI](#tab/cli) ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- You need an existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed.--- You need an Azure Virtual Network for the Virtual Machine Scale Set.+- You need an existing [standard sku load balancer](quickstart-load-balancer-standard-internal-cli.md) in the subscription where the Virtual Machine Scale Set will be deployed. Ensure the load balancer has a backend pool. +- You need an [Azure Virtual Network](../virtual-network/quick-create-cli.md) for the Virtual Machine Scale Set. [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] The following example deploys a Virtual Machine Scale Set with: az vmss create \ --resource-group myResourceGroup \ --name myVMSS \- --image Canonical:UbuntuServer:18.04-LTS:latest \ + --image Ubuntu2204 \ --admin-username adminuser \ --generate-ssh-keys \ --upgrade-policy-mode Automatic \ az vmss create \ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing resource group for all resources.-- An existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed.-- An Azure Virtual Network for the Virtual Machine Scale Set.++- An existing [standard sku load balancer](quickstart-load-balancer-standard-internal-powershell.md) in the subscription where the Virtual Machine Scale Set will be deployed. Ensure the load balancer has a backend pool. + [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] $lbn = <load-balancer-name> $pol = <upgrade-policy-mode> $bep = <backend-pool-name>+$img = <image-name> $lb = Get-AzLoadBalancer -ResourceGroupName $rsg -Name $lbn The following example deploys a Virtual Machine Scale Set with these values: ```azurepowershell-interactive $rsg = "myResourceGroup"-$loc = "East US 2" +$loc = "East US" $vms = "myVMSS"-$vnt = "myVnet" -$sub = "mySubnet" +$vnt = "myVNet" +$sub = "default" $pol = "Automatic" $lbn = "myLoadBalancer" $bep = "myBackendPool"+$img = "Ubuntu2204" $lb = Get-AzLoadBalancer -ResourceGroupName $rsg -Name $lbn -New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lb -UpgradePolicyMode $pol -BackendPoolName $bep +New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lb -UpgradePolicyMode $pol -BackendPoolName $bep -ImageName $img ``` > [!NOTE] |
load-balancer | Ipv6 Add To Existing Vnet Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-cli.md | Title: Add IPv6 to an IPv4 application in Azure virtual network - Azure CLI -description: This article shows how to deploy IPv6 addresses to an existing application in Azure virtual network using Azure CLI. +description: This article shows how to deploy IPv6 addresses to an existing application in an Azure virtual network for a Standard Load Balancer using Azure CLI. |
load-balancer | Ipv6 Add To Existing Vnet Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md | Title: Add an IPv4 application to IPv6 in Azure Virtual Network - PowerShell + Title: Add IPv6 to an IPv4 application in Azure Virtual Network - PowerShell description: This article shows how to deploy IPv6 addresses to an existing application in Azure virtual network using Azure PowerShell. |
managed-grafana | How To Connect To Data Source Privately | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md | While managed private endpoints are free, there may be charges associated with p > [!NOTE] > Managed private endpoints are currently only available in Azure Global. +> [!NOTE] +> If you're running a private data source in an AKS cluster, when the service's `externalTrafficPolicy` is set to `Local`, Azure Private Link Service needs to use a different subnet than the Pod's subnet. If the same subnet is required, the service should use the `Cluster` `externalTrafficPolicy`, as in the kubectl sketch after this entry. See [Cloud Provider Azure](https://cloud-provider-azure.sigs.k8s.io/topics/pls-integration/#restrictions). + ## Supported data sources Managed private endpoints work with Azure services that support private link. Using them, you can connect your Azure Managed Grafana workspace to the following Azure data stores over private connectivity: |
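A minimal kubectl sketch of that policy switch, assuming a hypothetical service name:

```bash
# A minimal sketch, assuming a hypothetical service named my-grafana-datasource.
# Switching externalTrafficPolicy to Cluster lets Private Link Service share
# the Pod's subnet, per the note above.
kubectl patch service my-grafana-datasource \
    --patch '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```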
network-watcher | Nsg Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-overview.md | Currently, these Azure services don't support NSG flow logs: > [!NOTE] > App services deployed under an Azure App Service plan don't support NSG flow logs. To learn more, see [How virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works). +### Incompatible virtual machines +NSG flow logs aren't supported on the following virtual machine sizes: +- D family v6 series +- E family v6 series +- F family v6 series ++We recommend that you use [virtual network flow logs](vnet-flow-logs-overview.md) for these virtual machine sizes. ++> [!NOTE] +> Virtual machines that run heavy networking traffic might encounter flow logging failures. We recommend that you migrate from NSG flow logs to [virtual network flow logs](vnet-flow-logs-overview.md) for these types of workloads. + ## Best practices - **Enable NSG flow logs on critical subnets**: Flow logs should be enabled on all critical subnets in your subscription as an auditing and security best practice. |
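A minimal CLI sketch of enabling an NSG flow log per that best practice, assuming hypothetical resource names and an existing NSG, storage account, and Network Watcher:

```bash
# A minimal sketch, assuming hypothetical resource names in an existing setup.
az network watcher flow-log create \
    --location eastus \
    --resource-group myResourceGroup \
    --name myNsgFlowLog \
    --nsg myNsg \
    --storage-account myStorageAccount \
    --enabled true
```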
network-watcher | Vpn Troubleshoot Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vpn-troubleshoot-overview.md | The following example shows the contents of the Scrubbed-wfpdiag.txt file. In th ``` ...-[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Deleted ICookie from the high priority thread pool list -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|IKE diagnostic event: -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Event Header: -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Timestamp: 1601-01-01T00:00:00.000Z -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Flags: 0x00000106 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local address field set -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote address field set -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP version field set -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP version: IPv4 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP protocol: 0 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local address: 13.78.238.92 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote address: 52.161.24.36 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local Port: 0 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote Port: 0 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Application ID: -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| User SID: <invalid> -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Failure type: IKE/Authip Main Mode Failure -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Type specific info: -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Failure error code:0x000035e9 -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IKE authentication credentials are unacceptable -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| -[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Failure point: Remote +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Deleted ICookie from the high priority thread pool list +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| IKE diagnostic event: +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Event Header: +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Timestamp: 1601-01-01T00:00:00.000Z +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Flags: 0x00000106 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Local address field set +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Remote address field set +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| IP version field set +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| IP version: IPv4 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| IP protocol: 0 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Local address: 203.0.113.92 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Remote address: 203.0.113.36 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Local Port: 0 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Remote Port: 0 
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Application ID: +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| User SID: <invalid> +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Failure type: IKE/Authip Main Mode Failure +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Type specific info: +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Failure error code:0x000035e9 +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| IKE authentication credentials are unacceptable +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| +[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|203.0.113.36| Failure point: Remote ... ``` |
operator-5g-core | Concept Observability Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/concept-observability-analytics.md | Title: Observability and analytics in Azure Operator 5G Core Preview description: Learn how metrics, tracing, and logs are used for observability and analytics in Azure Operator 5G Core Preview--++ Last updated 04/12/2024 |
operator-5g-core | Concept Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/concept-security.md | Title: Security in Azure Operator 5G Core Preview description: Review the security features embedded in Azure Operator 5G Core Preview.--++ Last updated 03/21/2024 |
operator-nexus | Quickstarts Kubernetes Cluster Deployment Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-powershell.md | Title: Create an Azure Nexus Kubernetes cluster by using Azure PowerShell description: Learn how to create an Azure Nexus Kubernetes cluster by using Azure PowerShell. --++ Last updated 09/26/2023 |
operator-nexus | Quickstarts Virtual Machine Deployment Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md | Title: Create an Azure Operator Nexus virtual machine by using Azure PowerShell description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads using PowerShell--++ Last updated 09/20/2023 |
reliability | Reliability Fabric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-fabric.md | Fabric makes commercially reasonable efforts to provide availability zone suppor | **Asia Pacific** | | | | | | | | Australia East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | Japan East | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | |-| Southeast Asia | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | +| Southeast Asia | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | | | :::image type="icon" source="media/yes-icon.svg" border="false"::: | | ### Zone down experience During a zone-wide outage, no action is required during zone recovery. Fabric capabilities in regions listed in [supported regions](#supported-regions) self-heal and rebalance automatically to take advantage of the healthy zone. Running Spark Jobs may fail if the master node is in the failed zone. In such a case, the jobs will need to be resubmitted. |
reliability | Reliability Microsoft Purview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-microsoft-purview.md | Microsoft Purview makes commercially reasonable efforts to provide availability |West US 3|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |North Europe|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |South Africa North|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|-|Sweden Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|Sweden Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: | |Switzerland North|||:::image type="icon" source="media/yes-icon.svg":::||| |USGov Virginia|:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| |South Central US|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |
sap | About Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/about-azure-monitor-sap-solutions.md | |
sap | Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/data-reference.md | Azure Monitor for SAP solutions doesn't support metrics. ## Azure Monitor logs tables -This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Monitor for SAP solutions and available for query by Log Analytics. Azure Monitor for SAP solutions uses custom logs. The schemas for some tables are defined by third-party providers, such as SAP. Here are the current custom logs for Azure Monitor for SAP solutions with links to sources for more information. +This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Monitor for SAP solutions and available for query by Log Analytics. Azure Monitor for SAP solutions uses custom logs. The schemas for some tables are defined by non-Microsoft providers, such as SAP. Here are the current custom logs for Azure Monitor for SAP solutions with links to sources for more information. ### SapHana_HostConfig_CL |
sap | Enable Sap Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-sap-insights.md | -The Insights capability in Azure Monitor for SAP Solutions helps you troubleshoot Availability and Performance issues on your SAP workloads. It helps you correlate key SAP components issues with SAP logs, Azure platform metrics and health events. +The Insights capability in Azure Monitor for SAP Solutions helps you troubleshoot Availability and Performance issues on your SAP workloads. It helps you correlate key SAP components issues with SAP logs, Azure platform metrics, and health events. In this how-to-guide, learn to enable Insights in Azure Monitor for SAP solutions. You can use SAP Insights with only the latest version of the service, *Azure Monitor for SAP solutions* and not *Azure Monitor for SAP solutions (classic)* > [!NOTE] cd <script_path> ```PowerShell $armId = "<AMS ARM ID>" ```-7. If the VMs belong to a different subscription than AMS, set the list of subscriptions in which VMs of the SAP system are present (use subscription IDs): +7. If the virtual machines (VMs) belong to a different subscription than AMS, set the list of subscriptions in which VMs of the SAP system are present (use subscription IDs): ```PowerShell $subscriptions = "<Subscription ID 1>","<Subscription ID 2>" ``` This capability helps you get an overview regarding availability of your SAP sys #### Steps to use availability insights 1. Open the AMS instance of your choice and visit the insights tab under Monitoring on the left navigation pane. :::image type="content" source="./media/enable-sap-insights/visit-insights-tab.png" alt-text="Screenshot that shows the landing page of Insights on AMS.":::-1. If you completed all [the steps mentioned](#steps-to-enable-insights-in-azure-monitor-for-sap-solutions), you should see the above screen asking for context to be set up. You can set the Time range, SID and the provider (optional, All selected by default). +1. If you completed all [the steps mentioned](#steps-to-enable-insights-in-azure-monitor-for-sap-solutions), you should see the screen shown in step 1 asking for context to be set up. You can set the Time range, SID, and the provider (optional, All selected by default). 1. On the top, you're able to see all the fired alerts related to SAP system and instance availability on this screen. :::image type="content" source="./media/enable-sap-insights/availability-overview.png" alt-text="Screenshot of the overview page of availability insights."::: 1. If you're able to see SAP system availability trend, categorized by VM - SAP process list. If you selected a fired alert in the previous step, you're able to see these trends in context with the fired alert. If not, these trends respect the time range you set on the main Time range filter. This capability helps you get an overview regarding availability of your SAP sys It has two categories of insights: * Azure platform: VM health events filtered by the time range set, either by the workbook filter or the selected alert. This pane also consists of VM availability metric trend for the chosen VM. 
:::image type="content" source="./media/enable-sap-insights/availability-vm-health.png" alt-text="Screenshot of the VM health events of availability insights.":::- * SAP Application: Process availability and contextual insights on the process like error messages (SM21), Lock entries (SM12) and Canceled jobs (SM37) which can help you find issues that might exist in parallel in the system, at the point in time. + * SAP Application: Process availability and contextual insights on the process like error messages (SM21), Lock entries (SM12), and Canceled jobs (SM37), which can help you find issues that might exist in parallel in the system at that point in time. ### Performance Insights This capability helps you get an overview regarding performance of your SAP system in one place. You can also correlate key SAP performance issues with related SAP application logs alongside Azure platform utilization metrics and SAP workload configuration drifts, easing the overall root-causing process. This capability helps you get an overview regarding performance of your SAP syst #### Scope of the preview We have insights only for a limited set of issues as part of the preview. We extend this capability to most of the issues supported by AMS alerts before this capability is Generally Available (GA). -* Availability insights let you detect and troubleshoot unavailability of Netweaver system, instance and HANA DB. +* Availability insights let you detect and troubleshoot unavailability of Netweaver system, instance, and HANA DB. * Performance insights are provided for NetWeaver metrics - High response time (ST03) and long-running batch jobs. ## Next steps |
sap | Enable Tls Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-tls-azure-monitor-sap-solutions.md | To enable secure communication in Azure Monitor for SAP solutions, you can choos We highly recommend that you use root certificates. For root certificates, Azure Monitor for SAP solutions supports only certificates from [certificate authorities (CAs) that participate in the Microsoft Trusted Root Program](/security/trusted-root/participants-list). -Certificates must be signed by a trusted root authority. Self-signed certificates are not supported. +Certificates must be signed by a trusted root authority. Self-signed certificates aren't supported. ## How does it work? |
sap | Get Alerts Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/get-alerts-portal.md | To access the new Alerts experience in Azure Monitor for SAP Solutions: 1. Navigate to the Azure portal. 1. Select your Azure Monitor for SAP Solutions instance. :::image type="content" source="./media/get-alerts-portal/new-alerts-view.png" alt-text="Screenshot showing central alerts view." lightbox="./media/get-alerts-portal/new-alerts-view.png":::-1. Click on the "Alerts" tab to explore the enhanced alert management capabilities. +1. Select the "Alerts" tab to explore the enhanced alert management capabilities. ## Next steps |
sap | Provider Ha Pacemaker Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ha-pacemaker-cluster.md | For SUSE-based Pacemaker clusters, Please follow below steps to install in each sudo systemctl enable prometheus-ha_cluster_exporter ``` -1. Data is then collected in the system by ha_cluster_exporter. You can export the data via URL `http://<ip address of the server>:9664/metrics`. -To check if the metrics are fetched via URL on the server where the ha_cluster_exporter is installed, Run below command on the server. +1. Data is collected in the system through the ha_cluster_exporter. You can export the data via URL `http://<ip address of the server>:9664/metrics`. +To check if the metrics are fetched via URL on the server where the ha_cluster_exporter is installed, run the following command on the server. ```bash curl http://localhost:9664/metrics For RHEL-based Pacemaker clusters, Please follow below steps to install in each sudo systemctl enable pmcd ``` -1. Install and enable the HA cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path of "hacluster" bits. usually hacluster will be in path "/var/lib/pcp/pmdas". -Example : cd /var/lib/pcp/pmdas/hacluster +1. Install and enable the HA cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path of "hacluster" bits. Usually hacluster is in path "/var/lib/pcp/pmdas". +Example: cd /var/lib/pcp/pmdas/hacluster ```bash cd $PCP_PMDAS_DIR/hacluster Example : cd /var/lib/pcp/pmdas/hacluster sudo systemctl enable pmproxy ``` -1. Data is then collected in the system by PCP. You can export the data by using `pmproxy` via URL `http://<ipaddress of the serrver>:44322/metrics?names=ha_cluster`. -To check if the metrics are fetched via URL on the server where the hacluster is installed, Run below command on the server. +1. Data is collected in the system by PCP. You can export the data by using `pmproxy` via URL `http://<ip address of the server>:44322/metrics?names=ha_cluster`. +To check if the metrics are fetched via URL on the server where the hacluster is installed, run the following command on the server. ```bash curl http://localhost:44322/metrics?names=ha_cluster To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to the Azure Monitor for SAP solutions service. 1. Open your Azure Monitor for SAP solutions resource.-1. On the resource's menu, under **Settings**, select **Providers**. +1. On the resource menu, under **Settings**, select **Providers**. 1. Select **Add** to add a new provider. ![Diagram that shows Azure Monitor for SAP solutions resource in the Azure portal, showing button to add a new provider.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-start.png) To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow ![Diagram that shows the setup for an Azure Monitor for SAP solutions resource, showing the fields for RHEL-based clusters.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-rhel.png) -1. 
Enter the SID - SAP system ID, Hostname - SAP hostname of the Virtual machine (Command `hostname -s` for SUSE and RHEL based servers will give hostname detail.), Cluster - Provide any custom name that is easy to identify the SAP system cluster - this Name will be visible in the workbook for metrics (need not have to be the cluster name configured on the server). +1. Enter the SID - SAP system ID, Hostname - SAP hostname of the virtual machine (the command `hostname -s` on SUSE and RHEL based servers provides the hostname detail), and Cluster - Provide any custom name that makes the SAP system cluster easy to identify; this name is visible in the workbook for metrics (it doesn't have to be the cluster name configured on the server). -1. Click on "Start test" under "Prerequisite check (Preview) - highly recommended" - This test will help validate the connectivity from AMS subnet to the SAP source system and list out if any error's found - which need to be addressed before provider creation otherwise the provider creation will fail with error. +1. Select "Start test" under "Prerequisite check (Preview) - highly recommended". This test helps validate the connectivity from the AMS subnet to the SAP source system and lists any errors found, which need to be addressed before provider creation; otherwise, the provider creation fails with an error. 1. Select **Create** to finish creating the Provider. -1. Create provider for each of the servers in the cluster to be able to see the metrics in the workbook -For example - If the Cluster has three servers configured, Create three providers for each of the three servers with all of the above steps followed. +1. Create a provider for each of the servers in the cluster to be able to see the metrics in the workbook. For example, if the cluster has three servers configured, create three providers, one for each of the three servers, following all of the above steps. ## Troubleshooting |
sap | Provider Hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-hana.md | |
sap | Provider Ibm Db2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ibm-db2.md | To create the IBM Db2 provider for Azure Monitor for SAP solutions: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to the Azure Monitor for SAP solutions service. 1. Open the Azure Monitor for SAP solutions resource you want to modify.-1. On the resource's menu, under **Settings**, select **Providers**. +1. On the resource menu, under **Settings**, select **Providers**. 1. Select **Add** to add a new provider. 1. For **Type**, select **IBM Db2**. 1. (Optional) Select **Enable secure communication** and choose a certificate type from the dropdown list. |
sap | Provider Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md | In this how-to guide, you learn how to create a Linux OS provider for Azure Moni - An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md). - Install the [node exporter latest version](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (VM). For more information, see the [node exporter GitHub repository](https://github.com/prometheus/node_exporter). - Node exporter uses the default port 9100 to expose the metrics. If you want to use a custom port, make sure to open the port in the firewall and use the same port while creating the provider.-- Default port 9100 or custom port that will be configured for node exporter should be open and listening on the Linux host.+- Default port 9100 or the custom port that is configured for node exporter should be open and listening on the Linux host. To install the node exporter on Linux: -Right click on the relevant node exporter version for linux from https://prometheus.io/download/#node_exporter and copy the link address which will be used in the below command. -For example - https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz +Right-click the relevant node exporter version for Linux from https://prometheus.io/download/#node_exporter and copy the link address to be used in the following command. +For example, https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz 1. Change to the directory where you want to install the node exporter. 1. Run `wget https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_exporter-<xxx>.linux-amd64.tar.gz`. Replace `xxx` with the version number. When the provider settings validation operation fails with the code `PrometheusU 1. Try to restart the node exporter agent: 1. Go to the folder where you installed the node exporter (the file name resembles `node_exporter-<xxxx>-amd64`). 1. Run `./node_exporter`.- 1. Run `nohup ./node_exporter &` command to enable node_exporter. Adding nohup and & to above command decouples the node_exporter from linux machine commandline. If not included node_exporter would stop when the commandline is closed. + 1. Run the `nohup ./node_exporter &` command to enable node_exporter. Adding nohup and & to the previous command decouples the node_exporter from the Linux machine's command line. If not included, the node_exporter stops when the command line is closed. 1. Verify that the Prometheus endpoint is reachable from the subnet that you provided when you created the Azure Monitor for SAP solutions resource. ## Suggestion |
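As a suggestion, the install and verification steps above can be condensed into a short shell session. This is a minimal sketch, assuming the v1.6.1 release named earlier and the default port 9100:

```bash
# A minimal sketch, assuming the v1.6.1 release; check the downloads page for current versions.
wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
tar -xzf node_exporter-1.6.1.linux-amd64.tar.gz
cd node_exporter-1.6.1.linux-amd64

# nohup and & decouple the process from the shell; --web.listen-address changes the port.
nohup ./node_exporter --web.listen-address=":9100" &

# Verify that the metrics endpoint responds locally.
curl http://localhost:9100/metrics | head
```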
sap | Provider Netweaver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md | -In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. +In this how-to guide, learn how to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. You can select between the two connection types when configuring the SAP NetWeaver provider to collect information from the SAP system. Metrics are collected by using - **SAP Control** - The SAP start service provides multiple services, including monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use **SAP Control**, which is a SOAP web service interface that exposes these capabilities. The **SAP Control** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use Azure Monitor for SAP solutions with NetWeaver. - **SAP RFC** - Azure Monitor for SAP solutions also provides the ability to collect additional information from the SAP system using standard SAP RFC. It's available only as part of Azure Monitor for SAP solutions. -You can collect the below metric using SAP NetWeaver Provider +You can collect the following metrics using the SAP NetWeaver provider: - SAP system and application server availability (for example, instance process availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAP Control) - Work process usage statistics and trends (SAP Control) You can collect the below metric using SAP NetWeaver Provider ## Configure NetWeaver for Azure Monitor for SAP solutions -To configure the NetWeaver provider for the current Azure Monitor for SAP solutions version, you'll need to: +To configure the NetWeaver provider for the current Azure Monitor for SAP solutions version, you need to do the following: 1. [Prerequisite - Unprotect methods for metrics](#prerequisite-unprotect-methods-for-metrics) 1. [Prerequisite to enable RFC metrics](#prerequisite-to-enable-rfc-metrics) to unprotect the web-methods in the SAP Windows virtual machine. ### Prerequisite to enable RFC metrics -RFC metrics are only supported for **AS ABAP applications** and do not apply to SAP JAVA systems. This step is **mandatory** when the connection type selected is **SOAP+RFC**. -Below steps need to be performed as a pre-requisite to enable RFC +RFC metrics are only supported for **AS ABAP applications** and don't apply to SAP JAVA systems. This step is **mandatory** when the connection type selected is **SOAP+RFC**. +The following steps need to be performed as prerequisites to enable RFC: 1. **Create or upload role** in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses the least privileged access. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/MicrosoftDocs/azure-docs-pr/files/12528831/Z_AMS_NETWEAVER_MONITORING.zip) Below steps need to be performed as a pre-requisite to enable RFC 4. **Enable SICF Services** to access the RFC via the SAP Internet Communication Framework (ICF) 1. Go to transaction code **SICF**. 1. Go to the service path `/default_host/sap/bc/soap/`. 1. Activate the services **wsdl**, **wsdl11**, and **RFC**. -It's also recommended to check that you enabled the ICF ports. 
+It's recommended to check that you enabled the ICF ports. -4. **SMON** - Enable **SMON** to monitor the system performance.Make sure the version of **ST-PI** is **SAPK-74005INSTPI**. - You'll see empty visualization as part of the workbook when it isn't configured. +4. **SMON** - Enable **SMON** to monitor the system performance. Make sure the version of **ST-PI** is **SAPK-74005INSTPI**. + You see empty visualization as part of the workbook when it isn't configured. 1. Enable the **SDF/SMON** snapshot service for your system. Turn on daily monitoring. For instructions, see [SAP Note 2651881](https://userapps.support.sap.com/sap/support/knowledge/2651881). 2. Configure **SDF/SMON** metrics to be aggregated every minute. 3. Recommended scheduling **SDF/SMON** as a background job in your target SAP client each minute.- 4. If you notice empty visualization as part of the workbook tab "System Performance - CPU and Memory (/SDF/SMON)", please apply the below SAP note: - 1. Release 740 SAPKB74006-SAPKB74025 - Release 755 Until SAPK-75502INSAPBASIS. For specific support package versions please refer to the SAP NOTE.- [SAP Note 2246160](https://launchpad.support.sap.com/#/notes/2246160). - 2. If the metric collection does not work with the above note then please try - [SAP Note 3268727](https://launchpad.support.sap.com/#/notes/3268727) + 4. If you notice empty visualization as part of the workbook tab "System Performance - CPU and Memory (/SDF/SMON)", apply the following SAP note: + 1. Release 740 SAPKB74006-SAPKB74025 - Release 755 Until SAPK-75502INSAPBASIS. For specific support package versions, refer to the SAP NOTE.- [SAP Note 2246160](https://launchpad.support.sap.com/#/notes/2246160). + 2. If the metric collection doesn't work with the previous note, try - [SAP Note 3268727](https://launchpad.support.sap.com/#/notes/3268727) 6. **To enable secure communication** - To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md) with SAP NetWeaver provider please execute steps mentioned on this [SAP document](https://help.sap.com/docs/ABAP_PLATFORM_NEW/e73bba71770e4c0ca5fb2a3c17e8e229/4923501ebf5a1902e10000000a42189c.html?version=201909.002) + To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md) with SAP NetWeaver provider, execute the steps mentioned in the [SAP document](https://help.sap.com/docs/ABAP_PLATFORM_NEW/e73bba71770e4c0ca5fb2a3c17e8e229/4923501ebf5a1902e10000000a42189c.html?version=201909.002) **Check if SAP systems are configured for secure communication using TLS 1.2 or higher** 1. Go to transaction RZ10.- 2. Open DEFAULT profile, select Extended Maintenance and click change. - 3. Below configuration is for TLS1.2 the bit mask will be 544: PFS. If TLS version is higher, then bit mask will be greater than 544. + 2. Open DEFAULT profile, select Extended Maintenance and select change. + 3. The following configuration is for TLS1.2, the bit mask will be 544: PFS. If the TLS version is higher, the bit mask will be greater than 544. ![tlsimage1](https://user-images.githubusercontent.com/74435183/219510020-0b26dacd-be34-4441-bf44-f3198338d416.png) Ensure all the prerequisites are successfully completed. To add the NetWeaver pr 3. For **System ID (SID)**, enter the three-character SAP system identifier. 4. For **Application Server**, enter the IP address or the fully qualified domain name (FQDN) of the SAP NetWeaver system to monitor. For example, `sapservername.contoso.com` where `sapservername` is the hostname and `contoso.com` is the domain. 
If you're using a hostname, make sure there's connectivity from the virtual network that you used to create the Azure Monitor for SAP solutions resource. 5. For **Instance number**, specify the instance number of SAP NetWeaver (00-99)- 6. For **Connection type** - select either [SOAP](#prerequisite-unprotect-methods-for-metrics) + [RFC](#prerequisite-to-enable-rfc-metrics) or [SOAP](#prerequisite-unprotect-methods-for-metrics) based on the metric collected (refer above section for details) + 6. For **Connection type** - select either [SOAP](#prerequisite-unprotect-methods-for-metrics) + [RFC](#prerequisite-to-enable-rfc-metrics) or [SOAP](#prerequisite-unprotect-methods-for-metrics) based on the metric collected (refer to the previous section for details) 7. For **SAP client ID**, provide the SAP client identifier. 8. For **SAP ICM HTTP Port**, enter the port that the ICM is using, for example, 80(NN) where (NN) is the instance number. 9. For **SAP username**, enter the name of the user that you created to connect to the SAP system. 10. For **SAP password**, enter the password for the user. 11. For **Host file entries**, provide the DNS mappings for all SAP VMs associated with the SID Enter **all SAP application servers and ASCS** host file entries in **Host file entries**. Enter host file mappings in comma-separated format. The expected format for each entry is IP address, FQDN, hostname. For example: **192.X.X.X sapservername.contoso.com sapservername,192.X.X.X sapservername2.contoso.com sapservername2**.- To determine all SAP hostnames associated with the SID, Sign in to the SAP system using the `sidadm` user. Then, run the following command (or) you can leverage the script below to generate the hostfile entries. + To determine all SAP hostnames associated with the SID, Sign in to the SAP system using the `sidadm` user. Then, run the following command (or) you can use the following script to generate the host file entries. Command to find a list of instances associated with a given SID Ensure all the prerequisites are successfully completed. To add the NetWeaver pr /usr/sap/hostctrl/exe/sapcontrol -nr <instancenumber> -function GetSystemInstanceList ``` - **Scripts to generate hostfile entries** + **Scripts to generate host file entries** - We highly recommend following the detailed instructions in the [link](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/tree/main/Provider_Pre_Requisites/SAP_NetWeaver_Pre_Requisites/GenerateHostfileMappings) for generating hostfile entries. These entries are crucial for the successful creation of the Netweaver provider for your SAP system. + We highly recommend following the detailed instructions in the [link](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/tree/main/Provider_Pre_Requisites/SAP_NetWeaver_Pre_Requisites/GenerateHostfileMappings) for generating host file entries. These entries are crucial for the successful creation of the Netweaver provider for your SAP system. ## Troubleshooting for SAP Netweaver Provider Ensure all the prerequisites are successfully completed. To add the NetWeaver pr 1. **Unable to reach the SAP hostname. ErrorCode: SOAPApiConnectionError** 1. Check the input hostname, instance number, and host file mappings for the hostname provided.- 2. Follow the instruction for determining the [hostfile entries](#adding-netweaver-provider) Host file entries section. - 3. Ensure the NSG/firewall is not blocking the port ΓÇô 5XX13 or 5XX14. (XX - SAP Instance Number) + 2. 
Follow the instructions in the Host file entries section for determining the [host file entries](#adding-netweaver-provider). + 3. Ensure the NSG/firewall isn't blocking port 5XX13 or 5XX14 (XX - SAP instance number). 4. Check if AMS and SAP VMs are in the same vNet or are attached using vNet peering. If not attached, see the following [link](/azure/virtual-network/tutorial-connect-virtual-networks-portal) to connect vNets: Ensure all the prerequisites are successfully completed. To add the NetWeaver pr 2. Batch job metrics - If you notice empty visualization as part of the workbook tab "Application Performance -Batch Jobs (SM37)", please apply the below SAP note + If you notice empty visualization as part of the workbook tab "Application Performance -Batch Jobs (SM37)", apply the following SAP note [SAP Note 2469926](https://launchpad.support.sap.com/#/notes/2469926) in your SAP System. - After you apply this OSS note you need to execute the RFC function module - BAPI_XMI_LOGON_WS with the following parameters: + Once you apply the OSS note, you need to execute the RFC function module - BAPI_XMI_LOGON_WS with the following parameters: This function module has the same parameters as BAPI_XMI_LOGON but stores them in the table BTCOPTIONS. Ensure all the prerequisites are successfully completed. To add the NetWeaver pr 4. SWNC metrics - To ensure a successful retrieval of the SWNC metrics, it is essential to confirm that both the SAP system and the operating system (OS) have synchronized times. + To ensure a successful retrieval of the SWNC metrics, you must confirm that the SAP system and the operating system (OS) have synchronized times. ## Next steps |
sap | Provider Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-sql-server.md | |
sap | Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/providers.md | You can configure one or more providers of the provider type SAP NetWeaver to en With the SAP NetWeaver provider, you can get the: -- SAP system and application server availability (for example, instance process availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAPOsControl).+- SAP system and application server availability, for example, instance process availability of Dispatcher, ICM, Gateway, Message Server, Enqueue Server, and IGS Watchdog (SAPOsControl). - Work process usage statistics and trends (SAPOsControl). - Enqueue lock statistics and trends (SAPOsControl). - Queue usage statistics and trends (SAPOsControl). For SOAP web methods: For SOAP+RFC: - FQDN of the SAP Web Dispatcher or the SAP application server. - SAP system ID, Instance no.- - SAP client ID, HTTP port, SAP username and password for login. + - SAP client ID, HTTP port, and SAP username and password for sign in. - Host file entries of all SAP application servers that get listed via the SAPcontrol `GetSystemInstanceList` web method. For more information, see [Configure SAP NetWeaver for Azure Monitor for SAP solutions](provider-netweaver.md). With the SAP HANA provider, you can see the: - SAP HANA system replication. - SAP HANA backup data. - Fetching services.-- Network throughput between the nodes in a scaleout system.+- Network throughput between the nodes in a scale-out system. - SAP HANA long-idling cursors. - SAP HANA long-running transactions. - Checks for configuration parameter values. Configuring SQL Server provider requires the: ## Provider type: High-availability cluster -You can configure one or more providers of the provider type *high-availability cluster* to enable data collection from the Pacemaker cluster within the SAP landscape. The high-availability cluster provider connects to Pacemaker by using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE**-based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL**-based clusters. Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the Log Analytics workspace in your subscription. The high-availability cluster provider collects data every 60 seconds from Pacemaker. +You can configure one or more providers of the provider type *high-availability cluster* to enable data collection from the Pacemaker cluster within the SAP landscape. The high-availability cluster provider connects to Pacemaker by using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE**-based clusters and by using [Performance Co-Pilot](https://access.redhat.com/articles/6139852) for **RHEL**-based clusters. Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the Log Analytics workspace in your subscription. The high-availability cluster provider collects data every 60 seconds from Pacemaker. With the high-availability cluster provider, you can get the: |
sap | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-portal.md | In this quickstart, you get started with Azure Monitor for SAP solutions by usin - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Set up a network](./set-up-network.md) before you create an Azure Monitor instance. - Create or choose a virtual network for Azure Monitor for SAP solutions that has access to the source SAP system's virtual network.-- Create a subnet with an address range of IPv4/25 or larger in the virtual network that's associated with Azure Monitor for SAP solutions, with subnet delegation assigned to **Microsoft.Web/serverFarms**.+- Create a subnet with an address range of IPv4/25 or larger in the virtual network associated with Azure Monitor for SAP solutions, with subnet delegation assigned to **Microsoft.Web/serverFarms**. > [!div class="mx-imgBorder"] > ![Screenshot that shows subnet creation for Azure Monitor for SAP solutions.](./media/quickstart-portal/subnet-creation.png) In this quickstart, you get started with Azure Monitor for SAP solutions by usin 2. For **Resource group**, create a new resource group or select an existing one under the subscription. 3. For **Resource name**, enter the name for the Azure Monitor for SAP solutions instance. 4. For **Workload region**, select the region where the monitoring resources are created. Make sure that it matches the region for your virtual network.- 5. **Service region** is where your proxy resource is created. The proxy resource manages monitoring resources deployed in the workload region. The service region is automatically selected based on your **Workload region** selection. + 5. **Service region** is where your proxy resource is created. The proxy resource manages all monitoring resources deployed in the workload region. The service region is automatically selected based on your **Workload region** selection. 6. For **Virtual network**, select a virtual network that has connectivity to your SAP systems for monitoring. 7. For **Subnet**, select a subnet that has connectivity to your SAP systems. You can use an existing subnet or create a new one. It must be an IPv4/25 block or larger.- 8. For **Log analytics**, you can use an existing Log Analytics workspace or create a new one. If you create a new workspace, it's created inside the managed resource group along with other monitoring resources. - 9. For **Managed resource group name**, enter a unique name. This name is used to create a resource group that will contain all the monitoring resources. You can't change this name after the resource is created. + 8. For **Log analytics**, you can use an existing Log Analytics workspace or create a new one. If you create a new workspace, it gets created inside the managed resource group along with other monitoring resources. + 9. For **Managed resource group name**, enter a unique name. This name is used to create a resource group that contains all the monitoring resources. You can't change this name after the resource is created. > [!div class="mx-imgBorder"] > ![Screenshot that shows basic details for an Azure Monitor for SAP solutions instance.](./media/quickstart-portal/azure-monitor-quickstart-2-new.png) In this quickstart, you get started with Azure Monitor for SAP solutions by usin 5. On the **Tags** tab, you can add tags to the monitoring resource. 
Make sure to add all the mandatory tags if you have a tag policy in place. -6. On the **Review + create** tab, review the details and select **Create**. +6. On the **Review + create** tab, review the details, and then select **Create**. ## Create a provider in Azure Monitor for SAP solutions |
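For the subnet prerequisite described in this entry, the following Azure CLI sketch creates a delegated subnet. Resource names and the address range are placeholders; the /25 (or larger) block and the **Microsoft.Web/serverFarms** delegation are the requirements stated in the article:

```azurecli
# Create a /25 subnet delegated to Microsoft.Web/serverFarms for
# Azure Monitor for SAP solutions. All names are placeholders.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name ams-subnet \
  --address-prefixes 10.1.0.0/25 \
  --delegations Microsoft.Web/serverFarms
```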
sap | Quickstart Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-powershell.md | In this quickstart, get started with Azure Monitor for SAP solutions by using th ``` - Create or choose a virtual network for Azure Monitor for SAP solutions that has access to the source SAP system's virtual network.-- Create a subnet with an address range of IPv4/25 or larger in the virtual network that's associated with Azure Monitor for SAP solutions, with subnet delegation assigned to **Microsoft.Web/serverFarms**.+- Create a subnet with an address range of IPv4/25 or larger in the virtual network associated with Azure Monitor for SAP solutions, with subnet delegation assigned to **Microsoft.Web/serverFarms**. > [!div class="mx-imgBorder"] > ![Screenshot that shows subnet creation for Azure Monitor for SAP solutions.](./media/quickstart-powershell/subnet-creation.png) To create an SAP NetWeaver provider, use the [New-AzWorkloadsProviderInstance](/ Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 ``` -In the following code, `hostname` is the host name or IP address for SAP Web Dispatcher or the application server. `SapHostFileEntry` is the IP address, fully qualified domain name, or host name of every instance that's listed in [GetSystemInstanceList](./provider-netweaver.md#adding-netweaver-provider) point 6 (xi). +In the following code, `hostname` is the host name or IP address for SAP Web Dispatcher or the application server. `SapHostFileEntry` is the IP address, fully qualified domain name, or host name of every instance listed in [GetSystemInstanceList](./provider-netweaver.md#adding-netweaver-provider) point 6 (xi). ```azurepowershell-interactive $subscription_id = '00000000-0000-0000-0000-000000000000' |
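Building on the `Set-AzContext` and variable pattern in this entry, here's a hedged sketch of creating a NetWeaver provider instance. The `New-AzWorkloadsProviderSapNetWeaverInstanceObject` helper and its parameter names (`SapHostname`, `SapSid`, `SapInstanceNr`, `SapHostFileEntry`) follow the quickstart's pattern but should be verified against the Az.Workloads cmdlet reference; all values are placeholders:

```azurepowershell-interactive
# Hedged sketch: create an SAP NetWeaver provider for an existing
# Azure Monitor for SAP solutions instance. All values are placeholders.
$subscription_id = '00000000-0000-0000-0000-000000000000'
$resource_group_name = 'myResourceGroup'
$ams_instance_name = 'myAmsInstance'

# Assumed parameter names; confirm against the cmdlet reference.
$provider_setting = New-AzWorkloadsProviderSapNetWeaverInstanceObject `
    -SapHostname 'sapapp01.contoso.com' `
    -SapSid 'X01' `
    -SapInstanceNr '00' `
    -SapHostFileEntry @('10.0.0.5 sapapp01 sapapp01.contoso.com')

New-AzWorkloadsProviderInstance `
    -ResourceGroupName $resource_group_name `
    -MonitorName $ams_instance_name `
    -Name 'netweaver-provider-1' `
    -SubscriptionId $subscription_id `
    -ProviderSetting $provider_setting
```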
sap | Security Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/security-baseline.md | -This security baseline applies guidance from the Microsoft cloud security benchmarks version 1.0. The Microsoft cloud security benchmark provides recommendations on how you can secure your cloud solutions on Azure. +This security baseline applies guidance from the Microsoft cloud security benchmark version 1.0. The Microsoft cloud security benchmark provides recommendations on how you can secure your cloud solutions on Azure. -You can monitor this security baseline and its recommendations using Microsoft Defender for Cloud. Azure Policy definitions is listed in the Regulatory Compliance section of the Microsoft Defender for Cloud dashboard. +You can monitor this security baseline and its recommendations using Microsoft Defender for Cloud. Azure Policy definitions are listed in the Regulatory Compliance section of the Microsoft Defender for Cloud dashboard. -When a feature has relevant Azure Policy Definitions, they are listed in this baseline to help you measure compliance with the Microsoft cloud security benchmark controls and recommendations. Some recommendations may require a paid Microsoft Defender plan to enable certain security scenarios. +When a feature has relevant Azure Policy Definitions, they're listed in this baseline to help you measure compliance with the Microsoft cloud security benchmark controls and recommendations. Some recommendations can require a paid Microsoft Defender plan to enable certain security scenarios. When Azure Monitor for SAP solutions is deployed, a managed resource group is deployed with it. -This managed resource group contains services such as Azure Log Analytics, Azure Functions, Azure Storage and Azure Key Vault. +This managed resource group contains services such as Azure Log Analytics, Azure Functions, Azure Storage, and Azure Key Vault. -## Security baseline for relevant servicea +## Security baseline for relevant services - [Azure Log Analytics](/security/benchmark/azure/baselines/azure-monitor-security-baseline) - [Azure Functions](/security/benchmark/azure/baselines/functions-security-baseline) |
sap | Set Up Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/set-up-network.md | You can use this option after you deploy an Azure Monitor for SAP solutions reso 1. Select the subnet's name to find the associated NSG. Note the NSG's information. 1. Set new NSG rules for outbound network traffic: 1. Go to the NSG resource in the Azure portal.- 1. On the NSG's menu, under **Settings**, select **Outbound security rules**. + 1. On the NSG menu, under **Settings**, select **Outbound security rules**. 1. Select **Add** to add the following new rules: | Priority | Name | Port | Protocol | Source | Destination | Action | |
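The outbound rules described in this entry can also be scripted. A hedged Azure CLI sketch of one such rule follows; the priority, port, and destination here are placeholders, so take the actual values from the rule table in the article:

```azurecli
# Add one outbound allow rule of the kind the article's table describes.
# Priority, port, and destination are placeholders from the rule table.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name Allow-AMS-Outbound-443 \
  --priority 450 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes AzureMonitor \
  --destination-port-ranges 443
```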
sap | Sap On Azure Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/sap-on-azure-overview.md | For more information, see the [SAP on Azure VM workloads](workloads/get-started. ### SAP Integration with Microsoft Services -In addition to the capabilities to run SAP IaaS and SaaS workloads on Azure, Microsoft offers a variety of capabilities, scenarios, best-practice guides, and tutorials to integrate SAP workloads running anywhere with other Microsoft products and services. Among them are popular services such as Microsoft Entra ID, Exchange Online, Power Platform and Power BI, Azure Integration Services, Excel, SAP Business Technology Platform, SAP Analytics Cloud, SAP Data Warehouse Cloud, and SAP Success Factors to name a few. +In addition to the capabilities to run SAP IaaS and SaaS workloads on Azure, Microsoft offers various capabilities, scenarios, best-practice guides, and tutorials to integrate SAP workloads running anywhere with other Microsoft products and services. Among them are popular services such as Microsoft Entra ID, Exchange Online, Power Platform and Power BI, Azure Integration Services, Excel, SAP Business Technology Platform, SAP Analytics Cloud, SAP Data Warehouse Cloud, and SAP SuccessFactors, to name a few. For more information, see the [SAP Integration with Microsoft Services](workloads/integration-get-started.md) documentation. For more information, see the [Azure Center for SAP solutions](center-sap-soluti ## SAP on Azure deployment automation framework -The SAP on Azure deployment automation framework is an open-source orchestration tool for deploying, installing and maintaining SAP environments. +The SAP on Azure deployment automation framework is an open-source orchestration tool for deploying, installing, and maintaining SAP environments. For more information, see the [SAP on Azure deployment automation framework](automation/deployment-framework.md) documentation. |
sap | High Availability Guide Suse Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md | You can configure the SBD device by using either of two options: - SBD with an Azure shared disk: - To configure an SBD device, you need to attach at least one [Azure shared disk](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/disks-shared.md) to all virtual machines that are part of Pacemaker cluster. The advantage of SBD device using an Azure shared disk is that you don't need to deploy additional virtual machines. + To configure an SBD device, you need to attach at least one Azure shared disk to all virtual machines that are part of the Pacemaker cluster. The advantage of an SBD device that uses an Azure shared disk is that you don't need to deploy additional virtual machines. ![Diagram of the Azure shared disk SBD device for SLES Pacemaker cluster.](./media/high-availability-guide-suse-pacemaker/azure-shared-disk-sbd-device.png) |
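As a rough illustration of the shared-disk option in this entry, the following Azure CLI sketch creates a disk that can be attached to multiple cluster VMs as an SBD device. The name, size, and SKU are placeholder assumptions; check the article for the sizes and SKUs that support shared disks, and set `--max-shares` to the number of Pacemaker nodes that attach the disk:

```azurecli
# Hedged sketch: create an Azure shared disk to use as an SBD device.
# Size and SKU are placeholders; --max-shares must cover all cluster nodes.
az disk create \
  --resource-group myResourceGroup \
  --name sbd-shared-disk \
  --size-gb 256 \
  --sku Premium_LRS \
  --max-shares 3
```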
sentinel | Add Advanced Conditions To Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-advanced-conditions-to-automation-rules.md | appliesto: - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a security operations center (SOC) analyst, I want to add advanced conditions to automation rules so that I can more effectively triage incidents and improve response efficiency. + # Add advanced conditions to Microsoft Sentinel automation rules |
sentinel | Automate Incident Handling With Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md | appliesto: - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a SOC analyst, I want to automate incident response tasks using automation rules so that I can streamline threat management and improve operational efficiency. + # Automate threat response in Microsoft Sentinel with automation rules |
sentinel | Authenticate Playbooks To Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/authenticate-playbooks-to-sentinel.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand my options when authenticating from playbooks to Microsoft Sentinel. ++#Customer intent: As a security analyst, I want to authenticate playbooks to Microsoft Sentinel so that I can automate and orchestrate security tasks efficiently. + # Authenticate playbooks to Microsoft Sentinel |
sentinel | Automate Responses With Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/automate-responses-with-playbooks.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand how Microsoft Sentinel playbooks can help make my SOC team more efficient. +#Customer intent: As a SOC analyst, I want to automate threat response using playbooks so that I can efficiently manage security alerts and incidents, reducing manual intervention and focusing on deeper investigations. + # Automate threat response with playbooks in Microsoft Sentinel |
sentinel | Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/automation.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand how automation in Microsoft Sentinel can help my SOC team be more efficient and remediate threats quicker. +#Customer intent: As a SOC analyst, I want to automate incident response and remediation tasks using SOAR capabilities so that I can focus on investigating advanced threats and reduce the risk of missed alerts. + # Automation in Microsoft Sentinel: Security orchestration, automation, and response (SOAR) |
sentinel | Create Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/create-playbooks.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customer-intent: As a SOC engineer, I want to understand how to create playbooks in Microsoft Sentinel so that my team can automate threat responses in our environment. +#Customer intent: As a security analyst, I want to manage automated response playbooks so that I can efficiently handle incidents and alerts in my environment. + # Create and manage Microsoft Sentinel playbooks |
sentinel | Create Tasks Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/create-tasks-playbook.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC analyst, I want to understand how to use playbooks to manage complex analysis processes in Microsoft Sentinel. ++#Customer intent: As a security analyst, I want to automate incident management tasks using playbooks so that I can streamline and manage complex workflows efficiently. + # Create and perform incident tasks in Microsoft Sentinel using playbooks |
sentinel | Define Playbook Access Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/define-playbook-access-restrictions.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer who's using Standard-plan playbooks, I want to understand how to define an access restriction policy, ensure that only Microsoft Sentinel has access to my Standard logic app with my playbook workflows. ++#Customer intent: As a security engineer using Standard-plan playbooks, I want to define an access restriction policy for playbooks so that I can ensure only authorized services can access sensitive workflows. + # Define an access restriction policy for Standard-plan playbooks |
sentinel | Logic Apps Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/logic-apps-playbooks.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customer intent: As a SOC engineer, I want to understand more about how Azure Logic Apps works with Microsoft Sentinel playbooks to help me automate threat prevention and response. ++#Customer intent: As a security engineer, I want to manage automated workflows using Azure Logic Apps for Microsoft Sentinel so that I can efficiently respond to security incidents and alerts. |
sentinel | Migrate Playbooks To Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/migrate-playbooks-to-automation-rules.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand how to migrate alert-trigger playbooks to automation rules, and why I might want to do so. ++#Customer intent: As a security engineer, I want to migrate my alert-trigger playbooks to automation rules so that I can streamline automation management and prepare for the deprecation of analytics rule triggers. + # Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules |
sentinel | Playbook Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/playbook-recommendations.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand the sorts of use cases where playbooks are recommended, as well as recommended templates, and samples. +++#Customer intent: As a SOC analyst, I want to use recommended, pre-configured playbooks for incident response and automation so that I can streamline threat detection, investigation, and remediation processes. + # Recommended playbook use cases, templates, and examples |
sentinel | Playbook Triggers Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/playbook-triggers-actions.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I want to understand the types of triggers and actions available for use in Microsoft Sentinel playbooks. +#Customer intent: As a security analyst, I want to understand the supported triggers and actions in Microsoft Sentinel playbooks so that I can automate incident response and threat management effectively. + # Supported triggers and actions in Microsoft Sentinel playbooks |
sentinel | Run Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/run-playbooks.md | -#customer-intent: As a SOC engineer, I want to understand how to automate and run playbooks in Microsoft Sentinel so that my team can remediate security threats in our environment more efficiently. appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a security analyst, I want to automate incident response using playbooks so that I can streamline and enhance the efficiency of threat management. + # Automate and run Microsoft Sentinel playbooks |
sentinel | Tutorial Respond Threats Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/tutorial-respond-threats-playbook.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a SOC engineer, I'd like to understand a sample scenario of how I might use a playbook and automation rule to help make my SOC team more efficient. +#Customer intent: As a security engineer, I want to use playbooks and automation rules to quickly and effectively stop potentially compromised users from moving around the network and stealing information. + # Use a Microsoft Sentinel playbook to stop potentially compromised users |
sentinel | Use Playbook Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/use-playbook-templates.md | appliesto: - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a security operations center (SOC) analyst, I want to create and customize automation workflows from playbook templates so that I can efficiently respond to security incidents and streamline threat management. + # Create and customize Microsoft Sentinel playbooks from templates |
sentinel | Deploy Power Platform Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/deploy-power-platform-solution.md | -#CustomerIntent: As a security engineer, I want to ingest Power Platform activity logs into Microsoft Sentinel for security monitoring, detect related threats, and respond to incidents. +++#Customer intent: As a security administrator, I want to deploy and configure the Microsoft Sentinel solution for Power Platform so that I can monitor and detect suspicious activities in my Power Platform environment. + # Deploy the Microsoft Sentinel solution for Microsoft Power Platform |
sentinel | Power Platform Solution Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/power-platform-solution-overview.md | +#Customer intent: As a security operations manager, I want to understand how I can use Microsoft Sentinel to monitor and detect suspicious activities in my Power Platform environment so that I can protect my organization from potential threats and data breaches. + # Microsoft Sentinel solution for Microsoft Power Platform overview |
sentinel | Power Platform Solution Security Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/power-platform-solution-security-content.md | +#Customer intent: As a security analyst, I want to understand Microsoft Sentinel's built-in analytics rules and parsers for Microsoft Power Platform so that I can detect and respond to potential security threats effectively. + # Microsoft Sentinel solution for Microsoft Power Platform: security content reference |
sentinel | Power Platform Solution Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/power-platform-solution-troubleshoot.md | -#CustomerIntent: As a security engineer, I want to learn how to troubleshoot common issues with the Power Platform inventory connector for Microsoft Sentinel. +++#Customer intent: As a security engineer, I want to troubleshoot data collection issues in my Power Platform inventory connector so that I can ensure accurate and timely data ingestion for effective threat detection and response. + # Troubleshoot the Microsoft Sentinel solution for Microsoft Power Platform |
sentinel | Create Manage Use Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a SOC analyst, I want to manage automation rules for incident and alert responses so that I can enhance the efficiency and effectiveness of my security operations center. + # Create and use Microsoft Sentinel automation rules to manage response |
sentinel | Create Tasks Automation Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-tasks-automation-rule.md | appliesto: - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a SOC manager, I want to manage incident tasks using automation rules so that I can standardize and streamline analyst workflows. + # Create incident tasks in Microsoft Sentinel using automation rules |
sentinel | Data Type Cloud Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-type-cloud-support.md | +#Customer intent: As a security analyst, I want to understand the data type support for Microsoft Sentinel across different cloud environments so that I can ensure comprehensive threat detection and response. + # Support for data types in Microsoft Sentinel across different clouds |
sentinel | Deploy Dynamics 365 Finance Operations Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md | +#Customer intent: As a security administrator, I want to deploy a monitoring solution for Dynamics 365 Finance and Operations so that I can detect and respond to threats and suspicious activities in real-time. + # Deploy Microsoft Sentinel solution for Dynamics 365 Finance and Operations |
sentinel | False Positives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/false-positives.md | +#Customer intent: As a security analyst, I want to handle false positives in my SIEM system so that I can reduce noise and focus on genuine threats. + # Handle false positives in Microsoft Sentinel |
sentinel | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md | +#Customer intent: As a security operations manager, I want to understand the Microsoft Sentinel's feature availability across different Azure environments so that I can effectively plan and manage our security operations. + # Microsoft Sentinel feature support for Azure commercial/other clouds |
sentinel | Geographical Availability Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geographical-availability-data-residency.md | -#Customer intent: As a security operator setting up Microsoft Sentinel, I want to understand where data is stored, so I can meet compliance guidelines. +++#Customer intent: As a compliance officer or a security operator setting up Microsoft Sentinel, I want to understand the geographical availability and data residency of Microsoft Sentinel so that I can ensure our data meets regional compliance requirements. + # Geographical availability and data residency in Microsoft Sentinel |
sentinel | Get Visibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/get-visibility.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a security analyst, I want to learn how to get an initial view into Microsoft Sentinel data generated for my environment. +++#Customer intent: As a security analyst, I want to visualize and monitor data on a unified dashboard so that I can efficiently track incidents, automation, data records, and analytics in my environment. + # Visualize collected data on the Overview page |
sentinel | Health Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-audit.md | +#Customer intent: As a security analyst, I want to monitor and audit Microsoft Sentinel's health and activity so that I can ensure the service is functioning correctly and detect any unauthorized actions. + # Auditing and health monitoring in Microsoft Sentinel |
sentinel | Monitor Automation Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-automation-health.md | +#Customer intent: As a security analyst, I want to monitor the health of my automation rules and playbooks so that I can ensure the proper functioning and performance of my security orchestration and response operations. + # Monitor the health of your automation rules and playbooks |
sentinel | Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resources.md | Title: Compare playbooks, workbooks, and notebooks | Microsoft Sentinel description: Learn about the differences between playbooks, workbooks, and notebooks in Microsoft Sentinel. Previously updated : 02/26/2024 Last updated : 09/11/2024 +++#Customer intent: As a SOC engineer or analyst, I want to understand the differences between playbooks, workbooks, and notebooks so that I can choose the appropriate tool for automation, visualization, and data analysis tasks. + -# Compare playbooks, workbooks, and notebooks +# Compare workbooks, playbooks, and notebooks ++Workbooks, playbooks, and notebooks are key resources in Microsoft Sentinel that help you visualize data, automate responses, and analyze data, respectively. Sometimes it can be challenging to know which type of resource is right for your task. -This article describes the differences between playbooks, workbooks, and notebooks in Microsoft Sentinel. +This article helps you differentiate between workbooks, playbooks, and notebooks in Microsoft Sentinel: ++- After you connect your data sources to Microsoft Sentinel, visualize and monitor the data using [workbooks in Microsoft Sentinel](monitor-your-data.md). Microsoft Sentinel workbooks are based on [Azure Monitor workbooks](/azure/azure-monitor/visualize/workbooks-overview), and add tables and charts with analytics for your logs and queries to the tools already available in Azure. +- [Jupyter notebooks in Microsoft Sentinel](notebooks.md) are a powerful tool for security investigations and hunting, providing full programmability with a huge collection of libraries for machine learning, visualization, and data analysis. While many common tasks can be carried out in the portal, Jupyter extends the scope of what you can do with this data. +- Use [Microsoft Sentinel playbooks](automate-responses-with-playbooks.md) to run preconfigured sets of remediation actions to help automate and orchestrate your threat response. 
## Compare by persona The following table compares Microsoft Sentinel playbooks, workbooks, and notebo |Resource |Personas | |||-|**Playbooks** | <ul><li>SOC engineers</li><li>Analysts of all tiers</li></ul> | -|**Workbooks** | <ul><li> SOC engineers</li><li>Analysts of all tiers</li></ul> | -|**Notebooks** | <ul><li>Threat hunters and Tier-2/Tier-3 analysts</li><li>Incident investigators</li><li>Data scientists</li><li>Security researchers</li></ul> | +|**[Workbooks](monitor-your-data.md)** | <ul><li> SOC engineers</li><li>Analysts of all tiers</li></ul> | +|**[Notebooks](notebooks.md)** | <ul><li>Threat hunters and Tier-2/Tier-3 analysts</li><li>Incident investigators</li><li>Data scientists</li><li>Security researchers</li></ul> | +|**[Playbooks](automate-responses-with-playbooks.md)** | <ul><li>SOC engineers</li><li>Analysts of all tiers</li></ul> | ## Compare by use The following table compares Microsoft Sentinel playbooks, workbooks, and notebo |Resource |Description | |||-|**Playbooks** | Automation of simple, repeatable tasks:<ul><li>Ingesting external data </li><li>Data enrichment with TI, GeoIP lookups, and more </li><li> Investigation </li><li>Remediation </li></ul> | -|**Workbooks** | <ul><li>Visualization</li></ul> | -|**Notebooks** | <ul><li>Querying Microsoft Sentinel data and external data </li><li>Data enrichment with TI, GeoIP lookups, and WhoIs lookups, and more </li><li> Investigation </li><li> Visualization </li><li> Hunting </li><li>Machine learning and big data analytics </li></ul> | +|**[Playbooks](automate-responses-with-playbooks.md)** | Automation of simple, repeatable tasks:<ul><li>Ingesting external data </li><li>Data enrichment with TI, GeoIP lookups, and more </li><li> Investigation </li><li>Remediation </li></ul> | +|**[Notebooks](notebooks.md)** | <ul><li>Querying Microsoft Sentinel data and external data </li><li>Data enrichment with TI, GeoIP lookups, and WhoIs lookups, and more </li><li> Investigation </li><li> Visualization </li><li> Hunting </li><li>Machine learning and big data analytics </li></ul> | +|**[Workbooks](monitor-your-data.md)** | <ul><li>Visualization</li></ul> | ## Compare by advantages and challenges The following table compares the advantages and disadvantages of playbooks, work |Resource |Advantages | Challenges | ||||-|**Playbooks** | <ul><li> Best for single, repeatable tasks </li><li>No coding knowledge required </li></ul> | <ul><li>Not suitable for ad-hoc and complex chains of tasks </li><li>Not ideal for documenting and sharing evidence</li></ul> | -|**Workbooks** | <ul><li>Best for a high-level view of Microsoft Sentinel data </li><li>No coding knowledge required</li></ul> | <ul><li>Can't integrate with external data </li></ul> | -|**Notebooks** | <ul><li>Best for complex chains of repeatable tasks </li><li>Ad-hoc, more procedural control</li><li>Easier to pivot with interactive functionality </li><li>Rich Python libraries for data manipulation and visualization </li><li>Machine learning and custom analysis </li><li>Easy to document and share analysis evidence </li></ul> | <ul><li> High learning curve and requires coding knowledge </li></ul> | --## Related content --For more information, see: --- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)-- [Visualize collected data with workbooks](get-visibility.md) -- [Use Jupyter notebooks to hunt for security threats](notebooks.md)+|**[Playbooks](automate-responses-with-playbooks.md)** | <ul><li> Best for single, repeatable tasks 
</li><li>No coding knowledge required </li></ul> | <ul><li>Not suitable for ad-hoc and complex chains of tasks </li><li>Not ideal for documenting and sharing evidence</li></ul> | +|**[Notebooks](notebooks.md)** | <ul><li>Best for complex chains of repeatable tasks </li><li>Ad-hoc, more procedural control</li><li>Easier to pivot with interactive functionality </li><li>Rich Python libraries for data manipulation and visualization </li><li>Machine learning and custom analysis </li><li>Easy to document and share analysis evidence </li></ul> | <ul><li> High learning curve and requires coding knowledge </li></ul> | +|**[Workbooks](monitor-your-data.md)** | <ul><li>Best for a high-level view of Microsoft Sentinel data </li><li>No coding knowledge required</li></ul> | <ul><li>Can't integrate with external data </li></ul> | |
sentinel | Collect Sap Hana Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md | +#Customer intent: As a security analyst, I want to collect and analyze SAP HANA audit logs to Microsoft Sentinel so that I can monitor and respond to security events effectively. + # Collect SAP HANA audit logs in Microsoft Sentinel |
sentinel | Configure Audit Log Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit-log-rules.md | -#Customer.intent: As a security operator, I want to monitor the SAP audit logs and easily manage the logs, so I can reduce noise without compromising security value. +++#Customer intent: As a security analyst, I want to configure SAP audit log monitoring rules so that I can detect and respond to security anomalies efficiently. + # Configure SAP audit log monitoring rules |
sentinel | Configure Snc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md | -# CustomerIntent: As an SAP admin and Microsoft Sentinel user, I want to know how to use SNC to deploy the Microsoft Sentinel for SAP data connector over a secure connection. +++#Customer intent: As a security engineer, I want to deploy a secure data connector for SAP logs using SNC so that I can ensure encrypted and authenticated data transmission between SAP systems and my monitoring solution. + # Deploy the Microsoft Sentinel for SAP data connector by using SNC |
sentinel | Cross Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/cross-workspace.md | -# customer intent: As a security admin or SAP admin, I want to know how to use the Microsoft Sentinel solution for SAP applications in multiple workspaces so that I can plan a deployment. +++#Customer intent: As a security operations center (SOC) manager, I want to use Microsoft Sentinel for SAP applications across multiple workspaces so that I can ensure compliance with data residency requirements and facilitate collaboration between SOC and SAP teams. + # Work with the Microsoft Sentinel solution for SAP applications in multiple workspaces |
sentinel | Deploy Data Connector Agent Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md | +#Customer intent: As an SAP BASIS team member, I want to deploy and configure a containerized SAP data connector agent so that I can ingest SAP data into Microsoft Sentinel for enhanced monitoring and threat detection. + # Deploy and configure the container hosting the SAP data connector agent |
sentinel | Deploy Sap Btp Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md | -# customer intent: As an SAP admin, I want to know how to deploy the Microsoft Sentinel solution for SAP BTP so that I can plan a deployment. +++#Customer intent: As a security administrator, I want to deploy a monitoring solution for SAP BTP so that I can detect and respond to threats and suspicious activities in my SAP environment. + # Deploy the Microsoft Sentinel solution for SAP BTP |
sentinel | Deploy Sap Security Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md | -# customer intent: As an SAP admin, I want to know how to deploy the Microsoft Sentinel solution for SAP applications from the content hub so that I can plan a deployment. +++#Customer intent: As a security administrator, I want to deploy and configure security monitoring for SAP applications using a cloud-based SIEM solution so that I can enhance the security posture and threat detection capabilities of my SAP environment. + # Deploy the Microsoft Sentinel solution for SAP applications from the content hub |
sentinel | Deployment Attack Disrupt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-attack-disrupt.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customerIntent: As a security engineer, I want to deploy automatic attack disruption for SAP with the unified security operations platform. +++#Customer intent: As a security engineer, I want to configure automatic attack disruption for SAP so that I can minimize the impact of sophisticated attacks and maintain control over investigation and remediation processes. + # Automatic attack disruption for SAP |
sentinel | Deployment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md | -# customer intent: As a business user or decision maker, I want to get an overview of how to deploy the Microsoft Sentinel solution for SAP applications so that I know the scope of the information I need and how to access it. +#Customer intent: As a security analyst, I want to deploy and configure a monitoring solution for SAP applications so that I can detect and respond to security threats within my SAP environment. + # Deploy Microsoft Sentinel solution for SAP applications |
sentinel | Deployment Solution Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md | +#Customer intent: As a security analyst, I want to configure and monitor SAP systems using a cloud-based SIEM solution so that I can detect and respond to suspicious activities and threats effectively. + # Configure Microsoft Sentinel solution for SAP® applications |
sentinel | Preparing Sap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md | +#Customer intent: As an SAP BASIS team member, I want to configure SAP authorizations and deploy optional SAP Change Requests so that I can ensure proper connectivity and log retrieval from SAP systems for security monitoring. + # Configure SAP authorizations and deploy optional SAP Change Requests |
sentinel | Prerequisites For Deploying Sap Continuous Threat Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md | +#Customer intent: As a security administrator, I want to understand the prerequisites for deploying a Microsoft Sentinel solution for SAP applications so that I can ensure a smooth and compliant deployment process. + # Prerequisites for deploying Microsoft Sentinel solution for SAP® applications |
sentinel | Reference Kickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md | +#Customer intent: As an SAP BASIS team member, I want to understand the options in the kickstart script used to deploy and configure a container hosting the SAP data connector, so that I can streamline the setup process and manage secrets storage and network configurations efficiently. + # Kickstart script reference |
sentinel | Reference Systemconfig Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig-json.md | +#Customer intent: As an SAP BASIS team member, I want to understand the configuration options in the systemconfig.json file so that I can properly set up and manage the data collector for SAP applications. + # Systemconfig.json file reference |
sentinel | Reference Systemconfig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md | +#Customer intent: As an SAP BASIS team member using the legacy systemconfig.ini file, I want to understand the configuration options so that I can properly configure the data collector for SAP applications. + # Systemconfig.ini file reference |
sentinel | Sap Audit Log Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-log-workbook.md | +#Customer intent: As a security analyst, I want to use the SAP Security Audit log and Initial Access workbook so that I can monitor and investigate user audit activity across SAP systems for enhanced security and quick detection of suspicious actions. + # Microsoft Sentinel solution for SAP® applications - SAP -Security Audit log and Initial Access workbook |
sentinel | Sap Btp Security Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-security-content.md | +#Customer intent: As a security analyst, I want to use the Microsoft Sentinel solution for SAP BTP so that I can monitor, detect, and respond to security threats within my SAP BTP environment. + # Microsoft Sentinel Solution for SAP BTP: security content reference |
sentinel | Sap Btp Solution Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-solution-overview.md | +#Customer intent: As a security analyst, I want to monitor and protect SAP BTP applications so that I can detect and respond to security threats and suspicious activities effectively. + # Microsoft Sentinel Solution for SAP BTP overview |
sentinel | Sap Deploy Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md | +#Customer intent: As an SAP BASIS team member, I want to troubleshoot issues with my Microsoft Sentinel for SAP applications data connector so that I can ensure accurate and timely data ingestion and monitoring. + # Troubleshooting your Microsoft Sentinel solution for SAP® applications deployment |
sentinel | Sap Solution Deploy Alternate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md | +#Customer intent: As an SAP BASIS team member, I want to deploy and configure a custom Microsoft Sentinel for SAP applications data connector so that I can securely integrate SAP logs into my cloud-based SIEM for enhanced monitoring and analysis. + # Expert configuration options, on-premises deployment, and SAPControl log sources |
sentinel | Sap Solution Log Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md | +#Customer intent: As a security analyst, I want to understand the functions, logs, and tables available in the Microsoft Sentinel solution for SAP applications so that I can effectively monitor and analyze SAP system security and performance. + # Microsoft Sentinel solution for SAP® applications data reference |
sentinel | Sap Solution Security Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md | +#Customer intent: As a security analyst, I want to use built-in workbooks and analytics rules for SAP applications so that I can monitor, detect, and respond to security incidents effectively. + # Microsoft Sentinel solution for SAP® applications: security content reference |
sentinel | Sap Suspicious Configuration Security Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-suspicious-configuration-security-parameters.md | +#Customer intent: As a security administrator, I want to monitor SAP security parameters so that I can detect and respond to suspicious configuration changes effectively. + # Monitored SAP security parameters for detecting suspicious configuration changes |
sentinel | Solution Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md | +#Customer intent: As a security operations team member, I want to monitor and protect SAP systems using a comprehensive solution so that I can detect, analyze, and respond to threats effectively across all layers of the SAP environment. + # Microsoft Sentinel solution for SAP® applications overview |
sentinel | Update Sap Data Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md | Last updated 03/27/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal+++#Customer intent: As a security operations engineer, I want to update the Microsoft Sentinel for SAP applications data connector agent so that I can ensure my SAP data integration is using the latest features and security updates. + # Update Microsoft Sentinel's SAP data connector agent |
sentinel | Soc Optimization Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-access.md | Last updated 07/01/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal - Microsoft Sentinel in the Azure portal-#customerIntent: As a SOC admin or SOC engineer, I want to learn about about how to optimize my security operations center with SOC optimization recommendations. +++#Customer intent: As a SOC analyst, I want to optimize security controls and data ingestion so that I can enhance threat detection and reduce costs without compromising coverage. + # Optimize your security operations |
sentinel | Soc Optimization Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-api.md | Last updated 06/09/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal - Microsoft Sentinel in the Azure portal-#customerIntent: As a SOC engineer, I want to learn about about how to interact with SOC optimziation recommendations programmatically via API. +++#Customer intent: As a security operations center (SOC) manager, I want to programmatically interact with SOC optimization recommendations so that I can automate evaluations, integrate with third-party tools, and manage multiple environments efficiently. + # Using SOC optimizations programmatically (Preview) |
sentinel | Soc Optimization Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-reference.md | Last updated 06/09/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal - Microsoft Sentinel in the Azure portal-#customerIntent: As a SOC admin or SOC engineer, I want to learn about the SOC optimization recommendations available to help me optimize my security operations. +++#Customer intent: As a SOC manager, I want to implement SOC optimization recommendations so that I can close coverage gaps and improve data usage efficiency without manual analysis. + # SOC optimization reference of recommendations |
sentinel | Tutorial Enrich Ip Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-enrich-ip-information.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a security analyst, I want to automate the process of checking IP address reputations in incidents so that I can quickly assess the severity of threats and save time. + # Tutorial: Automatically check and record IP address reputation information in incidents |
sentinel | Tutorial Extract Incident Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-extract-incident-entities.md | appliesto: - Microsoft Sentinel in the Microsoft Defender portal +++#Customer intent: As a security analyst, I want to extract and use non-native entity information from incidents so that I can enhance my investigative and remedial actions. + # Tutorial: Extract incident entities with non-native actions |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | Threat intelligence search and filtering capabilities have been enhanced, and th For more information, see the updated screenshot in [View and manage your threat indicators](understand-threat-intelligence.md#view-and-manage-your-threat-indicators). -## May 2024 --- [Optimize your security operations with SOC optimizations](#optimize-your-security-operations-with-soc-optimizations-preview)-- [Incident and entity triggers in playbooks are now Generally Available (GA)](#incident-and-entity-triggers-in-playbooks-are-now-generally-available-ga)--### Incident and entity triggers in playbooks are now Generally Available (GA) --The ability to use incident and entity triggers in playbooks is now supported as GA. ---For more information, see [Create a playbook](tutorial-respond-threats-playbook.md#create-a-playbook). --### Optimize your security operations with SOC optimizations (preview) --Microsoft Sentinel now provides SOC optimizations, which are high-fidelity and actionable recommendations that help you identify areas where you can reduce costs, without affecting SOC needs or coverage, or where you can add security controls and data where it's found to be missing. --Use SOC optimization recommendations to help you close coverage gaps against specific threats and tighten your ingestion rates against data that doesn't provide security value. SOC optimizations help you optimize your Microsoft Sentinel workspace, without having your SOC teams spend time on manual analysis and research. --If your workspace is onboarded to the unified security operations platform, SOC optimizations are also available in the Microsoft Defender portal. --For more information, see: --- [Optimize your security operations](soc-optimization/soc-optimization-access.md)-- [SOC optimization reference of recommendations](soc-optimization/soc-optimization-reference.md)--## April 2024 --- [Unified security operations platform in the Microsoft Defender portal (preview)](#unified-security-operations-platform-in-the-microsoft-defender-portal-preview)-- [Microsoft Sentinel now generally available (GA) in Azure China 21Vianet](#microsoft-sentinel-now-generally-available-ga-in-azure-china-21vianet)-- [Two anomaly detections discontinued](#two-anomaly-detections-discontinued)-- [Microsoft Sentinel now available in Italy North region](#microsoft-sentinel-is-now-available-in-italy-north-region)--### Unified security operations platform in the Microsoft Defender portal (preview) --The unified security operations platform in the Microsoft Defender portal is now available. This release brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot in Microsoft Defender. For more information, see the following resources: --- Blog announcement: [Unified security operations platform with Microsoft Sentinel and Microsoft Defender XDR](https://aka.ms/unified-soc-announcement)-- [Microsoft Sentinel in the Microsoft Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2263690)-- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard)-- [Microsoft Security Copilot in Microsoft Defender XDR](/microsoft-365/security/defender/security-copilot-in-microsoft-365-defender)--### Microsoft Sentinel now generally available (GA) in Azure China 21Vianet --Microsoft Sentinel is now generally available (GA) in Azure China 21Vianet. 
Individual features might still be in public preview, as listed on [Microsoft Sentinel feature support for Azure commercial/other clouds](feature-availability.md). --For more information, see also [Geographical availability and data residency in Microsoft Sentinel](geographical-availability-data-residency.md). --### Two anomaly detections discontinued --The following anomaly detections are discontinued as of March 26, 2024, due to low quality of results: -- Domain Reputation Palo Alto anomaly-- Multi-region logins in a single day via Palo Alto GlobalProtect--For the complete list of anomaly detections, see the [anomalies reference page](anomalies-reference.md). --### Microsoft Sentinel is now available in Italy North region --Microsoft Sentinel is now available in Italy North Azure region with the same feature set as all other Azure Commercial regions as listed on [Microsoft Sentinel feature support for Azure commercial/other clouds](feature-availability.md). --For more information, see also [Geographical availability and data residency in Microsoft Sentinel](geographical-availability-data-residency.md). --## March 2024 --- [SIEM migration experience now generally available (GA)](#siem-migration-experience-now-generally-available-ga)-- [Amazon Web Services S3 connector now generally available (GA)](#amazon-web-services-s3-connector-now-generally-available-ga)-- [Codeless Connector builder (preview)](#codeless-connector-builder-preview)-- [Data connectors for Syslog and CEF based on Azure Monitor Agent now generally available (GA)](#data-connectors-for-syslog-and-cef-based-on-azure-monitor-agent-now-generally-available-ga)--### SIEM migration experience now generally available (GA) --At the beginning of the month, we announced the SIEM migration preview. Now at the end of the month, it's already GA! The new Microsoft Sentinel Migration experience helps customers and partners automate the process of migrating their security monitoring use cases hosted in non-Microsoft products into Microsoft Sentinel. -- This first version of the tool supports migrations from Splunk--For more information, see [Migrate to Microsoft Sentinel with the SIEM migration experience](siem-migration.md) --Join our Security Community for a [webinar](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR_0A4IaJRDNBnp8pjCkWnwhUM1dFNFpVQlZJREdEQjkwQzRaV0RZRldEWC4u) showcasing the SIEM migration experience on May 2nd, 2024. --### Amazon Web Services S3 connector now generally available (GA) --Microsoft Sentinel has released the AWS S3 data connector to general availability (GA). You can use this connector to ingest logs from several AWS services to Microsoft Sentinel using an S3 bucket and AWS's simple message queuing service. --Concurrent with this release, this connector's configuration has changed slightly for Azure Commercial Cloud customers. User authentication to AWS is now done using an OpenID Connect (OIDC) web identity provider, instead of through the Microsoft Sentinel application ID in combination with the customer workspace ID. Existing customers can continue using their current configuration for the time being, and will be notified well in advance of the need to make any changes. 
--To learn more about the AWS S3 connector, see [Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md) --### Codeless connector builder (preview) --We now have a workbook to help navigate the complex JSON involved in deploying an ARM template for codeless connector platform (CCP) data connectors. Use the friendly interface of the **codeless connector builder** to simplify your development. --See our blog post for more details, [Create Codeless Connectors with the Codeless Connector Builder (Preview)](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/create-codeless-connectors-with-the-codeless-connector-builder/ba-p/4082050). --For more information on the CCP, see [Create a codeless connector for Microsoft Sentinel (Public preview)](create-codeless-connector.md). ---### Data connectors for Syslog and CEF based on Azure Monitor Agent now generally available (GA) --Microsoft Sentinel has released two more data connectors based on the Azure Monitor Agent (AMA) to general availability. You can now use these connectors to deploy Data Collection Rules (DCRs) to Azure Monitor Agent-installed machines to collect Syslog messages, including those in Common Event Format (CEF). --To learn more about the Syslog and CEF connectors, see [Ingest Syslog and CEF logs with the Azure Monitor Agent](connect-cef-syslog-ama.md). --## February 2024 --- [Install domain solutions with dependencies](#install-domain-solutions-with-dependencies)-- [Microsoft Sentinel solution for Microsoft Power Platform preview available](#microsoft-sentinel-solution-for-microsoft-power-platform-preview-available)-- [New Google Pub/Sub-based connector for ingesting Security Command Center findings (Preview)](#new-google-pubsub-based-connector-for-ingesting-security-command-center-findings-preview)-- [Incident tasks now generally available (GA)](#incident-tasks-now-generally-available-ga)-- [AWS and GCP data connectors now support Azure Government clouds](#aws-and-gcp-data-connectors-now-support-azure-government-clouds)-- [Windows DNS Events via AMA connector now generally available (GA)](#windows-dns-events-via-ama-connector-now-generally-available-ga)--### Install domain solutions with dependencies --Some Microsoft Sentinel content hub solutions, including many [domain solutions](sentinel-solutions-catalog.md#domain-solutions) and solutions that use the unified AMA connectors for [CEF, Syslog](cef-syslog-ama-overview.md), or [custom logs](connect-custom-logs-ama.md), don't necessarily include a data connector of their own. Instead, they rely on data connectors from other solutions to provide visibility in a specific area across data connectors. The data connectors they use are prerequisites for the domain solution to work properly. --When installing a domain solution, you can now select **Install with dependencies** to ensure that the data connectors required by the domain solution are also installed: ---For more information, see [Install with dependencies](sentinel-solutions-deploy.md#install-with-dependencies) and [Domain solutions](sentinel-solutions-catalog.md#domain-solutions). --### Microsoft Sentinel solution for Microsoft Power Platform preview available --The Microsoft Sentinel solution for Power Platform (preview) allows you to monitor and detect suspicious or malicious activities in your Power Platform environment. The solution collects activity logs from different Power Platform components and inventory data. 
It analyzes those activity logs to detect threats and suspicious activities such as the following: --- Power Apps execution from unauthorized geographies-- Suspicious data destruction by Power Apps-- Mass deletion of Power Apps-- Phishing attacks made possible through Power Apps-- Power Automate flows activity by departing employees-- Microsoft Power Platform connectors added to the environment-- Update or removal of Microsoft Power Platform data loss prevention policies--Find this solution in the Microsoft Sentinel content hub. --For more information, see: -- [Microsoft Sentinel solution for Microsoft Power Platform overview](business-applications/power-platform-solution-overview.md)-- [Microsoft Sentinel solution for Microsoft Power Platform: security content reference](business-applications/power-platform-solution-security-content.md)-- [Deploy the Microsoft Sentinel solution for Microsoft Power Platform](business-applications/deploy-power-platform-solution.md)--### New Google Pub/Sub-based connector for ingesting Security Command Center findings (Preview) --You can now ingest logs from Google Security Command Center, using the new Google Cloud Platform (GCP) Pub/Sub-based connector (now in PREVIEW). --The Google Cloud Platform (GCP) Security Command Center is a robust security and risk management platform for Google Cloud. It provides features such as asset inventory and discovery, detection of vulnerabilities and threats, and risk mitigation and remediation. These capabilities help you gain insights into and control over your organization's security posture and data attack surface, and enhance your ability to efficiently handle tasks related to findings and assets. --The integration with Microsoft Sentinel allows you to have visibility and control over your entire multicloud environment from a "single pane of glass." --- Learn how to [set up the new connector](connect-google-cloud-platform.md) and ingest events from Google Security Command Center.---### Incident tasks now generally available (GA) --Incident tasks, which help you standardize your incident investigation and response practices so you can more effectively manage incident workflow, are now generally available (GA) in Microsoft Sentinel. --- Learn more about incident tasks in the Microsoft Sentinel documentation:- - [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) - - [Work with incident tasks in Microsoft Sentinel](work-with-tasks.md) - - [Audit and track changes to incident tasks in Microsoft Sentinel](audit-track-tasks.md) --- See [this blog post by Benji Kovacevic](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/create-tasks-repository-in-microsoft-sentinel/ba-p/4038563) that shows how you can use incident tasks in combination with watchlists, automation rules, and playbooks to build a task management solution with two parts:- - A repository of incident tasks. - - A mechanism that automatically attaches tasks to newly created incidents, according to the incident title, and assigns them to the proper personnel. --### AWS and GCP data connectors now support Azure Government clouds --Microsoft Sentinel data connectors for Amazon Web Services (AWS) and Google Cloud Platform (GCP) now include supporting configurations to ingest data into workspaces in Azure Government clouds. --The configurations for these connectors for Azure Government customers differ slightly from the public cloud configuration. 
See the relevant documentation for details: --- [Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md)-- [Ingest Google Cloud Platform log data into Microsoft Sentinel](connect-google-cloud-platform.md)--### Windows DNS Events via AMA connector now generally available (GA) --Windows DNS events can now be ingested to Microsoft Sentinel using the Azure Monitor Agent with the now generally available data connector. This connector allows you to define Data Collection Rules (DCRs) and powerful, complex filters so that you ingest only the specific DNS records and fields you need. --- For more information, see [Stream and filter data from Windows DNS servers with the AMA connector](connect-dns-ama.md).--## January 2024 --[Reduce false positives for SAP systems with analytics rules](#reduce-false-positives-for-sap-systems-with-analytics-rules) --### Reduce false positives for SAP systems with analytics rules --Use analytics rules together with the [Microsoft Sentinel solution for SAP applications](sap/solution-overview.md) to lower the number of false positives triggered from your SAP systems. The Microsoft Sentinel solution for SAP applications now includes the following enhancements: --- The [**SAPUsersGetVIP**](sap/sap-solution-log-reference.md#sapusersgetvip) function now supports excluding users according to their SAP-given roles or profile.--- The **SAP_User_Config** watchlist now supports using wildcards in the **SAPUser** field to exclude all users with a specific syntax.--For more information, see [Microsoft Sentinel solution for SAP applications data reference](sap/sap-solution-log-reference.md) and [Handle false positives in Microsoft Sentinel](false-positives.md). - ## Next steps > [!div class="nextstepaction"] |
storage | Elastic San Expand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md | This article covers increasing or decreasing the size of an Elastic storage area ## Resize your SAN -To increase the size of your volumes, increase the size of your Elastic SAN first. To decrease the size of your SAN, make sure your volumes aren't using the extra size, or decrease the size of your volumes first. +To increase the size of your volumes, increase the size of your Elastic SAN first. To decrease the size of your SAN, make sure your volumes aren't using the extra size and then change the size of the SAN. # [PowerShell](#tab/azure-powershell) |
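As a minimal sketch of that order of operations (grow the SAN's base capacity before growing any volume), assuming the Az.ElasticSan module; the cmdlet and parameter names (`Update-AzElasticSan`, `Update-AzElasticSanVolume`, `-BaseSizeTib`, `-SizeGib`) are assumptions to verify against your installed module version:

```azurepowershell-interactive
# Sketch only: grow the Elastic SAN first, then the volume inside it.
# Cmdlet and parameter names are assumptions; verify against Az.ElasticSan.
$rg  = 'myResourceGroup'
$san = 'myElasticSan'

# Step 1: increase the SAN's provisioned base capacity (in TiB).
Update-AzElasticSan -ResourceGroupName $rg -Name $san -BaseSizeTib 2

# Step 2: increase an individual volume (in GiB) now that the SAN has room.
Update-AzElasticSanVolume -ResourceGroupName $rg -ElasticSanName $san `
    -VolumeGroupName 'myVolumeGroup' -Name 'myVolume' -SizeGib 200
```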
storage | Storage Files Configure S2s Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-s2s-vpn.md | To add a new or existing virtual network to your storage account, follow these s # [Azure PowerShell](#tab/azure-powershell) -1. Sign in to the Azure portal. +1. Sign in to Azure. ```azurepowershell-interactive Connect-AzAccount To add a new or existing virtual network to your storage account, follow these s # [Azure CLI](#tab/azure-cli) -1. Sign in to the Azure portal. +1. Sign in to Azure. ```azurecli-interactive az login |
synapse-analytics | Apache Spark Development Using Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md | description: In this article, you learn how to create and develop Synapse notebo -+ Previously updated : 05/08/2021 Last updated : 09/11/2024 This article describes how to use notebooks in Synapse Studio. ## Create a notebook -You can create a new notebook or import an existing notebook to a Synapse workspace from **Object Explorer**. Select **Develop**, right-click **Notebooks**, and then select **New notebook** or **Import**. Synapse notebooks recognize standard Jupyter Notebook IPYNB files. +You can create a new notebook or import an existing notebook to a Synapse workspace from **Object Explorer**. Select the **Develop** menu. Select the **+** button and select **Notebook** or right-click **Notebooks**, and then select **New notebook** or **Import**. Synapse notebooks recognize standard Jupyter Notebook IPYNB files. ![Screenshot of selections for creating or importing a notebook.](./media/apache-spark-development-using-notebooks/synapse-create-import-notebook-2.png) To move a cell, select the left side of the cell and drag the cell to the desire ### <a name = "move-a-cell"></a>Copy a cell -To copy a cell, create a new cell, select all the text in your original cell, copy the text, and paste the text into the new cell. When your cell is in edit mode, traditional keyboard shortcuts to select all text are limited to the cell. +To copy a cell, first create a new cell, then select all the text in your original cell, copy the text, and paste the text into the new cell. When your cell is in edit mode, traditional keyboard shortcuts to select all text are limited to the cell. >[!TIP] >Synapse notebooks also provide [snippets](#code-snippets) of commonly used code patterns. The `%run` magic command has these limitations: * The command supports nested calls but not recursive calls. * The command supports passing an absolute path or notebook name only as a parameter. It doesn't support relative paths. * The command currently supports only four parameter value types: `int`, `float`, `bool`, and `string`. It doesn't support variable replacement operations.-* The referenced notebooks must be published. You need to publish the notebooks to reference them, unless you select the [option to enable an unpublished notebook reference](#reference-unpublished-notebook). Synapse Studio does not recognize the unpublished notebooks from the Git repo. +* The referenced notebooks must be published. You need to publish the notebooks to reference them, unless you select the [option to enable an unpublished notebook reference](#reference-unpublished-notebook). Synapse Studio doesn't recognize the unpublished notebooks from the Git repo. * Referenced notebooks don't support statement depths larger than five. ### Use the variable explorer The number of tasks for each job or stage helps you identify the parallel level ### <a name = "spark-session-configuration"></a>Configure a Spark session -On the **Configure session** pane, you can specify the timeout duration, the number of executors, and the size of executors to give to the current Spark session. Restart the Spark session for configuration changes to take effect. All cached notebook variables are cleared. 
+On the **Configure session** pane, which you can find by selecting the gear icon at the top of the notebook, you can specify the timeout duration, the number of executors, and the size of executors to give to the current Spark session. Restart the Spark session for configuration changes to take effect. All cached notebook variables are cleared. You can also create a configuration from the Apache Spark configuration or select an existing configuration. For details, refer to [Manage Apache Spark configuration](../../synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md). |
synapse-analytics | Develop Openrowset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md | The `OPENROWSET` function can be referenced in the `FROM` clause of a query as i ## Data source -OPENROWSET function in Synapse SQL reads the content of the file(s) from a data source. The data source is an Azure storage account and it can be explicitly referenced in the `OPENROWSET` function or can be dynamically inferred from URL of the files that you want to read. +The `OPENROWSET` function in Synapse SQL reads the content of the files from a data source. The data source is an Azure storage account, and it can be explicitly referenced in the `OPENROWSET` function or dynamically inferred from the URL of the files that you want to read. The `OPENROWSET` function can optionally contain a `DATA_SOURCE` parameter to specify the data source that contains files. - `OPENROWSET` without `DATA_SOURCE` can be used to directly read the contents of the files from the URL location specified as `BULK` option: The `OPENROWSET` function can optionally contain a `DATA_SOURCE` parameter to sp FORMAT = 'PARQUET') AS [file] ``` -This is a quick and easy way to read the content of the files without pre-configuration. This option enables you to use the basic authentication option to access the storage (Microsoft Entra passthrough for Microsoft Entra logins and SAS token for SQL logins). +This is a quick and easy way to read the content of the files without preconfiguration. This option enables you to use the basic authentication option to access the storage (Microsoft Entra passthrough for Microsoft Entra logins and SAS token for SQL logins). - `OPENROWSET` with `DATA_SOURCE` can be used to access files on a specified storage account: This is a quick and easy way to read the content of the files without pre-config This option enables you to configure the location of the storage account in the data source and specify the authentication method that should be used to access storage. > [!IMPORTANT]- > `OPENROWSET` without `DATA_SOURCE` provides quick and easy way to access the storage files but offers limited authentication options. As an example, Microsoft Entra principals can access files only using their [Microsoft Entra identity](develop-storage-files-storage-access-control.md?tabs=user-identity) or publicly available files. If you need more powerful authentication options, use `DATA_SOURCE` option and define credential that you want to use to access storage. + > `OPENROWSET` without `DATA_SOURCE` provides a quick and easy way to access the storage files but offers limited authentication options. As an example, Microsoft Entra principals can access files only using their [Microsoft Entra identity](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) or publicly available files. If you need more powerful authentication options, use the `DATA_SOURCE` option and define the credential that you want to use to access storage. ## Security You have three choices for input files that contain the target data for querying - 'CSV' - Includes any delimited text file with row/column separators. Any character can be used as a field separator, such as TSV: FIELDTERMINATOR = tab. -- 'PARQUET' - Binary file in Parquet format +- 'PARQUET' - Binary file in Parquet format. -- 'DELTA' - A set of Parquet files organized in Delta Lake (preview) format+- 'DELTA' - A set of Parquet files organized in Delta Lake (preview) format. 
-Values with blank spaces are not valid, e.g. 'CSV ' is not a valid value. +Values with blank spaces aren't valid. For example, 'CSV ' isn't a valid value. **'unstructured_data_path'** -The unstructured_data_path that establishes a path to the data may be an absolute or relative path: +The unstructured_data_path that establishes a path to the data can be an absolute or relative path: - Absolute path in the format `\<prefix>://\<storage_account_path>/\<storage_path>` enables a user to directly read the files. - Relative path in the format `<storage_path>` that must be used with the `DATA_SOURCE` parameter and describes the file pattern within the <storage_account_path> location defined in `EXTERNAL DATA SOURCE`. Below you'll find the relevant \<storage account path> values that will link to '\<storage_path>' -Specifies a path within your storage that points to the folder or file you want to read. If the path points to a container or folder, all files will be read from that particular container or folder. Files in subfolders won't be included. +Specifies a path within your storage that points to the folder or file you want to read. If the path points to a container or folder, all files will be read from that particular container or folder. Files in subfolders won't be included. You can use wildcards to target multiple files or folders. Usage of multiple nonconsecutive wildcards is allowed. Below is an example that reads all *csv* files starting with *population* from all folders starting with */csv/population*: The WITH clause allows you to specify columns that you want to read from files. > Column names in Parquet and Delta Lake files are case sensitive. If you specify column name with casing different from column name casing in the files, the `NULL` values will be returned for that column. -column_name = Name for the output column. If provided, this name overrides the column name in the source file and column name provided in JSON path if there is one. If json_path is not provided, it will be automatically added as '$.column_name'. Check json_path argument for behavior. +column_name = Name for the output column. If provided, this name overrides the column name in the source file and the column name provided in JSON path, if there's one. If json_path isn't provided, it's automatically added as '$.column_name'. Check the json_path argument for behavior. column_type = Data type for the output column. The implicit data type conversion will take place here. Specifies the field terminator to be used. The default field terminator is a com ROWTERMINATOR = 'row_terminator'` -Specifies the row terminator to be used. If row terminator is not specified, one of default terminators will be used. Default terminators for PARSER_VERSION = '1.0' are \r\n, \n and \r. Default terminators for PARSER_VERSION = '2.0' are \r\n and \n. +Specifies the row terminator to be used. If a row terminator isn't specified, one of the default terminators is used. Default terminators for PARSER_VERSION = '1.0' are \r\n, \n and \r. Default terminators for PARSER_VERSION = '2.0' are \r\n and \n. > [!NOTE] > When you use PARSER_VERSION='1.0' and specify \n (newline) as the row terminator, it will be automatically prefixed with a \r (carriage return) character, which results in a row terminator of \r\n. Specifies parser version to be used when reading files. Currently supported CSV - PARSER_VERSION = '1.0' - PARSER_VERSION = '2.0' -CSV parser version 1.0 is default and feature rich. 
Version 2.0 is built for performance and does not support all options and encodings. +CSV parser version 1.0 is the default and is feature-rich. Version 2.0 is built for performance and doesn't support all options and encodings. CSV parser version 1.0 specifics: CSV parser version 2.0 specifics: - Maximum row size limit is 8 MB. - Following options aren't supported: DATA_COMPRESSION. - Quoted empty string ("") is interpreted as empty string.-- DATEFORMAT SET option is not honored.+- DATEFORMAT SET option isn't honored. - Supported format for DATE data type: YYYY-MM-DD - Supported format for TIME data type: HH:MM:SS[.fractional seconds] - Supported format for DATETIME2 data type: YYYY-MM-DD HH:MM:SS[.fractional seconds] Specifies the code page of the data in the data file. The default value is 65001 ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}' -This option will disable the file modification check during the query execution, and read the files that are updated while the query is running. This is useful option when you need to read append-only files that are appended while the query is running. In the appendable files, the existing content is not updated, and only new rows are added. Therefore, the probability of wrong results is minimized compared to the updateable files. This option might enable you to read the frequently appended files without handling the errors. See more information in [querying appendable CSV files](query-single-csv-file.md#querying-appendable-files) section. +This option disables the file modification check during the query execution, and reads the files that are updated while the query is running. This is a useful option when you need to read append-only files that are appended while the query is running. In the appendable files, the existing content isn't updated, and only new rows are added. Therefore, the probability of wrong results is minimized compared to the updateable files. This option might enable you to read the frequently appended files without handling the errors. For more information, see the [querying appendable CSV files](query-single-csv-file.md#querying-appendable-files) section. Reject Options Parquet files contain column metadata, which will be read, type mappings can be For the CSV files, column names can be read from header row. You can specify whether header row exists using HEADER_ROW argument. If HEADER_ROW = FALSE, generic column names will be used: C1, C2, ... Cn where n is number of columns in file. Data types will be inferred from first 100 data rows. Check [reading CSV files without specifying schema](#read-csv-files-without-specifying-schema) for samples. -Have in mind that if you are reading number of files at once, the schema will be inferred from the first file service gets from the storage. This can mean that some of the columns expected are omitted, all because the file used by the service to define the schema did not contain these columns. In that case, please use OPENROWSET WITH clause. +Keep in mind that if you're reading a number of files at once, the schema is inferred from the first file that the service gets from the storage. This can mean that some of the expected columns are omitted, because the file that the service used to define the schema didn't contain these columns. In that case, use the OPENROWSET WITH clause. > [!IMPORTANT]-> There are cases when appropriate data type cannot be inferred due to lack of information and larger data type will be used instead. 
This brings performance overhead and is particularly important for character columns which will be inferred as varchar(8000). For optimal performance, please [check inferred data types](./best-practices-serverless-sql-pool.md#check-inferred-data-types) and [use appropriate data types](./best-practices-serverless-sql-pool.md#use-appropriate-data-types). +> There are cases when the appropriate data type can't be inferred due to lack of information, and a larger data type is used instead. This brings performance overhead, and is particularly important for character columns, which are inferred as varchar(8000). For optimal performance, [check inferred data types](./best-practices-serverless-sql-pool.md#check-inferred-data-types) and [use appropriate data types](./best-practices-serverless-sql-pool.md#use-appropriate-data-types). ### Type mapping for Parquet |
update-manager | Migration Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-troubleshoot.md | Your organization requires to use `Connect-AzAccount` with `DeviceCode` paramet ### Resolution -- Modify this [line](https://github.com/azureautomation/Preqrequisite-for-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/1750c1758cf9be93153a24b6eb9bfccc174ce66b/MigrationPrerequisites.ps1#L1224) in the Prerequisite script where it has the Connect-AzAccount Command to use the - [UseDeviceAuthentication](https://review.learn.microsoft.com/powershell/module/az.accounts/connect-azaccount?view=azps-12.2.0&branch=main#-usedeviceauthentication) parameter.+- Modify this [line](https://github.com/azureautomation/Preqrequisite-for-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/1750c1758cf9be93153a24b6eb9bfccc174ce66b/MigrationPrerequisites.ps1#L1224) in the Prerequisite script where it has the `Connect-AzAccount` command to use the [UseDeviceAuthentication](/powershell/module/az.accounts/connect-azaccount#-usedeviceauthentication) parameter, as shown in the example below. ## Encountering Get-AzOperationInsightsWorkspace exception message Delete custom Az Modules and ensure that default Az Module is updated to 8.0.0 f - [Migration using Azure portal](migration-using-portal.md) - [Migration using runbook scripts](migration-using-runbook-scripts.md) - [Manual migration guidance](migration-manual.md)-- [Key points during migration](migration-key-points.md)+- [Key points during migration](migration-key-points.md) |
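For reference, the device-code change described in the resolution above is a one-line swap; `-UseDeviceAuthentication` is a standard `Connect-AzAccount` parameter:

```azurepowershell-interactive
# Sign in with a device code instead of the default interactive browser prompt.
Connect-AzAccount -UseDeviceAuthentication
```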
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | Following is the list of supported images and no other marketplace images releas | |cis-oracle-linux-8-l1 | cis-oracle8-l1|| | |cis-rhel | cis-redhat7-l1-gen1 </br> cis-redhat8-l1-gen1 </br> cis-redhat8-l2-gen1 </br> cis-redhat9-l1-gen1 </br> cis-redhat9-l1-gen2| | | |cis-rhel-7-l2 | cis-rhel7-l2 | |-| |cis-rhel-8-l1 | | | | +| |cis-rhel-8-l1 | | | | |cis-rhel-8-l2 | cis-rhel8-l2 | | | |cis-rhel9-l1 | cis-rhel9-l1 </br> cis-rhel9-l1-gen2 || | |cis-ubuntu | cis-ubuntu1804-l1 </br> cis-ubuntulinux2004-l1-gen1 </br> cis-ubuntulinux2204-l1-gen1 </br> cis-ubuntulinux2204-l1-gen2 || Following is the list of supported images and no other marketplace images releas |microsoft-dsvm |aml-workstation | ubuntu | |microsoftcblmariner |cbl-mariner | cbl-mariner-1 </br> 1-gen2 </br> cbl-mariner-2 </br> cbl-mariner-2-gen2. | | |microsoftcblmariner|cbl-mariner | cbl-mariner-1,1-gen2, cbl-mariner-2, cbl-mariner-2-gen2 |-|microsoftsqlserver | * | * | |**Offers**: sql2019-sles* </br> sql2019-rhel7 </br> sql2017-rhel 7 </br></br> Example </br> Publisher: </br> microsoftsqlserver </br> Offer: sql2019-sles12sp5 </br> sku:webARM </br></br> Publisher: microsoftsqlserver </br> Offer: sql2019-rhel7 </br> sku: web-ARM | -|microsoftsqlserver | * | *||**Offers**: sql2019-sles*</br> sql2019-rhel7 </br> sql2017-rhel7 | +|microsoftsqlserver | * | * |**Offers**: sql2019-sles* </br> sql2019-rhel7 </br> sql2017-rhel 7 </br></br> Example </br> Publisher: </br> microsoftsqlserver </br> Offer: sql2019-sles12sp5 </br> sku:webARM </br></br> Publisher: microsoftsqlserver </br> Offer: sql2019-rhel7 </br> sku: web-ARM | +|microsoftsqlserver | * | *|**Offers**: sql2019-sles*</br> sql2019-rhel7 </br> sql2017-rhel7 | |nginxinc|nginx-plus-ent-v1 | nginx-plus-ent-centos7 | |ntegralinc1586961136942|ntg_oracle_8_7| ntg_oracle_8_7| |openlogic | centos | 7.2, 7.3, 7.4, 7.5, 7.6, 7_8, 7_9, 7_9-gen2 | Following is the list of supported images and no other marketplace images releas |redhat | rhel-sap | 7.7 | |redHat |rhel | 8_9|| |redhat |rhel-byos | rhel-lvm79 </br> rhel-lvm79-gen2 </br> rhel-lvm8 </br> rhel-lvm82-gen2 </br> rhel-lvm83 </br> rhel-lvm84 </br> rhel-lvm84-gen2 </br> rhel-lvm85-gen2 </br> rhel-lvm86 </br> rhel-lvm86-gen2 </br> rhel-lvm87-gen2 </br> rhel-raw76 </br> |-|redhat |rhel-byos |rhel-lvm88 </br> rhel-lvm88-gent2 </br> rhel-lvm92 </br>rhel-lvm92-gen2 || | +|redhat |rhel-byos |rhel-lvm88 </br> rhel-lvm88-gent2 </br> rhel-lvm92 </br>rhel-lvm92-gen2 || |redhat |rhel-ha | 8* | 81_gen2 | |redhat |rhel-raw | 7*,8*,9* | | |redhat |rhel-sap | 7*| | |
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 09/06/2024 Last updated : 09/11/2024 To find more information, review the logs in the file path provided in the error Set a longer time range for maximum duration when you're triggering an [on-demand update deployment](deploy-updates.md) to help avoid the problem. ++### Windows/Linux OS update extension isn't installed ++#### Issue ++The Windows/Linux OS Update extension must be successfully installed on Arc machines to perform on-demand assessments, patching, and scheduled patching. ++#### Resolution ++Trigger an on-demand assessment or patching to install the extension on the machine. You can also attach the machine to a maintenance configuration schedule, which installs the extension when patching runs according to the schedule. ++If the extension is already present on the machine but the extension status isn't **Succeeded**, ensure that you [remove the extension](../azure-arc/servers/manage-vm-extensions-portal.md#remove-extensions) and trigger an on-demand operation so that it's installed again. ++### Windows/Linux patch update extension isn't installed ++#### Issue +The Windows/Linux patch update extension must be successfully installed on Azure machines to perform on-demand assessment or patching, scheduled patching, and periodic assessments. ++#### Resolution +Trigger an on-demand assessment or patching to install the extension on the machine. You can also attach the machine to a maintenance configuration schedule, which installs the extension when patching runs according to the schedule. ++If the extension is already present on the machine but the extension status isn't **Succeeded**, ensure that you [remove the extension](../azure-arc/servers/manage-vm-extensions-portal.md#remove-extensions) and trigger an on-demand operation, which installs it again. +++### Allow Extension Operations check failed ++#### Issue ++The property [AllowExtensionOperations](https://learn.microsoft.com/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet-legacy) is set to false in the machine OSProfile. ++#### Resolution +Set the property to true to allow extensions to work properly. ++### Sudo privileges not present ++#### Issue ++Sudo privileges aren't granted to the extensions for assessment or patching operations on Linux machines. ++#### Resolution +Grant sudo privileges to ensure assessment or patching operations succeed. ++### Proxy is configured ++#### Issue ++A proxy configured on Windows or Linux machines might block access to the endpoints required for assessment or patching operations to succeed. ++#### Resolution ++For Windows, see [issues related to proxy](https://learn.microsoft.com/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). ++For Linux, ensure the proxy setup doesn't block access to repositories that are required for downloading and installing updates. ++### TLS 1.2 check failed ++#### Issue ++TLS 1.0 and TLS 1.1 are deprecated. ++#### Resolution ++Use TLS 1.2 or higher. 
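As a minimal sketch for Windows machines that fail this check, TLS 1.2 client support can be enabled through the well-known Schannel registry keys (a reboot is required afterward; verify the applicable keys for your OS version before applying):

```azurepowershell-interactive
# Enable the TLS 1.2 client protocol via Schannel (sketch; reboot required).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
```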
+ +For Windows, see [Protocols in TLS/SSL Schannel SSP](https://learn.microsoft.com/windows/win32/secauthn/protocols-in-tls-ssl--schannel-ssp-). ++For Linux, run the following command to see the supported versions of TLS for your distro. +`nmap --script ssl-enum-ciphers -p 443 www.azure.com` ++### HTTPS connection check failed ++#### Issue ++An HTTPS connection isn't available. HTTPS is required to download and install updates from the required endpoints for each operating system. ++#### Resolution ++Allow HTTPS connections from your machine (a quick connectivity probe is sketched after this entry). ++### MsftLinuxPatchAutoAssess service isn't running, or timer isn't active ++#### Issue ++[MsftLinuxPatchAutoAssess](https://github.com/Azure/LinuxPatchExtension) is required for successful periodic assessments on Linux machines. ++#### Resolution ++Ensure that the LinuxPatchExtension status is **Succeeded** for the machine. Reboot the machine to check whether the issue is resolved. ++### Linux repositories aren't accessible ++#### Issue ++The updates are downloaded from configured public or private repositories for each Linux distro. The machine is unable to connect to these repositories to download or assess the updates. ++#### Resolution ++Ensure that network security rules don't block connections to the required repositories for update operations. + ## Next steps * To learn more about Update Manager, see the [Overview](overview.md). |
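Here's the quick connectivity probe mentioned in the HTTPS resolution above; the endpoint shown is a placeholder, so substitute the update endpoints or repositories your configuration actually requires:

```azurepowershell-interactive
# Probe outbound HTTPS (TCP 443); TcpTestSucceeded should report True.
# 'example.update.endpoint' is a placeholder, not a real service endpoint.
Test-NetConnection -ComputerName 'example.update.endpoint' -Port 443
```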
virtual-desktop | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-recommendations.md | Session hosts are virtual machines that run inside an Azure subscription and vir ### Enable endpoint protection -To protect your deployment from known malicious software, we recommend enabling endpoint protection on all session hosts. You can use either Windows Defender Antivirus or a third-party program. To learn more, see [Deployment guide for Windows Defender Antivirus in a VDI environment](/windows/security/threat-protection/windows-defender-antivirus/deployment-vdi-windows-defender-antivirus). +To protect your deployment from known malicious software, we recommend enabling endpoint protection on all session hosts. You can use either Windows Defender Antivirus or a third-party program. For more information, see [Deployment guide for Windows Defender Antivirus in a VDI environment](/windows/security/threat-protection/windows-defender-antivirus/deployment-vdi-windows-defender-antivirus#configure-antivirus-file-and-folder-exclusions). -For profile solutions like FSLogix or other solutions that mount virtual hard disk files, we recommend excluding those file extensions. +For profile solutions like FSLogix or other solutions that mount virtual hard disk files, we recommend excluding those file extensions, as sketched after this entry. For more information, see ### Install an endpoint detection and response product |
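Returning to the profile-container exclusion recommendation above, a minimal sketch using the built-in Microsoft Defender Antivirus cmdlets; the extension list is an assumption based on common FSLogix guidance, so confirm the full set for your deployment:

```azurepowershell-interactive
# Exclude virtual hard disk extensions that FSLogix profile containers mount.
# The exact extension list is an assumption; confirm it for your deployment.
Add-MpPreference -ExclusionExtension '.vhd', '.vhdx'

# Review the resulting exclusion list.
(Get-MpPreference).ExclusionExtension
```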
virtual-desktop | Whats New Multimedia Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-multimedia-redirection.md | -This article has the latest updates for multimedia redirection (MMR) for Azure Virtual Desktop. +This article has the latest updates for the host component of multimedia redirection (MMR) for Azure Virtual Desktop. ## Latest available version The following table shows the latest available version of the MMR extension for | Release | Latest version | Download | ||-|-|-| Public | 1.0.2024.4003 | [MMR extension](https://aka.ms/avdmmr/msi) | +| Public | 1.0.2404.4003 | [MMR extension](https://aka.ms/avdmmr/msi) | -## Updates for version 1.0.2024.4003 +## Updates for version 1.0.2404.4003 *Published: July 23, 2024* In this release, we've made the following changes: ## Next steps -Learn more about MMR at [Understanding multimedia direction for Azure Virtual Desktop](multimedia-redirection-intro.md) and [Use multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md). +Learn more about MMR at [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md) and [Use multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md). |
virtual-network | Ipv6 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md | The current IPv6 for Azure Virtual Network release has the following limitations - Azure Firewall doesn't currently support IPv6. It can operate in a dual stack virtual network using only IPv4, but the firewall subnet must be IPv4-only. +- Azure Database for PostgreSQL - Flexible Server doesn't currently support IPv6. A flexible server can't be deployed if there are IPv6 addresses in the virtual network, even if the subnet for the server doesn't have any IPv6 addresses assigned. + ## Pricing There's no charge to use Public IPv6 Addresses or Public IPv6 Prefixes. Associated resources and bandwidth are charged at the same rates as IPv4. You can find details about pricing for [public IP addresses](https://azure.microsoft.com/pricing/details/ip-addresses/), [network bandwidth](https://azure.microsoft.com/pricing/details/bandwidth/), or [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/). |
virtual-wan | Monitor Virtual Wan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md | Title: 'Monitoring Azure Virtual WAN - Data reference' -description: Learn about Azure Virtual WAN logs and metrics using Azure Monitor. --+ Title: Monitoring data reference for Azure Virtual WAN +description: This article contains important reference material you need when you monitor Azure Virtual WAN by using Azure Monitor. Last updated : 09/10/2024+ Previously updated : 02/15/2024+ --+ +# Azure Virtual WAN monitoring data reference -# Monitoring Virtual WAN - Data reference -This article provides a reference of log and metric data collected to analyze the performance and availability of Virtual WAN. See [Monitoring Virtual WAN](monitor-virtual-wan.md) for instructions and additional context on monitoring data for Virtual WAN. +See [Monitor Azure Virtual WAN](monitor-virtual-wan.md) for details on the data you can collect for Virtual WAN and how to use it. -## <a name="metrics"></a>Metrics -### <a name="hub-router-metrics"></a>Virtual hub router metrics +### <a name="hub-router-metrics"></a>Supported metrics for Microsoft.Network/virtualhubs -The following metric is available for virtual hub router within a virtual hub: +The following table lists the metrics available for the Microsoft.Network/virtualhubs resource type. -| Metric | Description| -| | | -| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Only the following flows use the virtual hub router: VNet to VNet (same hub and interhub) and VPN/ExpressRoute branch to VNet (interhub). If a virtual hub is secured with routing intent, then these flows traverse the firewall instead of the hub router. | -| **Routing Infrastructure Units** | The virtual hub's routing infrastructure units (RIU). The virtual hub's RIU determines how much bandwidth the virtual hub router can process for flows traversing the virtual hub router. The hub's RIU also determines how many VMs in spoke VNets the virtual hub router can support. For more details on routing infrastructure units, see [Virtual Hub Capacity](hub-settings.md#capacity). -| **Spoke VM Utilization** | The approximate number of deployed spoke VMs as a percentage of the total number of spoke VMs that the hub's routing infrastructure units can support. For example, if the hub's RIU is set to 2 (which supports 2000 spoke VMs), and 1000 VMs are deployed across spoke VNets, then this metric's value will be approximately 50%. | -### <a name="s2s-metrics"></a>Site-to-site VPN gateway metrics +This table contains more information about some of the metrics in the preceding table. -The following metrics are available for Virtual WAN site-to-site VPN gateways: +| Metric | Description | +|:-|:| +| **Routing Infrastructure Units** | The virtual hub's routing infrastructure units (RIU). The virtual hub's RIU determines how much bandwidth the virtual hub router can process for flows traversing the virtual hub router. The hub's RIU also determines how many VMs in spoke VNets the virtual hub router can support. For more information on routing infrastructure units, see [Virtual Hub Capacity](hub-settings.md#capacity). +| **Spoke VM Utilization** | The approximate number of deployed spoke VMs as a percentage of the total number of spoke VMs that the hub's routing infrastructure units can support. 
For example, if the hub's RIU is set to 2, which supports 2,000 spoke VMs, and 1,000 VMs are deployed across spoke virtual networks, this metric's value is approximately 50%. | ++### <a name="s2s-metrics"></a>Supported metrics for microsoft.network/vpngateways ++The following table lists the metrics available for the microsoft.network/vpngateways resource type. ++++These tables contain more information about some of the metrics in the preceding table. #### Tunnel Packet Drop metrics The following metrics are available for Virtual WAN site-to-site VPN gateways: #### IPSec metrics -| Metric | Description| -| | | +| Metric | Description | +|:-|:| | **Tunnel MMSA Count** | Number of MMSAs getting created or deleted.| | **Tunnel QMSA Count** | Number of IPSEC QMSAs getting created or deleted.| #### Routing metrics -| Metric | Description| -| | | +| Metric | Description | +|:-|:| | **BGP Peer Status** | BGP connectivity status per peer and per instance.| | **BGP Routes Advertised** | Number of routes advertised per peer and per instance.| | **BGP Routes Learned** | Number of routes learned per peer and per instance.|-| **VNET Address Prefix Count** | Number of VNet address prefixes that are used/advertised by the gateway.| +| **VNET Address Prefix Count** | Number of virtual network address prefixes that the gateway uses and advertises.| You can review per peer and instance metrics by selecting **Apply splitting** and choosing the preferred value. #### Traffic Flow metrics -| Metric | Description| -| | | -| **Gateway Bandwidth** | Average site-to-site aggregate bandwidth of a gateway in bytes per second.| +| Metric | Description | +|:-|:| +| **Gateway S2S Bandwidth** | Average site-to-site aggregate bandwidth of a gateway in bytes per second.| | **Gateway Inbound Flows** | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing into a VPN Gateway. Limit is 250k flows.| | **Gateway Outbound Flows** | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing out of a VPN Gateway. Limit is 250k flows.| | **Tunnel Bandwidth** | Average bandwidth of a tunnel in bytes per second.| | **Tunnel Egress Bytes** | Outgoing bytes of a tunnel. | | **Tunnel Egress Packets** | Outgoing packet count of a tunnel. | | **Tunnel Ingress Bytes** | Incoming bytes of a tunnel.|-| **Tunnel Ingress Packet** | Incoming packet count of a tunnel.| +| **Tunnel Ingress Packets** | Incoming packet count of a tunnel.| | **Tunnel Peak PPS** | Number of packets per second per link connection in the last minute.|-| **Tunnel Flow Count** | Number of distinct 3-tuple (protocol, local IP address, remote IP address) flows created per link connection.| +| **Tunnel Total Flow Count** | Number of distinct 3-tuple (protocol, local IP address, remote IP address) flows created per link connection.| -### <a name="p2s-metrics"></a>Point-to-site VPN gateway metrics +### <a name="p2s-metrics"></a>Supported metrics for microsoft.network/p2svpngateways -The following metrics are available for Virtual WAN point-to-site VPN gateways: +The following table lists the metrics available for the microsoft.network/p2svpngateways resource type. -| Metric | Description| -| | | +++This table contains more information about some of the metrics in the preceding table. ++| Metric | Description | +|:-|:| | **Gateway P2S Bandwidth** | Average point-to-site aggregate bandwidth of a gateway in bytes per second. 
| | **P2S Connection Count** | Point-to-site connection count of a gateway. To ensure you're viewing accurate metrics in Azure Monitor, select the **Aggregation Type** for **P2S Connection Count** as **Sum**. You can also select **Max** if you split by **Instance**. | | **User VPN Routes Count** | Number of User VPN Routes configured on the VPN gateway. This metric can be broken down into **Static** and **Dynamic** Routes. | -### <a name="er-metrics"></a>Azure ExpressRoute gateway metrics -The following metrics are available for Azure ExpressRoute gateways: -| Metric | Description| -| | | +### <a name="er-metrics"></a>Supported metrics for microsoft.network/expressroutegateways -The following metrics are available for Azure ExpressRoute gateways: +The following table lists the metrics available for the microsoft.network/expressroutegateways resource type. -| Metric | Description| -| | | +++This table contains more information about some of the metrics in the preceding table. ++| Metric | Description | +|:-|:| | **BitsInPerSecond** | Bits per second ingressing Azure via ExpressRoute that can be further split for specific connections. | | **BitsOutPerSecond** | Bits per second egressing Azure via ExpressRoute that can be further split for specific connections. | | **Bits Received Per Second** | Total Bits received on ExpressRoute gateway per second. | | **CPU Utilization** | CPU Utilization of the ExpressRoute gateway.|-| **Packets per second** | Total Packets received on ExpressRoute gateway per second.| +| **Packets received per second** | Total Packets received on ExpressRoute gateway per second.| | **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute gateway. | | **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute gateway.|-| **Frequency of routes changed** | Frequency of Route changes in ExpressRoute gateway.| +| **Frequency of routes change** | Frequency of Route changes in ExpressRoute gateway.| -## <a name="diagnostic"></a>Diagnostic logs -The following diagnostic logs are available, unless otherwise specified. -### <a name="s2s-diagnostic"></a>Site-to-site VPN gateway diagnostics -The following diagnostics are available for Virtual WAN site-to-site VPN gateways: -| Metric | Description| -| | | -| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and additional diagnostics.| -| **Tunnel Diagnostic Logs** | These are IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and additional diagnostics. For connect and disconnect events, these logs also display the remote IP address of the corresponding on-premises VPN device.| -| **Route Diagnostic Logs** | These are logs related to events for static routes, BGP, route updates, and additional diagnostics. | -| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections. 
| +Microsoft.Network/virtualhubs -### <a name="p2s-diagnostic"></a>Point-to-site VPN gateway diagnostics +- bgppeerip +- bgppeertype +- routeserviceinstance -The following diagnostics are available for Virtual WAN point-to-site VPN gateways: +microsoft.network/vpngateways -| Metric | Description| -| | | +- BgpPeerAddress +- ConnectionName +- DropType +- FlowType +- Instance +- NatRule +- RemoteIP ++microsoft.network/p2svpngateways ++- Instance +- Protocol +- RouteType ++microsoft.network/expressroutegateways ++- BgpPeerAddress +- ConnectionName +- direction +- roleInstance ++<a name="diagnostic"></a> ++### <a name="p2s-diagnostic"></a>Supported resource logs for microsoft.network/p2svpngateways +++This table contains more information about the preceding table. ++| Metric | Description | +|:-|:| | **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and other diagnostics. | | **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections.|-| **P2S Diagnostic Logs** | These are User VPN P2S (Point-to-site) configuration and client events. They include client connect/disconnect, VPN client address allocation, and other diagnostics.| +| **P2S Diagnostic Logs** | These events are User VPN P2S (Point-to-site) configuration and client events. They include client connect/disconnect, VPN client address allocation, and other diagnostics.| -### ExpressRoute gateway diagnostics +### <a name="s2s-diagnostic"></a>Supported resource logs for microsoft.network/vpngateways -In Azure Virtual WAN, ExpressRoute gateway metrics can be exported as logs via a diagnostic setting. ++This table contains more information about the preceding table. ++| Metric | Description | +|:-|:| +| **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and other diagnostics. | +| **Tunnel Diagnostic Logs** | IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and other diagnostics. For connect and disconnect events, these logs also display the remote IP address of the corresponding on-premises VPN device. | +| **Route Diagnostic Logs** | Logs related to events for static routes, BGP, route updates, and other diagnostics. | +| **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections. | ### Log Analytics sample query The following example contains a query to obtain site-to-site route diagnostics. `AzureDiagnostics | where Category == "RouteDiagnosticLog"` -Replace the following values, after the **= =**, as needed based on the tables reported in the previous section of this article. +Replace the following values, after the `==`, as needed based on the tables in this article. 
-* "GatewayDiagnosticLog" -* "IKEDiagnosticLog" -* "P2SDiagnosticLogΓÇ¥ -* "TunnelDiagnosticLog" -* "RouteDiagnosticLog" +- GatewayDiagnosticLog +- IKEDiagnosticLog +- P2SDiagnosticLog +- TunnelDiagnosticLog +- RouteDiagnosticLog -In order to execute the query, you have to open the Log Analytics resource you configured to receive the diagnostic logs, and then select **Logs** under the **General** tab on the left side of the pane: +In order to run the query, you have to open the Log Analytics resource you configured to receive the diagnostic logs, and then select **Logs** under the **General** tab on the left side of the pane: :::image type="content" source="./media/monitor-virtual-wan-reference/log-analytics-query-samples.png" alt-text="Screenshot of Log Analytics Query samples." lightbox="./media/monitor-virtual-wan-reference/log-analytics-query-samples.png"::: For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, you can investigate the diagnostic data without manually writing any Log Analytics query. -## <a name="activity-logs"></a>Activity logs --[**Activity log**](/azure/azure-monitor/essentials/activity-log) entries are collected by default and can be viewed in the Azure portal. You can use Azure activity logs (formerly known as *operational logs* and *audit logs*) to view all operations submitted to your Azure subscription. --You can view activity logs independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. --For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema). --## <a name="schemas"></a>Schemas --For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](/azure/azure-monitor/essentials/resource-logs-schema). --When reviewing any metrics through Log Analytics, the output contains the following columns: --|**Column**|**Type**|**Description**| -| | | | -|TimeGrain|string|PT1M (metric values are pushed every minute)| -|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)| -|Minimum|real|The minimum of the two metric values pushed by the two MSEEs| -|Maximum|real|The maximum of the two metric values pushed by the two MSEEs| -|Average|real|Equal to (Minimum + Maximum)/2| -|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)| -## <a name="azure-firewall"></a>Monitoring secured hub (Azure Firewall) +### Microsoft.Network/vpnGateways (Virtual WAN site-to-site VPN gateways) -If you chose to secure your virtual hub using Azure Firewall, relevant logs and metrics are available here: [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md). -You can monitor the Secured Hub using Azure Firewall logs and metrics. You can also use activity logs to audit operations on Azure Firewall resources. -For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit firewall resource object is created in the resource group where the hub is located. 
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns) +- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns) +- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns) +<a name="activity-logs"></a> +- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork) -## Next steps +## Related content -* To learn how to monitor Azure Firewall logs and metrics, see [Tutorial: Monitor Azure Firewall logs](../firewall/firewall-diagnostics.md). -* For more information about Virtual WAN monitoring, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md). -* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](/azure/azure-monitor/essentials/data-platform-metrics). +- See [Monitor Azure Virtual WAN](monitor-virtual-wan.md) for a description of monitoring Virtual WAN. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. +- To learn how to monitor Azure Firewall logs and metrics, see [Tutorial: Monitor Azure Firewall logs](../firewall/firewall-diagnostics.md). |
virtual-wan | Monitor Virtual Wan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md | Title: Monitoring Virtual WAN- -description: Start here to learn how to monitor Virtual WAN. + Title: Monitor Azure Virtual WAN +description: Start here to learn how to monitor availability and performance for Azure Virtual WAN by using Azure Monitor. Last updated : 09/10/2024++ -- Previously updated : 02/15/2024 -# Monitoring Azure Virtual WAN +# Monitor Azure Virtual WAN -When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability and performance. -This article describes the monitoring data generated by Azure Virtual WAN. Virtual WAN uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). -## Prerequisites +Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a Virtual WAN, presented through an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the Virtual WAN. You can navigate resources on the map by using one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md). -You have a virtual WAN deployed and configured. For help with deploying a virtual WAN: -* [Creating a site-to-site connection](virtual-wan-site-to-site-portal.md) -* [Creating a User VPN (point-to-site) connection](virtual-wan-point-to-site-portal.md) -* [Creating an ExpressRoute connection](virtual-wan-expressroute-portal.md) -* [Creating an NVA in a virtual hub](how-to-nva-hub.md) -* [Installing Azure Firewall in a Virtual hub](howto-firewall.md) +For more information about the resource types for Virtual WAN, see [Azure Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md). -## Analyzing metrics -Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic. -For a list of the platform metrics collected for Virtual WAN, see [Monitoring Virtual WAN data reference metrics](monitor-virtual-wan-reference.md#metrics). +For a list of available metrics for Virtual WAN, see [Azure Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md#metrics). -### <a name="metrics-steps"></a>View metrics for Virtual WAN +<a name="metrics-steps"></a> -The following steps help you locate and view metrics: +You can view metrics for Virtual WAN by using the Azure portal. The following steps help you locate and view metrics: -1. In the portal, navigate to the virtual hub. --1. Select **VPN (Site to site)** to locate a site-to-site gateway, **ExpressRoute** to locate an ExpressRoute gateway, or **User VPN (Point to site)** to locate a point-to-site gateway. --1. Select **Monitor Gateway** and then **Metrics**. You can also click **Metrics** at the bottom to view a dashboard of the most important metrics for site-to-site and point-to-site VPN. +1. Select **Monitor Gateway** and then **Metrics**. 
You can also select **Metrics** at the bottom to view a dashboard of the most important metrics for site-to-site and point-to-site VPN. :::image type="content" source="./media/monitor-virtual-wan-reference/site-to-site-vpn-metrics-dashboard.png" alt-text="Screenshot shows the site-to-site VPN metrics dashboard." lightbox="./media/monitor-virtual-wan-reference/site-to-site-vpn-metrics-dashboard.png"::: -1. On the **Metrics** page, you can view the metrics that you're interested in. +1. On the **Metrics** page, you can view the metrics. :::image type="content" source="./media/monitor-virtual-wan-reference/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted." lightbox="./media/monitor-virtual-wan-reference/metrics-page.png"::: 1. To see metrics for the virtual hub router, you can select **Metrics** from the virtual hub **Overview** page. - :::image type="content" source="./media/monitor-virtual-wan-reference/hub-metrics.png" alt-text="Screenshot that shows the virtual hub page with the metrics button." lightbox="./media/monitor-virtual-wan-reference/hub-metrics.png"::: + :::image type="content" source="./media/monitor-virtual-wan-reference/hub-metrics.png" alt-text="Screenshot that shows the virtual hub page with the metrics button." lightbox="./media/monitor-virtual-wan-reference/hub-metrics.png"::: -#### PowerShell steps For more information, see [Analyze metrics for an Azure resource](/azure/azure-monitor/essentials/tutorial-metrics). -To query, use the following example PowerShell commands. The necessary fields are explained below the example. +#### PowerShell steps -**Step 1:** +You can view metrics for Virtual WAN by using PowerShell. To query, use the following example PowerShell commands. ```azurepowershell-interactive $MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Sum-``` --**Step 2:** -```azurepowershell-interactive $MetricInformation.Data ``` -* **Resource ID** - Your virtual hub's Resource ID can be found on the Azure portal. Navigate to the virtual hub page within vWAN and select **JSON View** under Essentials. --* **Metric Name** - Refers to the name of the metric you're querying, which in this case is called 'VirtualHubDataProcessed'. This metric shows all the data that the virtual hub router has processed in the selected time period of the hub. +- **Resource ID**. You can find your virtual hub's resource ID in the Azure portal. Navigate to the virtual hub page within vWAN and select **JSON View** under Essentials. +- **Metric Name**. Refers to the name of the metric you're querying, which in this case is called `VirtualHubDataProcessed`. This metric shows all the data that the virtual hub router processed in the selected time period of the hub. +- **Time Grain**. Refers to the frequency at which you want to see the aggregation. In the current command, the aggregation is shown per 5 minutes. You can select 5M, 15M, 30M, 1H, 6H, 12H, or 1D. +- **Start Time and End Time**. This time is based on UTC. Ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default. +- **Sum Aggregation Type**. 
-* **Time Grain** - Refers to the frequency at which you want to see the aggregation. In the current command, you'll see a selected aggregated unit per 5 mins. You can select – 5M/15M/30M/1H/6H/12H and 1D. -* **Start Time and End Time** - This time is based on UTC. Ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default. +For the available resource log categories, their associated Log Analytics tables, and the log schemas for Virtual WAN, see [Azure Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md#resource-logs). -* **Sum Aggregation Type** - The **sum** aggregation type shows you the total number of bytes that traversed the virtual hub router during a selected time period. For example, if you set the Time granularity to 5 minutes, each data point will correspond to the number of bytes sent in that 5 minute interval. To convert this to Gbps, you can divide this number by 37500000000. Based on the virtual hub's [capacity](hub-settings.md#capacity), the hub router can support between 3 Gbps and 50 Gbps. The **Max** and **Min** aggregation types aren't meaningful at this time. +### <a name="schemas"></a>Schemas +For a detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](/azure/azure-monitor/essentials/resource-logs-schema). -## Analyzing logs +When you review any metrics through Log Analytics, the output contains the following columns: -Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. --For a list of supported logs in Virtual WAN, see [Monitoring Virtual WAN data reference logs](monitor-virtual-wan-reference.md#diagnostic). All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). +|**Column**|**Type**|**Description**| +| | | | +|TimeGrain|string|PT1M (metric values are pushed every minute)| +|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)| +|Minimum|real|The minimum of the two metric values pushed by the two MSEEs| +|Maximum|real|The maximum of the two metric values pushed by the two MSEEs| +|Average|real|Equal to (Minimum + Maximum)/2| +|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)|
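To make these aggregation columns concrete: if, in a given minute, one MSEE pushes a value of 10 and the other pushes 14, then Count = 2, Minimum = 10, Maximum = 14, Average = (10 + 14)/2 = 12, and Total = 24. The numbers here are illustrative only.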
### <a name="create-diagnostic"></a>Create diagnostic setting to view logs The following steps help you create, edit, and view diagnostic settings: :::image type="content" source="./media/monitor-virtual-wan-reference/select-hub-gateway.png" alt-text="Screenshot that shows the Connectivity section for the hub." lightbox="./media/monitor-virtual-wan-reference/select-hub-gateway.png"::: -1. On the right part of the page, select **Monitor Gateway** and then **Logs**. :::image type="content" source="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs." lightbox="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png"::: 1. In this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the following example), stream to an event hub, send to a third-party solution, or archive to a storage account. - :::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png"::: + :::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png"::: + 1. After you select **Save**, logs start to appear in the Log Analytics workspace within a few hours. 1. To monitor a **secured hub (with Azure Firewall)**, configure diagnostics and logging from the **Diagnostic Settings** tab: - :::image type="content" source="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" alt-text="Screenshot shows Firewall diagnostic settings." lightbox="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" ::: + :::image type="content" source="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" alt-text="Screenshot shows Firewall diagnostic settings." lightbox="./media/monitor-virtual-wan-reference/firewall-diagnostic-settings.png" ::: > [!IMPORTANT] > Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
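If you prefer to script this step, the following sketch creates an equivalent diagnostic setting with Azure PowerShell (Az.Monitor 3.x cmdlets). It's a minimal illustration under stated assumptions: the gateway and workspace resource IDs are placeholders, and the `TunnelDiagnosticLog`/`RouteDiagnosticLog` categories should be checked against the categories your gateway actually exposes in the data reference.

```azurepowershell-interactive
# Placeholders - substitute your own resource IDs.
$GatewayId   = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/vpnGateways/<GatewayName>"
$WorkspaceId = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"

# Collect the log categories to route to Log Analytics (assumed category names).
$Logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category "TunnelDiagnosticLog"
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category "RouteDiagnosticLog"
)

# Create (or overwrite) the diagnostic setting on the gateway.
New-AzDiagnosticSetting -Name "gateway-logs-to-law" -ResourceId $GatewayId -WorkspaceId $WorkspaceId -Log $Logs
```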
-## Alerts +## <a name="azure-firewall"></a>Monitoring secured hub (Azure Firewall) ++If you chose to secure your virtual hub by using Azure Firewall, you can monitor the secured hub through [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md). You can also use activity logs to audit operations on Azure Firewall resources. For every Azure Virtual WAN hub you secure and convert to a secured hub, Azure Firewall creates an explicit firewall resource object. The object is in the resource group where the hub is located. ++### Virtual WAN alert rules ++You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md). ++## Monitoring Azure Virtual WAN - Best practices ++This section provides configuration best practices for monitoring Virtual WAN and the different components that can be deployed with it. The recommendations presented here are mostly based on existing Azure Monitor metrics and logs generated by Azure Virtual WAN. For a list of metrics and logs collected for Virtual WAN, see the [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md). ++Most of the recommendations in this article suggest creating Azure Monitor alerts. Azure Monitor alerts proactively notify you when there's an important event in the monitoring data. This information helps you address the root cause more quickly and ultimately reduce downtime. To learn how to create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](/azure/azure-monitor/alerts/tutorial-metric-alert). To learn how to create a log query alert, see [Tutorial: Create a log query alert for an Azure resource](/azure/azure-monitor/alerts/tutorial-log-alert). ++### Virtual WAN gateways ++This section describes best practices for Virtual WAN gateways. ++#### Site-to-site VPN gateway ++**Design checklist – metric alerts** ++- Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count. +- Create alert rule to monitor BGP peer status. +- Create alert rule to monitor number of BGP routes advertised and learned. +- Create alert rule for VPN gateway overutilization. +- Create alert rule for tunnel overutilization. ++| Recommendation | Description | +|:|:| +|Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count.| An increase in tunnel egress and/or ingress packet drop count might indicate an issue with the Azure VPN gateway, or with the remote VPN device. Select the **Tunnel Egress/Ingress Packet drop count** metric when creating alert rules. Define a **static Threshold value** greater than **0** and the **Total** aggregation type when configuring the alert logic.<br><br>You can choose to monitor the **Connection** as a whole, or split the alert rule by **Instance** and **Remote IP** to be alerted for issues involving individual tunnels. To learn the difference between the concept of **VPN connection**, **link**, and **tunnel** in Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).| +|Create alert rule to monitor BGP peer status.|When using BGP in your site-to-site connections, it's important to monitor the health of the BGP peerings between the gateway instances and the remote devices, as recurrent failures can disrupt connectivity.<br><br>Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**. For a scripted version of this alert, see the PowerShell sketch after this table.<br><br>We recommend that you split the alert by **Instance** and **BGP Peer Address** to detect issues with individual peerings. Avoid selecting the gateway instance IPs as **BGP Peer Address** because this metric monitors the BGP status for every possible combination, including with the instance itself (which is always 0).| +|Create alert rule to monitor number of BGP routes advertised and learned.|**BGP Routes Advertised** and **BGP Routes Learned** monitor the number of routes advertised to and learned from peers by the VPN gateway, respectively. If these metrics drop to zero unexpectedly, it could be because there's an issue with the gateway or with on-premises.<br><br>We recommend that you configure an alert for both these metrics to be triggered whenever their value is **zero**. Choose the **Total** aggregation type. Split by **Instance** to monitor individual gateway instances.| +|Create alert rule for VPN gateway overutilization.|The number of scale units per instance determines a VPN gateway's aggregate throughput. 
All tunnels that terminate in the same gateway instance share its aggregate throughput. Tunnel stability is likely to be affected if an instance works at its capacity for a long period of time.<br><br>Select **Gateway S2S Bandwidth** when creating the alert rule. Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum aggregate throughput of **both instances**. Alternatively, split the alert **by instance** and use the maximum throughput **per instance** as a reference.<br><br>It's good practice to determine the throughput needs per tunnel in advance in order to choose the appropriate number of scale units. To learn more about the supported scale unit values for site-to-site VPN gateways, see the [Virtual WAN FAQ](virtual-wan-faq.md).| +|Create alert rule for tunnel overutilization.|The scale units of the gateway instance where a tunnel terminates determine the maximum throughput allowed per tunnel.<br><br>You might want to be alerted if a tunnel is at risk of nearing its maximum throughput, which can lead to performance and connectivity issues. Act proactively by investigating the root cause of the increased tunnel utilization or by increasing the gateway's scale units.<br><br>Select **Tunnel Bandwidth** when creating the alert rule. Split by **Instance** and **Remote IP** to monitor all individual tunnels or choose specific tunnels instead. Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum throughput allowed per tunnel.<br><br>To learn more about how the gateway's scale units impact a tunnel's maximum throughput, see the [Virtual WAN FAQ](virtual-wan-faq.md).|
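The following sketch shows how the BGP peer status recommendation in the preceding table might be scripted with Azure PowerShell. It's a minimal illustration, not the article's procedure: the metric name `BgpPeerStatus`, the resource IDs, and the action group are assumptions to verify against the [data reference](monitor-virtual-wan-reference.md#metrics) before use.

```azurepowershell-interactive
# Alert when the average BGP peer status drops below 1 over a 5-minute window.
# "BgpPeerStatus" is the assumed metric name; confirm it in the data reference.
$Criteria = New-AzMetricAlertRuleV2Criteria -MetricName "BgpPeerStatus" `
    -TimeAggregation Average -Operator LessThan -Threshold 1

Add-AzMetricAlertRuleV2 -Name "s2s-bgp-peer-down" -ResourceGroupName "<ResourceGroupName>" `
    -TargetResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/vpnGateways/<GatewayName>" `
    -WindowSize 00:05:00 -Frequency 00:01:00 -Severity 1 `
    -Condition $Criteria -ActionGroupId "<ActionGroupResourceId>"
```

To split the alert by **Instance** and **BGP Peer Address** as the table suggests, you can pass dimension selections (for example, built with `New-AzMetricAlertRuleV2DimensionSelection`) to the criteria; the same pattern applies to the other metric alerts in this section.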
++**Design checklist - log query alerts** ++To configure log-based alerts, you must first create a diagnostic setting for your site-to-site/point-to-site VPN gateway. A diagnostic setting is where you define what logs and/or metrics you want to collect and how you want to store that data to be analyzed later. Unlike gateway metrics, gateway logs aren't available if there's no diagnostic setting configured. To learn how to create a diagnostic setting, see [Create diagnostic setting to view logs](#create-diagnostic). ++- Create tunnel disconnect alert rule. +- Create BGP disconnect alert rule. ++| Recommendation | Description | +|:|:| +|Create tunnel disconnect alert rule.|Use **Tunnel Diagnostic Logs** to track disconnect events in your site-to-site connections. A disconnect event can be due to a failure to negotiate SAs, unresponsiveness of the remote VPN device, or other causes. Tunnel Diagnostic Logs also provide the disconnect reason. See the **Create tunnel disconnect alert rule - log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows resulting from running the query is **greater than 0**. For this alert to be effective, select **Aggregation Granularity** to be between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval passes, the number of rows is 0 again for a new interval.<br><br>For troubleshooting tips when analyzing Tunnel Diagnostic Logs, see [Troubleshoot Azure VPN gateway](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#TunnelDiagnosticLog) using diagnostic logs. Additionally, use **IKE Diagnostic Logs** to complement your troubleshooting, as these logs contain detailed IKE-specific diagnostics.| +|Create BGP disconnect alert rule.|Use **Route Diagnostic Logs** to track route updates and issues with BGP sessions. Repeated BGP disconnect events can affect connectivity and cause downtime. See the **Create BGP disconnect alert rule - log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows resulting from running the query is **greater than 0**. For this alert to be effective, select **Aggregation Granularity** to be between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval passes, the number of rows is 0 again for a new interval if the BGP sessions are restored.<br><br>For more information about the data collected by Route Diagnostic Logs, see [Troubleshooting Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#RouteDiagnosticLog).| ++**Log queries** ++- **Create tunnel disconnect alert rule - log query**: The following log query can be used to select tunnel disconnect events when creating the alert rule: ++ ```text + AzureDiagnostics + | where Category == "TunnelDiagnosticLog" + | where OperationName == "TunnelDisconnected" + ``` ++- **Create BGP disconnect alert rule - log query**: The following log query can be used to select BGP disconnect events when creating the alert rule: ++ ```text + AzureDiagnostics + | where Category == "RouteDiagnosticLog" + | where OperationName == "BgpDisconnectedEvent" + ```
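Before you wire either query into an alert rule, you can sanity-check it against your workspace. The following minimal sketch runs the tunnel disconnect query with the `Az.OperationalInsights` module; the workspace GUID is a placeholder, and the 24-hour timespan is just an example.

```azurepowershell-interactive
# KQL query from the "Log queries" section above.
$Query = @"
AzureDiagnostics
| where Category == "TunnelDiagnosticLog"
| where OperationName == "TunnelDisconnected"
"@

# -WorkspaceId takes the Log Analytics workspace GUID, not the full resource ID.
$Results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<WorkspaceGuid>" -Query $Query -Timespan (New-TimeSpan -Hours 24)

# Each row is a disconnect event; zero rows means no disconnects in the window.
$Results.Results | Format-Table TimeGenerated, Resource, OperationName
```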
++#### Point-to-site VPN gateway ++The following section details the configuration of metric-based alerts only. However, Virtual WAN point-to-site gateways also support diagnostic logs. To learn more about the available diagnostic logs for point-to-site gateways, see [Virtual WAN point-to-site VPN gateway diagnostics](monitor-virtual-wan-reference.md#p2s-diagnostic). ++**Design checklist - metric alerts** ++- Create alert rule for gateway overutilization. +- Create alert for P2S connection count nearing limit. +- Create alert for User VPN route count nearing limit. ++| Recommendation | Description | +|:|:| +|Create alert rule for gateway overutilization.|The number of scale units configured determines the bandwidth of a point-to-site gateway. To learn more about point-to-site gateway scale units, see Point-to-site (User VPN).<br><br>Use the **Gateway P2S Bandwidth** metric to monitor the gateway's utilization and configure an alert rule that is triggered whenever the gateway's bandwidth is **greater than** a value near its aggregate throughput – for example, if the gateway was configured with 2 scale units, it has an aggregate throughput of 1 Gbps. In this case, you could define a threshold value of 950 Mbps.<br><br>Use this alert to proactively investigate the root cause of the increased utilization, and ultimately increase the number of scale units, if needed. Select the **Average** aggregation type when configuring the alert rule.| +|Create alert for P2S connection count nearing limit.|The maximum number of point-to-site connections allowed is also determined by the number of scale units configured on the gateway. To learn more about point-to-site gateway scale units, see the FAQ for [Point-to-site (User VPN)](virtual-wan-faq.md#p2s-concurrent).<br><br>Use the **P2S Connection Count** metric to monitor the number of connections. Select this metric to configure an alert rule that is triggered whenever the number of connections is nearing the maximum allowed. For example, a 1-scale unit gateway supports up to 500 concurrent connections. In this case, you could configure the alert to be triggered whenever the number of connections is **greater than** 450.<br><br>Use this alert to determine whether an increase in the number of scale units is required or not. Choose the **Total** aggregation type when configuring the alert rule.| +|Create alert rule for User VPN routes count nearing limit.|The protocol used determines the maximum number of User VPN routes. IKEv2 has a protocol-level limit of 255 routes, whereas OpenVPN has a limit of 1,000 routes. To learn more, see [VPN server configuration concepts](point-to-site-concepts.md#vpn-server-configuration-concepts).<br><br>You might want to be alerted if you're close to hitting the maximum number of User VPN routes, so you can act proactively to avoid downtime. Use the **User VPN Route Count** metric to monitor this situation and configure an alert rule that is triggered whenever the number of routes surpasses a value close to the limit. For example, if the limit is 255 routes, an appropriate **Threshold** value could be 230. Choose the **Total** aggregation type when configuring the alert rule.| ++#### ExpressRoute gateway ++The following section focuses on metric-based alerts. In addition to the alerts described here, which focus on the gateway component, we recommend that you use the available metrics, logs, and tools to monitor the ExpressRoute circuit. To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md). ++**Design checklist - metric alerts** ++- Create alert rule for bits received per second. +- Create alert rule for CPU overutilization. +- Create alert rule for packets per second. +- Create alert rule for number of routes advertised to peer. +- Create alert rule for number of routes learned from peer. +- Create alert rule for high frequency in route changes. ++| Recommendation | Description | +|:|:| +|Create alert rule for Bits Received Per Second.|**Bits Received per Second** monitors the total amount of traffic received by the gateway from the MSEEs.<br><br>You might want to be alerted if the amount of traffic received by the gateway is at risk of hitting its maximum throughput. This situation can lead to performance and connectivity issues. 
This approach allows you to act proactively by investigating the root cause of the increased gateway utilization or increasing the gateway's maximum allowed throughput.<br><br>Choose the **Average** aggregation type and a **Threshold** value close to the maximum throughput provisioned for the gateway when configuring the alert rule.<br><br>Additionally, we recommend that you set an alert when the number of **Bits Received per Second** is near zero, as it might indicate an issue with the gateway or the MSEEs.<br><br>The number of scale units provisioned determines the maximum throughput of an ExpressRoute gateway. To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| +|Create alert rule for CPU overutilization.|When using ExpressRoute gateways, it's important to monitor the CPU utilization. Prolonged high utilization can affect performance and connectivity.<br><br>Use the **CPU utilization** metric to monitor utilization and create an alert for whenever the CPU utilization is **greater than** 80%, so you can investigate the root cause and ultimately increase the number of scale units, if needed. Choose the **Average** aggregation type when configuring the alert rule.<br><br>To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| +|Create alert rule for packets received per second.|**Packets per second** monitors the number of inbound packets traversing the Virtual WAN ExpressRoute gateway.<br><br>You might want to be alerted if the number of **packets per second** is nearing the limit allowed for the number of scale units configured on the gateway.<br><br>Choose the **Average** aggregation type when configuring the alert rule. Choose a **Threshold** value close to the maximum number of **packets per second** allowed based on the number of scale units of the gateway. To learn more about ExpressRoute performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).<br><br>Additionally, we recommend that you set an alert when the number of **Packets per second** is near zero, as it might indicate an issue with the gateway or MSEEs.| +|Create alert rule for number of routes advertised to peer. |**Count of Routes Advertised to Peers** monitors the number of routes advertised from the ExpressRoute gateway to the virtual hub router and to the Microsoft Enterprise Edge Devices.<br><br>We recommend that you **add a filter** to **only** select the two BGP peers displayed as **ExpressRoute Device** and create an alert to identify when the count of advertised routes approaches the documented limit of **1000**. 
For example, configure the alert to be triggered when the number of routes advertised is **greater than 950**.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero** in order to proactively detect any connectivity issues.<br><br>To add these alerts, select the **Count of Routes Advertised to Peers** metric, and then select the **Add filter** option and the **ExpressRoute** devices.| +|Create alert rule for number of routes learned from peer.|**Count of Routes Learned from Peers** monitors the number of routes the ExpressRoute gateway learns from the virtual hub router and from the Microsoft Enterprise Edge Device.<br><br>We recommend that you add a filter to **only** select the two BGP peers displayed as **ExpressRoute Device** and create an alert to identify when the count of learned routes approaches the [documented limit](../expressroute/expressroute-faqs.md#are-there-limits-on-the-number-of-routes-i-can-advertise) of 4,000 for Standard SKU and 10,000 for Premium SKU circuits.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero**. This approach can help in detecting when your on-premises site stops advertising routes.| +|Create alert rule for high frequency in route changes.|**Frequency of Routes changes** shows the change frequency of routes being learned and advertised from and to peers, including other types of branches such as site-to-site and point-to-site VPN. This metric provides visibility when a new branch or more circuits are being connected/disconnected.<br><br>This metric is a useful tool when identifying issues with BGP advertisements, such as route flapping. We recommend that you set an alert **if** the environment is **static** and BGP changes aren't expected. Select a **threshold value** that is **greater than 1** and an **Aggregation Granularity** of 15 minutes to monitor BGP behavior consistently.<br><br>If the environment is dynamic and BGP changes are frequently expected, you might choose not to set this alert in order to avoid false positives. However, you can still consider this metric for observability of your network.| ++### Virtual hub ++The following section focuses on metrics-based alerts for virtual hubs. ++**Design checklist - metric alerts** ++- Create alert rule for BGP peer status ++| Recommendation | Description | +|:|:| +|Create alert rule to monitor BGP peer status.| Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**.<br><br> This approach allows you to identify when the virtual hub router is having connectivity issues with ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN gateways deployed in the hub.| ++### Azure Firewall ++This section of the article focuses on metric-based alerts. Azure Firewall offers a comprehensive list of [metrics and logs](../firewall/firewall-diagnostics.md) for monitoring purposes. In addition to configuring the alerts described in the following section, explore how [Azure Firewall Workbook](../firewall/firewall-workbook.md) can help monitor your Azure Firewall. Also, explore the benefits of connecting Azure Firewall logs to Microsoft Sentinel using [Azure Firewall connector for Microsoft Sentinel](../sentinel/data-connectors/azure-firewall.md). 
++**Design checklist - metric alerts** -Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-types#metric-alerts), [logs](/azure/azure-monitor/alerts/alerts-types#log-alerts), and the [activity log](/azure/azure-monitor/alerts/alerts-types#activity-log-alerts). Different types of alerts have benefits and drawbacks. +- Create alert rule for risk of SNAT port exhaustion. +- Create alert rule for firewall overutilization. -To see a list of monitoring best practices when configuring alerts, see [Monitoring - best practices](monitoring-best-practices.md). +| Recommendation | Description | +|:|:| +|Create alert rule for risk of SNAT port exhaustion.|Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale instance. It's important to estimate in advance the number of SNAT ports that can fulfill your organizational requirements for outbound traffic to the Internet. Not doing so increases the risk of exhausting the number of available SNAT ports on the Azure Firewall, potentially causing outbound connectivity failures.<br><br>Use the **SNAT port utilization** metric to monitor the percentage of outbound SNAT ports currently in use. Create an alert rule for this metric to be triggered whenever this percentage surpasses **95%** (due to an unforeseen traffic increase, for example) so you can act accordingly by configuring another public IP address on the Azure Firewall, or by using an [Azure NAT Gateway](../nat-gateway/nat-overview.md) instead. Use the **Maximum** aggregation type when configuring the alert rule; see the PowerShell sketch after this table.<br><br>To learn more about how to interpret the **SNAT port utilization** metric, see [Overview of Azure Firewall logs and metrics](../firewall/logs-and-metrics.md#metrics). To learn more about how to scale SNAT ports in Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md).| +|Create alert rule for firewall overutilization.|Azure Firewall maximum throughput differs depending on the SKU and features enabled. To learn more about Azure Firewall performance, see [Azure Firewall performance](../firewall/firewall-performance.md).<br><br>You might want to be alerted if your firewall is nearing its maximum throughput, so you can troubleshoot the underlying cause, because this situation can affect the firewall's performance.<br><br> Create an alert rule to be triggered whenever the **Throughput** metric surpasses a value nearing the firewall's maximum throughput – if the maximum throughput is 30 Gbps, configure 25 Gbps as the **Threshold** value, for example. The **Throughput** metric unit is **bits/sec**. Choose the **Average** aggregation type when creating the alert rule.|
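Consistent with the metric-alert pattern shown earlier for the site-to-site gateway, here's a hedged sketch of the SNAT port exhaustion alert. The metric name `SNATPortUtilization` is an assumption to confirm in the Azure Firewall metrics documentation, and the resource IDs are placeholders.

```azurepowershell-interactive
# Alert when the maximum SNAT port utilization exceeds 95% in a 5-minute window.
# "SNATPortUtilization" is the assumed metric name; confirm it before use.
$Criteria = New-AzMetricAlertRuleV2Criteria -MetricName "SNATPortUtilization" `
    -TimeAggregation Maximum -Operator GreaterThan -Threshold 95

Add-AzMetricAlertRuleV2 -Name "fw-snat-port-exhaustion" -ResourceGroupName "<ResourceGroupName>" `
    -TargetResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/azureFirewalls/<FirewallName>" `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2 `
    -Condition $Criteria -ActionGroupId "<ActionGroupResourceId>"
```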
-## Virtual WAN Insights +### Resource Health Alerts -Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights". +You can also configure [Resource Health Alerts](/azure/service-health/resource-health-alert-monitor-guide) via Service Health for the following resources. This approach ensures you're informed of the availability of your Virtual WAN environment. The alerts allow you to troubleshoot whether networking issues are due to your Azure resources entering an unhealthy state, as opposed to issues from your on-premises environment. We recommend that you configure alerts when the resource status becomes degraded or unavailable. If the resource status does become degraded/unavailable, you can analyze whether there are any recent spikes in the amount of traffic processed by these resources, the routes advertised to these resources, or the number of branch/VNet connections created. For more information about limits supported in Virtual WAN, see [Azure Virtual WAN limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-wan-limits). -Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md). +- Microsoft.Network/vpnGateways +- Microsoft.Network/expressRouteGateways +- Microsoft.Network/azureFirewalls +- Microsoft.Network/virtualHubs +- Microsoft.Network/p2sVpnGateways -## Next steps +## Related content -* See [Monitoring Virtual WAN - Data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN. -* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. -* See [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics) for more details on **Azure Monitor Metrics**. -* See [All resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported) for a list of all supported metrics. -* See [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings) for more information and troubleshooting for creating diagnostic settings via Azure portal, CLI, PowerShell, etc. +- See [Azure Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md) for a reference of the metrics, logs, and other important values created for Virtual WAN. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
virtual-wan | Monitoring Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md | - Title: Monitoring Virtual WAN - Best practices -description: This article helps you learn Monitoring best practices for Virtual WAN. (Article deleted in this change; its best-practices content was merged into the Monitor Azure Virtual WAN article shown in the preceding row.) |
web-application-firewall | Waf Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/waf-azure-policy.md | Title: Azure Web Application Firewall and Azure Policy description: Azure Web Application Firewall (WAF) combined with Azure Policy can help enforce organizational standards and assess compliance at scale for WAF resources-++ Last updated 05/25/2023- # Azure Web Application Firewall and Azure Policy |