Updates from: 01/31/2022 02:05:48
Service Microsoft Docs article Related commit history on GitHub Change details
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
Previously updated : 01/18/2022
Last updated : 01/29/2022
#Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster running Kubernetes 1.21 or above. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
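For example, you can check the installed version and upgrade in place from the same terminal (the `az upgrade` command is available in Azure CLI 2.11.0 and later):

```azurecli
az --version
az upgrade
```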
spec:
    mountPath: /mnt/azure
  volumes:
  - name: azure
- azureFile:
- secretName: azure-secret
- shareName: aksshare
- readOnly: false
+ csi:
+ driver: file.csi.azure.com
+ volumeAttributes:
+ secretName: azure-secret # required
+ shareName: aksshare # required
+ mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30" # optional
```
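The pod spec above references a Kubernetes secret named `azure-secret` that holds the storage account name and key for the share. If it doesn't exist yet, one way to create it is sketched below; the two values are placeholders for the storage account that hosts *aksshare*:

```console
kubectl create secret generic azure-secret \
    --from-literal=azurestorageaccountname=<storage-account-name> \
    --from-literal=azurestorageaccountkey=<storage-account-key>
```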
Use the `kubectl` command to create the pod.
```console
kubectl apply -f azure-files-pod.yaml
```
-You now have a running pod with an Azure Files share mounted at */mnt/azure*. You can use `kubectl describe pod mypod` to verify the share is mounted successfully. The following condensed example output shows the volume mounted in the container:
-
-```
-Containers:
- mypod:
- Container ID: docker://86d244cfc7c4822401e88f55fd75217d213aa9c3c6a3df169e76e8e25ed28166
- Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- Image ID: docker-pullable://nginx@sha256:9ad0746d8f2ea6df3a17ba89eca40b48c47066dfab55a75e08e2b70fc80d929e
- State: Running
- Started: Sat, 02 Mar 2019 00:05:47 +0000
- Ready: True
- Mounts:
- /mnt/azure from azure (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-z5sd7 (ro)
-[...]
-Volumes:
- azure:
- Type: AzureFile (an Azure File Service mount on the host and bind mount to the pod)
- SecretName: azure-secret
- ShareName: aksshare
- ReadOnly: false
- default-token-z5sd7:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-z5sd7
-[...]
-```
+You now have a running pod with an Azure Files share mounted at */mnt/azure*. You can use `kubectl describe pod mypod` to verify the share is mounted successfully.
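For example, the following commands inspect the pod and confirm the share is visible inside the container (assuming the pod is named `mypod` as in the sample manifest):

```console
kubectl describe pod mypod
kubectl exec mypod -- df -h /mnt/azure
```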
## Mount file share as a persistent volume

### Mount options
-The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.15 and above. The following example sets *0755* on the *PersistentVolume* object:
+> The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.15 and above.
```yaml
apiVersion: v1
spec:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
- azureFile:
- secretName: azure-secret
- secretNamespace: default
- shareName: aksshare
- readOnly: false
- mountOptions:
- - dir_mode=0755
- - file_mode=0755
- - uid=1000
- - gid=1000
- - mfsymlinks
- - nobrl
-```
-
-To update your mount options, create a *azurefile-mount-options-pv.yaml* file with a *PersistentVolume*. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: azurefile
-spec:
- capacity:
- storage: 5Gi
- accessModes:
- - ReadWriteMany
- azureFile:
- secretName: azure-secret
- shareName: aksshare
+ persistentVolumeReclaimPolicy: Retain
+ csi:
+ driver: file.csi.azure.com
readOnly: false
+ volumeHandle: unique-volumeid # make sure this volumeid is unique in the cluster
+ volumeAttributes:
+ resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as agent node
+ shareName: aksshare
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=1000
- - gid=1000
- - mfsymlinks
- - nobrl
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - nosharesock
```

Create an *azurefile-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
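As a sketch, a minimal claim could look like the following, applied inline with a shell heredoc instead of a separate file. It assumes the *PersistentVolume* above is named `azurefile`; if the volume also declares a `storageClassName`, add the same value to the claim so the two can bind.

```console
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  volumeName: azurefile   # must match the PersistentVolume name
  resources:
    requests:
      storage: 5Gi
EOF
```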
```console
kubectl apply -f azure-files-pod.yaml
```
## Next steps
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+For Azure File CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
-For more information about AKS clusters interact with Azure Files, see the [Kubernetes plugin for Azure Files][kubernetes-files].
+For information about how AKS clusters running Kubernetes 1.20 or below interact with Azure Files, see the [Kubernetes plugin for Azure Files][kubernetes-files].
-For storage class parameters, see [Static Provision(bring your own file share)](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share).
+For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
<!-- LINKS - external -->
[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
<!-- LINKS - internal -->
[aks-quickstart-cli]: kubernetes-walkthrough.md
[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-storage]: operator-best-practices-storage.md
[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
+[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/sap-api.md
+
+ Title: Import an SAP API using the Azure portal | Microsoft Docs
+
+description: Learn how to import OData metadata from SAP as an API to Azure API Management
+ Last updated : 01/26/2022
+# Import SAP OData metadata as an API
+
+This article shows how to import an OData service using its metadata description. In this article, [SAP Gateway](https://help.sap.com/viewer/product/SAP_GATEWAY) serves as an example. However, you can apply the approach to any OData-compliant service.
+
+In this article, you'll:
+> [!div class="checklist"]
+> * Convert OData metadata to an OpenAPI specification
+> * Import the OpenAPI specification to API Management
+> * Complete API configuration
+> * Test the API in the Azure portal
+
+## Prerequisites
+
+- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+- An SAP system and service exposed as OData v2 or v4.
+- If your SAP backend uses a self-signed certificate (for test purposes), you may need to disable the verification of the trust chain for SSL. To do so, configure a [backend](backends.md) in your API Management instance:
+ 1. In the Azure portal, under **APIs**, select **Backends** > **+ Add**.
+ 1. Add a **Custom URL** pointing to the SAP backend service.
+ 1. Uncheck **Validate certificate chain** and **Validate certificate name**.
+
+ > [!NOTE]
+ > For production scenarios, use proper certificates for end-to-end SSL verification.
+
+## Convert OData metadata to OpenAPI JSON
+
+1. Retrieve metadata XML from your SAP service. Use one of these methods:
+
+ * Use the SAP Gateway Client (transaction `/IWFND/GW_CLIENT`), or
+   * Make a direct HTTP call to retrieve the XML (see the example request after these steps):
+ `http://<OData server URL>:<port>/<path>/$metadata`.
+
+1. Convert the OData XML to OpenAPI JSON format. Use an OASIS open-source tool for [OData v2](https://github.com/oasis-tcs/odata-openapi/tree/main/tools) or [OData v4](https://github.com/oasis-tcs/odata-openapi/tree/main/lib), depending on your metadata XML.
+
+ The following is an example command to convert OData v2 XML for the test service `epm_ref_apps_prod_man_srv`:
+
+ ```console
+ odata-openapi -p --basePath '/sap/opu/odata/sap/epm_ref_apps_prod_man_srv' \
+ --scheme https --host <your IP address>:<your SSL port> \
+ ./epm_ref_apps_prod_man_srv.xml
+ ```
+ > [!NOTE]
+ > * For test purposes with a single XML file, you can use a [web-based converter](https://aka.ms/ODataOpenAPI) based on the open-source tool.
+ > * With the tool or the web-based converter, specifying the \<IP address>:\<port> of your SAP OData server is optional. Alternatively, add this information later in your generated OpenAPI specification or after importing to API Management.
+
+1. Save the `openapi-spec.json` file locally for import to API Management.
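As an illustration of the direct HTTP call in step 1, the metadata for the sample service used above could be fetched with `curl` (a sketch that assumes basic authentication on the gateway; `-k` skips certificate validation and is only appropriate for test systems with self-signed certificates):

```console
curl -k -u '<sap-user>:<sap-password>' \
    "https://<your IP address>:<your SSL port>/sap/opu/odata/sap/epm_ref_apps_prod_man_srv/\$metadata" \
    -o epm_ref_apps_prod_man_srv.xml
```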
++
+## Import and publish backend API
+
+1. From the side navigation menu, under the **APIs** section, select **APIs**.
+1. Under **Create a new definition**, select **OpenAPI specification**.
+
+    :::image type="content" source="./media/import-api-from-oas/oas-api.png" alt-text="OpenAPI specification":::
+
+1. Click **Select a file**, and select the `openapi-spec.json` file that you saved locally in a previous step.
+
+1. Enter API settings. You can set the values during creation or configure them later by going to the **Settings** tab.
+ * In **API URL suffix**, we recommend using the same URL path as in the original SAP service.
+
+ * For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+
+1. Select **Create**.
+
+> [!NOTE]
+> The API import limitations are documented in [another article](api-management-api-import-restrictions.md).
++
+## Complete API configuration
+
+[Add](add-api-manually.md#add-and-test-an-operation) the following three operations to the API that you imported.
+
+- `GET /$metadata`
+
+ |Operation |Description |Further configuration for operation |
+    |---|---|---|
+ |`GET /$metadata` | Enables API Management to reach the `$metadata` endpoint, which is required for client integration with the OData server.<br/><br/>This required operation isn't included in the OpenAPI specification that you generated and imported. | Add a `200 OK` response. |
+
+ :::image type="content" source="media/sap-api/get-metadata-operation.png" alt-text="Get metadata operation":::
+
+- `HEAD /`
+
+ |Operation |Description |Further configuration for operation |
+    |---|---|---|
+    |`HEAD /` | Enables the client to exchange Cross Site Request Forgery (CSRF) tokens with the SAP server, when required.<br/><br/>SAP also allows CSRF token exchange using the GET verb.<br/><br/> CSRF token exchange isn't covered in this article. See an example API Management [policy snippet](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Get%20X-CSRF%20token%20from%20SAP%20gateway%20using%20send%20request.policy.xml) to broker token exchange. | N/A |
+
+ :::image type="content" source="media/sap-api/head-root-operation.png" alt-text="Operation to fetch tokens":::
+
+- `GET /`
+
+    |Operation |Description |Further configuration for operation |
+    |---|---|---|
+ |`GET /` | Enables policy configuration at service root. | Configure the following inbound [rewrite-uri](api-management-transformation-policies.md#RewriteURL) policy to append a trailing slash to requests that are forwarded to service root:<br/><br> `<rewrite-uri template="/" copy-unmatched-params="true" />` <br/><br/>This policy removes potential ambiguity of requests with or without trailing slashes, which are treated differently by some backends.|
+
+ :::image type="content" source="media/sap-api/get-root-operation.png" alt-text="Get operation for service root":::
+
+Also, configure authentication to your backend using an appropriate method for your environment. For examples, see [API Management authentication policies](api-management-authentication-policies.md).
+
+## Test your API
+
+1. Navigate to your API Management instance.
+1. From the side navigation menu, under the **APIs** section, select **APIs**.
+1. Under **All APIs**, select your imported API.
+1. Select the **Test** tab to access the test console.
+1. Select an operation, enter any required values, and select **Send**.
+
+    For example, test the `GET /$metadata` call to verify connectivity to the SAP backend.
+1. View the response. To troubleshoot, [trace](api-management-howto-api-inspector.md) the call.
+1. When testing is complete, exit the test console.
+
+## Production considerations
+
+* See an [example end-to-end scenario](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) to integrate API Management with an SAP gateway.
+* Control access to an SAP backend using API Management policies. See policy snippets for [SAP principal propagation](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml) and [fetching an X-CSRF token](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Get%20X-CSRF%20token%20from%20SAP%20gateway%20using%20send%20request.policy.xml).
+* For guidance to deploy, manage, and migrate APIs at scale, see:
+ * [Automated API deployments with APIOps](/architecture/example-scenario/devops/automated-api-deployments-apiops)
+ * [CI/CD for API Management using Azure Resource Manager templates](devops-api-development-templates.md).
++
+## Next steps
+> [!div class="nextstepaction"]
+> [Transform and protect a published API](transform-api.md)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
The free certificate comes with the following limitations:
- All the above must be met for successful certificate issuances and renewals # [Subdomain](#tab/subdomain)-- Must have CNAME mapped _directly_ to <app-name>.azurewebsites.net; using services that proxy the CNAME value will block certificate issuance and renewal
+- Must have CNAME mapped _directly_ to \<app-name\>.azurewebsites.net; using services that proxy the CNAME value will block certificate issuance and renewal
- All the above must be met for successful certificate issuance and renewals --
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
Title: 'Quickstart: Create a Python app'
-description: Get started with Azure App Service by deploying your first Python app to a Linux container in App Service.
+ Title: 'Quickstart: Deploy a Python web app to Azure App Service'
+description: Get started with Azure App Service by deploying your first Python app to Azure App Service.
Previously updated : 11/10/2020
Last updated : 01/28/2022
+ ms.devlang: python
-zone_pivot_groups: python-frameworks-01
-adobe-target: true
-adobe-target-activity: DocsExp–393165–A/B–Docs/PythonQuickstart–CLIvsPortal–FY21Q4
-adobe-target-experience: Experience B
-adobe-target-content: ./quickstart-python-portal
+
-# Quickstart: Create a Python app using Azure App Service on Linux
+# Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service
-In this quickstart, you deploy a Python web app to [App Service on Linux](overview.md#app-service-on-linux), Azure's highly scalable, self-patching web hosting service. You use the [Azure CLI](/cli/azure/install-azure-cli) locally from a Windows, Linux, or macOS environment to deploy a sample with either the Flask or Django frameworks. The web app you configure uses a basic App Service tier that incurs a small cost in your Azure subscription.
+In this quickstart, you will deploy a Python web app (Django or Flask) to [Azure App Service](/azure/app-service/overview#app-service-on-linux). Azure App Service is a fully managed web hosting service that supports Python 3.7 and higher apps hosted in a Linux server environment.
-## Set up your initial environment
+To complete this quickstart, you need:
+1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+1. <a href="https://www.python.org/downloads/" target="_blank">Python 3.9 or higher</a> installed locally.
-1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>.
-1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a>, with which you run commands in any shell to provision and configure Azure resources.
+## 1 - Sample application
-Open a terminal window and check your Python version is 3.6 or higher:
+This quickstart can be completed using either Flask or Django. A sample application in each framework is provided to help you follow along with this quickstart. Download or clone the sample application to your local workstation.
-# [Bash](#tab/bash)
+### [Flask](#tab/flask)
-```bash
-python3 --version
+```Console
+git clone https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart
```
-# [PowerShell](#tab/powershell)
+### [Django](#tab/django)
-```cmd
-py -3 --version
-```
-
-# [Cmd](#tab/cmd)
-
-```cmd
-py -3 --version
+```Console
+git clone https://github.com/Azure-Samples/msdocs-python-django-webapp-quickstart
```
-Check that your Azure CLI version is 2.0.80 or higher:
+To run the application locally:
-```azurecli
-az --version
-```
+### [Flask](#tab/flask)
-Then sign in to Azure through the CLI:
+1. Navigate into the application folder:
-```azurecli
-az login
-```
+ ```Console
+ cd msdocs-python-flask-webapp-quickstart
+ ```
-This command opens a browser to gather your credentials. When the command finishes, it shows JSON output containing information about your subscriptions.
+1. Create a virtual environment for the app:
-Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription.
+ [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)]
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+1. Install the dependencies:
-## Clone the sample
+ ```Console
+ pip install -r requirements.txt
+ ```
-Clone the sample repository using the following command and navigate into the sample folder. ([Install git](https://git-scm.com/downloads) if you don't have git already.)
+1. Run the app:
-```terminal
-git clone https://github.com/Azure-Samples/python-docs-hello-world
-```
+ ```Console
+ flask run
+ ```
-```terminal
-git clone https://github.com/Azure-Samples/python-docs-hello-django
-```
+1. Browse to the sample application at `http://localhost:5000` in a web browser.
-The sample contains framework-specific code that Azure App Service recognizes when starting the app. For more information, see [Container startup process](configure-language-python.md#container-startup-process).
+ :::image type="content" source="./media/quickstart-python/run-flask-app-localhost.png" alt-text="Screenshot of the Flask app running locally in a browser":::
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
-## Run the sample
+### [Django](#tab/django)
-1. Navigate into in the *python-docs-hello-world* folder:
+1. Navigate into the application folder:
- ```terminal
- cd python-docs-hello-world
+ ```Console
+ cd msdocs-python-django-webapp-quickstart
```
-1. Create a virtual environment and install dependencies:
+1. Create a virtual environment for the app:
- [!include [virtual environment setup](../../includes/app-service-quickstart-python-venv.md)]
+ [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)]
- If you encounter "[Errno 2] No such file or directory: 'requirements.txt'.", make sure you're in the *python-docs-hello-world* folder.
+1. Install the dependencies:
-1. Run the development server.
+ ```Console
+ pip install -r requirements.txt
+ ```
- ```terminal
- flask run
+1. Run the app:
+
+ ```Console
+ python manage.py runserver
```
-
- By default, the server assumes that the app's entry module is in *app.py*, as used in the sample.
- If you use a different module name, set the `FLASK_APP` environment variable to that name.
+1. Browse to the sample application at `http://localhost:8000` in a web browser.
- If you encounter the error, "Could not locate a Flask application. You did not provide the 'FLASK_APP' environment variable, and a 'wsgi.py' or 'app.py' module was not found in the current directory.", make sure you're in the `python-docs-hello-world` folder that contains the sample.
+ :::image type="content" source="./media/quickstart-python/run-django-app-localhost.png" alt-text="Screenshot of the Django app running locally in a browser":::
-1. Open a web browser and go to the sample app at `http://localhost:5000/`. The app displays the message **Hello, World!**.
+Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
- ![Run a sample Python app locally](./media/quickstart-python/run-hello-world-sample-python-app-in-browser-localhost.png)
-
-1. In your terminal window, press **Ctrl**+**C** to exit the development server.
+
-1. Navigate into the *python-docs-hello-django* folder:
+## 2 - Create a web app in Azure
- ```terminal
- cd python-docs-hello-django
- ```
+To host your application in Azure, you need to create an Azure App Service web app. You can create a web app using the [Azure portal](https://portal.azure.com/), VS Code with the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
-1. Create a virtual environment and install dependencies:
+### [Azure portal](#tab/azure-portal)
- [!include [virtual environment setup](../../includes/app-service-quickstart-python-venv.md)]
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
- If you encounter "[Errno 2] No such file or directory: 'requirements.txt'.", make sure you're in the *python-docs-hello-django* folder.
-
-1. Run the development server.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot showing how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot showing how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot showing the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
- ```terminal
- python manage.py runserver
- ```
+### [VS Code](#tab/vscode-aztools)
+
+To create Azure resources in VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
+
+> [!div class="nextstepaction"]
+> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
+| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240px.png" alt-text="A screenshot showing the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
+| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-3-240px.png" alt-text="A screenshot of dialog box used to enter the name of the new web app in Visual Studio Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-3.png"::: |
+| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
+| [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240px.png" alt-text="A screenshot of the dialog in VS Code used to select the App Service plan for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
-1. Open a web browser and go to the sample app at `http://localhost:8000/`. The app displays the message **Hello, World!**.
+### [Azure CLI](#tab/azure-cli)
- ![Run a sample Python app locally](./media/quickstart-python/run-hello-world-sample-python-app-in-browser-localhost.png)
-
-1. In your terminal window, press **Ctrl**+**C** to exit the development server.
++ Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
-## Deploy the sample
+## 3 - Deploy your application code to Azure
-Deploy the code in your local folder (*python-docs-hello-world*) using the `az webapp up` command:
+Azure App Service supports multiple methods to deploy your application code to Azure, including GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local workstation to Azure.
-```azurecli
-az webapp up --sku B1 --name <app-name>
-```
+### [Deploy using VS Code](#tab/vscode-deploy)
-- If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).-- If the `webapp` command isn't recognized, because that your Azure CLI version is 2.0.80 or higher. If not, [install the latest version](/cli/azure/install-azure-cli).-- Replace `<app_name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier.-- The `--sku B1` argument creates the web app on the Basic pricing tier, which incurs a small hourly cost. Omit this argument to use a faster premium tier.-- You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [az account list-locations](/cli/azure/appservice#az_appservice_list_locations) command.-- If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the *python-docs-hello-world* folder (Flask) or the *python-docs-hello-django* folder (Django) that contains the *requirements.txt* file. (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md) (GitHub).)
+To deploy a web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan and hosting app, configuring logging, then performing ZIP deployment. It then gives the message, "You can launch the app at http://&lt;app-name&gt;.azurewebsites.net", which is the app's URL on Azure.
+> [!div class="nextstepaction"]
+> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-![Example output of the az webapp up command](./media/quickstart-python/az-webapp-up-output.png)
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [VS Code deploy step 1](<./includes/quickstart-python/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/deploy-visual-studio-code-1.png"::: |
+| [!INCLUDE [VS Code deploy step 2](<./includes/quickstart-python/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-2-240px.png" alt-text="A screenshot showing the context menu of an App Service and the deploy to web app menu option." lightbox="./media/quickstart-python/deploy-visual-studio-code-2.png"::: |
+| [!INCLUDE [VS Code deploy step 3](<./includes/quickstart-python/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-3-240px.png" alt-text="A screenshot dialog in VS Code used to choose the app to deploy." lightbox="./media/quickstart-python/deploy-visual-studio-code-3.png"::: |
+| [!INCLUDE [VS Code deploy step 4](<./includes/quickstart-python/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-4-240px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/deploy-visual-studio-code-4.png"::: |
+| [!INCLUDE [VS Code deploy step 5](<./includes/quickstart-python/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-5-240px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/deploy-visual-studio-code-5.png"::: |
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+### [Deploy using Local Git](#tab/local-git-deploy)
-## Browse to the app
+### [Deploy using a ZIP file](#tab/zip-deploy)
-Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the the app to start, so if you see a default app page, wait a minute and refresh the browser.
-The Python sample code is running a Linux container in App Service using a built-in image.
+
-![Run a sample Python app in Azure](./media/quickstart-python/run-hello-world-sample-python-app-in-browser.png)
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
-**Congratulations!** You've deployed your Python app to App Service.
+## 4 - Browse to the app
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the app to start, so if you see a default app page, wait a minute and refresh the browser.
-## Redeploy updates
+The Python sample code is running in a Linux container in App Service using a built-in image.
-In this section, you make a small code change and then redeploy the code to Azure. The code change includes a `print` statement to generate logging output that you work with in the next section.
-Open *app.py* in an editor and update the `hello` function to match the following code.
+**Congratulations!** You have deployed your Python app to App Service.
-```python
-def hello():
- print("Handling request to home page.")
- return "Hello, Azure!"
-```
-Open *hello/views.py* in an editor and update the `hello` function to match the following code.
-
-```python
-def hello(request):
- print("Handling request to home page.")
- return HttpResponse("Hello, Azure!")
-```
-
-Save your changes, then redeploy the app using the `az webapp up` command again:
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
-```azurecli
-az webapp up
-```
+## 5 - Stream logs
-This command uses values that are cached locally in the *.azure/config* file, including the app name, resource group, and App Service plan.
+Azure App Service captures all messages output to the console to assist you in diagnosing issues with your application. The sample apps include `print()` statements to demonstrate this capability.
-Once deployment is complete, switch back to the browser window open to `http://<app-name>.azurewebsites.net`. Refresh the page, which should display the modified message:
+### [Flask](#tab/flask)
-![Run an updated sample Python app in Azure](./media/quickstart-python/run-updated-hello-world-sample-python-app-in-browser.png)
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+### [Django](#tab/django)
-> [!TIP]
-> Visual Studio Code provides powerful extensions for Python and Azure App Service, which simplify the process of deploying Python web apps to App Service. For more information, see [Deploy Python apps to App Service from Visual Studio Code](/azure/python/tutorial-deploy-app-service-on-linux-01).
-## Stream logs
+
-You can access the console logs generated from inside the app and the container in which it runs. Logs include any output generated using `print` statements.
+The contents of the App Service diagnostic logs can be reviewed in the Azure portal, VS Code, or using the Azure CLI.
-To stream logs, run the [az webapp log tail](/cli/azure/webapp/log#az_webapp_log_tail) command:
+### [Azure portal](#tab/azure-portal)
-```azurecli
-az webapp log tail
-```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Stream logs from Azure portal 1](<./includes/quickstart-python/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot showing the location in the Azure portal where to enable streaming logs." lightbox="./media/quickstart-python/stream-logs-azure-portal-1.png"::: |
+| [!INCLUDE [Stream logs from Azure portal 2](<./includes/quickstart-python/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot showing how to view logs in the Azure portal." lightbox="./media/quickstart-python/stream-logs-azure-portal-2.png"::: |
-You can also include the `--logs` parameter with then `az webapp up` command to automatically open the log stream on deployment.
+### [VS Code](#tab/vscode-aztools)
-Refresh the app in the browser to generate console logs, which include messages describing HTTP requests to the app. If no output appears immediately, try again in 30 seconds.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Stream logs from VS Code 1](<./includes/quickstart-python/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-1-240px.png" alt-text="A screenshot showing how to start streaming logs with the VS Code extension." lightbox="./media/quickstart-python/stream-logs-vs-code-1.png"::: |
+| [!INCLUDE [Stream logs from VS Code 2](<./includes/quickstart-python/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-2-240px.png" alt-text="A screenshot showing an example of streaming logs in the VS Code Output window." lightbox="./media/quickstart-python/stream-logs-vs-code-2.png"::: |
-You can also inspect the log files from the browser at `https://<app-name>.scm.azurewebsites.net/api/logs/docker`.
+### [Azure CLI](#tab/azure-cli)
-To stop log streaming at any time, press **Ctrl**+**C** in the terminal.
+First, you need to configure Azure App Service to output logs to the App Service filesystem using the [az webapp log config](/cli/azure/webapp/log#az_webapp_log_config) command.
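For example, one way to turn on filesystem logging for both the web server and the container output (a sketch; the app and resource group names are placeholders):

```azurecli
az webapp log config \
    --web-server-logging filesystem \
    --docker-container-logging filesystem \
    --name <app-name> \
    --resource-group <resource-group-name>
```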
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
-## Manage the Azure app
+To stream logs, use the [az webapp log tail](/cli/azure/webapp/log#az_webapp_log_tail) command.
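For example, substituting your own app and resource group names:

```azurecli
az webapp log tail \
    --name <app-name> \
    --resource-group <resource-group-name>
```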
-Go to the <a href="https://portal.azure.com" target="_blank">Azure portal</a> to manage the app you created. Search for and select **App Services**.
-![Navigate to App Services in the Azure portal](./media/quickstart-python/navigate-to-app-services-in-the-azure-portal.png)
+Refresh the home page in the app or attempt other requests to generate some log messages. The output should look similar to the following.
-Select the name of your Azure app.
+```Output
+Starting Live Log Stream
-![Navigate to your Python app in App Services in the Azure portal](./media/quickstart-python/navigate-to-app-in-app-services-in-the-azure-portal.png)
+2021-12-23T02:15:52.740703322Z Request for index page received
+2021-12-23T02:15:52.740740222Z 169.254.130.1
+2021-12-23T02:15:52.841043070Z 169.254.130.1
+2021-12-23T02:15:52.884541951Z 169.254.130.1
+2021-12-23T02:15:53.043211176Z 169.254.130.1
-Selecting the app opens its **Overview** page, where you can perform basic management tasks like browse, stop, start, restart, and delete.
+2021-12-23T02:16:01.304306845Z Request for hello page received with name=David
+2021-12-23T02:16:01.304335945Z 169.254.130.1
+2021-12-23T02:16:01.398399251Z 169.254.130.1
+2021-12-23T02:16:01.430740060Z 169.254.130.1
+```
-![Manage your Python app in the Overview page in the Azure portal](./media/quickstart-python/manage-an-app-in-app-services-in-the-azure-portal.png)
+
-The App Service menu provides different pages for configuring your app.
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. The resource group has a name like "appsvc_rg_Linux_CentralUS" depending on your location. If you keep the web app running, you will incur some ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
+When you are finished with the sample app, you can remove all of the resources for the app from Azure to ensure you do not incur additional charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+
+### [Azure portal](#tab/azure-portal)
+
+Follow these steps while signed-in to the Azure portal to delete a resource group.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Remove resource group Azure portal 1](<./includes/quickstart-python/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 2](<./includes/quickstart-python/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-2.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 3](<./includes/quickstart-python/remove-resource-group-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-3.png"::: |
-If you don't expect to need these resources in the future, delete the resource group by running the following command:
+### [VS Code](#tab/vscode-aztools)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Remove resource group VS Code 1](<./includes/quickstart-python/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot showing how to delete a resource group in VS Code using the Azure Tools extension." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-1.png"::: |
+| [!INCLUDE [Remove resource group VS Code 2](<./includes/quickstart-python/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-2-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group from VS Code." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Delete the resource group by using the [az group delete](/cli/azure/group#az_group_delete) command.
```azurecli
-az group delete --no-wait
+az group delete \
+ --name msdocs-python-webapp-quickstart \
+ --no-wait
```
-The command uses the resource group name cached in the *.azure/config* file.
The `--no-wait` argument allows the command to return before the operation is complete.

Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).

## Next steps

> [!div class="nextstepaction"]
-> [Tutorial: Python (Django) web app with PostgreSQL](tutorial-python-postgresql-app.md)
+> [Tutorial: Python (Django) web app with PostgreSQL](/azure/app-service/tutorial-python-postgresql-app.md)
> [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](/azure/app-service/configure-language-python.md)
> [!div class="nextstepaction"]
-> [Add user sign-in to a Python web app](../active-directory/develop/quickstart-v2-python-webapp.md)
+> [Add user sign-in to a Python web app](/azure/active-directory/develop/quickstart-v2-python-webapp.md)
> [!div class="nextstepaction"]
-> [Tutorial: Run Python app in custom container](tutorial-custom-container.md)
+> [Tutorial: Run Python app in custom container](/azure/app-service/tutorial-custom-container.md)
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/concept-id-document.md
See how data, including name, birth date, machine-readable zone, and expiration
> [!NOTE]
> Form Recognizer Studio is available with the preview (v3.0) API.
-1. On the Form Recognizer Studio home page, select **Invoices**
+1. On the Form Recognizer Studio home page, select **Identity documents**
1. You can analyze the sample identity document or select the **+ Add** button to upload your own sample.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
A .NET isolated function project is basically a .NET console app project that ta
+ [local.settings.json](functions-develop-local.md#local-settings-file) file.
+ C# project file (.csproj) that defines the project and dependencies.
+ Program.cs file that's the entry point for the app.
+
+> [!NOTE]
+> To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting.
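For example, one way to set that application setting with the Azure CLI (a sketch; the app and resource group names are placeholders):

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings FUNCTIONS_WORKER_RUNTIME=dotnet-isolated
```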
## Package references
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-azure-sql.md
The Azure SQL bindings for Azure Functions are open-source and available on the
- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
-- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
+- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
+- [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md)
azure-functions Functions Identity Access Azure Sql With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-identity-access-azure-sql-with-managed-identity.md
+
+ Title: Connect a function app to Azure SQL with managed identity and SQL bindings
++
+description: Learn how to connect Azure SQL bindings through managed identity.
+ Last updated : 1/28/2022
+#Customer intent: As a function developer, I want to learn how to use managed identities so that I can avoid having to handle connection strings in my application settings.
++
+# Tutorial: Connect a function app to Azure SQL with managed identity and SQL bindings
+
+Azure Functions provides a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview.md), which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you'll add managed identity to an Azure Function that utilizes [Azure SQL bindings](/azure/azure-functions/functions-bindings-azure-sql). A sample Azure Function project with SQL bindings is available in the [ToDo backend example](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/).
++
+When you're finished with this tutorial, your Azure Function will connect to Azure SQL database without the need of username and password.
+
+An overview of the steps you'll take:
+
+> [!div class="checklist"]
+> * [Enable Azure AD authentication to the SQL database](#grant-database-access-to-azure-ad-user)
+> * [Enable Azure Function managed identity](#enable-system-assigned-managed-identity-on-azure-function)
+> * [Grant SQL Database access to the managed identity](#grant-sql-database-access-to-the-managed-identity)
+> * [Configure Azure Function SQL connection string](#configure-azure-function-sql-connection-string)
++
+## Grant database access to Azure AD user
+
+First enable Azure AD authentication to SQL database by assigning an Azure AD user as the Active Directory admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
+
+Enabling Azure AD authentication can be completed via the Azure portal, PowerShell, or Azure CLI. Directions for Azure CLI are below and information completing this via Azure portal and PowerShell is available in the [Azure SQL documentation on Azure AD authentication](/azure/azure-sql/database/authentication-aad-configure).
+
+1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
+
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) command and replace *\<user-principal-name>*. The result is saved to a variable.
+
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
+ ```
+
+ > [!TIP]
+ > To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`.
+ >
+
+1. Add this Azure AD user as an Active Directory admin using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix).
+
+ ```azurecli-interactive
+ az sql server ad-admin create --resource-group myResourceGroup --server-name <server-name> --display-name ADMIN --object-id $azureaduser
+ ```
+
+For more information on adding an Active Directory admin, see [Provision an Azure Active Directory administrator for your server](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-database).
+++
+## Enable system-assigned managed identity on Azure Function
+
+In this step we'll add a system-assigned identity to the Azure Function. In later steps, this identity will be given access to the SQL database.
+
+To enable system-assigned managed identity in the Azure portal:
+
+1. Create an Azure Function in the portal as you normally would. Navigate to it in the portal.
+1. Scroll down to the Settings group in the left navigation.
+1. Select Identity.
+1. Within the System assigned tab, switch Status to On. Click Save.
+
+![Turn on system assigned identity for Function app](./media/functions-identity-access-sql-with-managed-identity/function-system-identity.png)
++
+For information on enabling system-assigned managed identity through the Azure CLI or PowerShell, see [using managed identities with Azure Functions](/azure/app-service/overview-managed-identity?toc=%2Fazure%2Fazure-functions%2Ftoc.json&tabs=dotnet#add-a-system-assigned-identity).
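As a sketch, the same identity can be enabled from the Azure CLI (the app and resource group names are placeholders):

```azurecli
az functionapp identity assign \
    --name <function-app-name> \
    --resource-group <resource-group-name>
```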
++
+## Grant SQL database access to the managed identity
+
+In this step we'll connect to the SQL database with an Azure AD user account and grant the managed identity access to the database.
+
+1. Open your preferred SQL tool and login with an Azure AD user account (such as the Azure AD user we assigned as administrator). This can be accomplished in Cloud Shell with the SQLCMD command.
+
+ ```bash
+ sqlcmd -S <server-name>.database.windows.net -d <db-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
+ ```
+
+1. In the SQL prompt for the database you want, run the following commands to grant permissions to your function. For example,
+
+ ```sql
+ CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
+ ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
+ ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
+ GO
+ ```
+
+ *\<identity-name>* is the name of the managed identity in Azure AD. If the identity is system-assigned, the name is always the same as the name of your Function app.
++
+## Configure Azure Function SQL connection string
+
+In the final step we'll configure the Azure Function SQL connection string to use Azure AD managed identity authentication.
+
+The connection string setting name is identified in our Functions code as the binding attribute "ConnectionStringSetting", as seen in the SQL input binding [attributes and annotations](/azure/azure-functions/functions-bindings-azure-sql-input?tabs=csharp#attributes-and-annotations).
+
+In the application settings of our Function App the SQL connection string setting should be updated to follow this format:
+
+`Server=demo.database.windows.net; Authentication=Active Directory Managed Identity; Database=testdb`
+
+*testdb* is the name of the database we're connecting to and *demo.database.windows.net* is the name of the server we're connecting to.
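If you prefer to set the value from the command line instead of the portal, a sketch with the Azure CLI follows; `SqlConnectionString` is a placeholder for whichever setting name your `ConnectionStringSetting` attribute references:

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings "SqlConnectionString=Server=demo.database.windows.net; Authentication=Active Directory Managed Identity; Database=testdb"
```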
+
+## Next steps
+
+- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
+- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-endpoint-overview.md
Create a new rule or open an existing rule. In the **Resources** tab, click on t
> [!NOTE]
> The data collection endpoint should be created in the **same region** where your virtual machines exist.
-1. Create data collection endpoint(s) using these [DCE REST APIs](/rest/api/monitor/datacollectionendpoints).
+1. Create data collection endpoint(s) using these [DCE REST APIs](/cli/azure/monitor/data-collection/endpoint). An example CLI command appears after these steps.
2. Create association(s) to link the endpoint(s) to your target machines or resources, using these [DCRA REST APIs](/rest/api/monitor/datacollectionruleassociations/create#examples).
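As an illustration of step 1, a data collection endpoint can also be created with the Azure CLI instead of the REST API (a sketch; it assumes the `monitor-control-service` CLI extension is installed and uses placeholder names):

```azurecli
az monitor data-collection endpoint create \
    --name <dce-name> \
    --resource-group <resource-group-name> \
    --location <region-of-your-vms> \
    --public-network-access "Enabled"
```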
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-preview.md
This article provides a guide on setup and usage of the new features in preview for **AzAcSnap v5.1**. These new features can be used with Azure NetApp Files, Azure BareMetal, and now Azure Managed Disk. This guide should be read along with the documentation for the generally available version of AzAcSnap at [aka.ms/azacsnap](https://aka.ms/azacsnap).
-The 4 new preview features provided with AzAcSnap v5.1 are:
+The four new preview features provided with AzAcSnap v5.1 are:
- Oracle Database support-- Backint Co-existence
+- Backint coexistence
- Azure Managed Disk - RunBefore and RunAfter capability
New database platforms and operating systems supported with this preview release
> Support for Oracle is a Preview feature. > This section's content supplements the [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) website page.
-This section explains how to enable communication with storage. Ensure the storage back-end you are using is correctly selected.
+This section explains how to enable communication with storage. Ensure the storage back-end you're using is correctly selected.
# [Oracle](#tab/oracle)
The following example commands set up a user (AZACSNAP) in the Oracle database,
configuration unnecessary. This feature can be leveraged to use the Oracle TNS (Transparent Network Substrate) administrative file to hide the details of the database
- connection string and instead use an alias. If the connection information changes, it is a matter of changing the `tnsnames.ora` file instead
+ connection string and instead use an alias. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead
of potentially many datasource definitions. Set up the Oracle Wallet (change the password) This example uses the mkstore command from the Linux shell to set up the Oracle wallet. These commands
The following example commands set up a user (AZACSNAP) in the Oracle database,
> [!IMPORTANT] > Be sure to create a unique user to generate the Oracle Wallet to avoid any impact on the running database.
- 1. Run the following commands on the Oracle Database Server
+ 1. Run the following commands on the Oracle Database Server.
1. Get the Oracle environment variables to be used in setup. Run the following commands as the `root` user on the Oracle Database Server.
The following example commands set up a user (AZACSNAP) in the Oracle database,
/u01/app/oracle/product/19.0.0/dbhome_1 ```
- 1. Create the Linux user to generate the Oracle Wallet and associated `*.ora` files using the output from the previous step
+ 1. Create the Linux user to generate the Oracle Wallet and associated `*.ora` files using the output from the previous step.
> [!NOTE]
- > In these examples we are using the `bash` shell. If you are using a different shell (for example, csh), then ensure environment variables have been set correctly.
+ > In these examples we are using the `bash` shell. If you're using a different shell (for example, csh), then ensure environment variables have been set correctly.
```bash useradd -m azacsnap
When adding an Oracle database to the configuration, the following values are re
-## Backint co-existence
+## Backint coexistence
> [!NOTE]
-> Support for co-existence with SAP HANA's Backint interface is a Preview feature.
+> Support for coexistence with SAP HANA's Backint interface is a Preview feature.
> This section's content supplements [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) website page. [Azure Backup](/azure/backup/) service provides an alternate backup tool for SAP HANA, where database and log backups are streamed into the
the Azure Backup site on how to [Run SAP HANA native client backup to local disk
The process described in the Azure Backup documentation has been implemented with AzAcSnap to automatically do the following steps:
-1. force a log backup flush to backint
-1. wait for running backups to complete
-1. disable the backint-based backup
-1. put SAP HANA into a consistent state for backup
-1. take a storage snapshot-based backup
-1. release SAP HANA
+1. force a log backup flush to backint.
+1. wait for running backups to complete.
+1. disable the backint-based backup.
+1. put SAP HANA into a consistent state for backup.
+1. take a storage snapshot-based backup.
+1. release SAP HANA.
1. re-enable the backint-based backup. By default this option is disabled, but it can be enabled by running `azacsnap -c configure --configuration edit` and answering 'y' (yes) to the question "Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". This will set the autoDisableEnableBackint value to true in the
-JSON configuration file (for example, `azacsnap.json`). It is also possible to change this value by editing the configuration file directly.
+JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing the configuration file directly.
Refer to this partial snippet of the configuration file to see where this value is placed and the correct format:
Refer to this partial snippet of the configuration file to see where this value
> This section's content supplements [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) website page. Microsoft provides a number of storage options for deploying databases such as SAP HANA. Many of these are detailed on the
-[Azure Storage types for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) web page. Additionally there is a
+[Azure Storage types for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) web page. Additionally there's a
[Cost conscious solution with Azure premium storage](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#cost-conscious-solution-with-azure-premium-storage). AzAcSnap is able to take application consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the setup
Disks in the mounted Logical Volume(s).
> [!IMPORTANT] > The Linux system must have `xfs_freeze` available to block disk I/O.
+> [!CAUTION]
+> Take extra care to configure AzAcSnap with the correct mountpoints (filesystems) because `xfs_freeze` blocks I/O to the device specified by the Azure Managed Disk
+> mount-point. This could inadvertently block a running application until `azacsnap` finishes running.
+ Architecture at a high level:
-1. Azure Managed Disks attached to the VM using the Azure portal
+1. Azure Managed Disks attached to the VM using the Azure portal.
1. Logical Volume is created from these Managed Disks. 1. Logical Volume mounted to a Linux directory. 1. Service Principal should be created in the same way as for Azure NetApp Files in [AzAcSnap installation](azacsnap-installation.md?tabs=azure-netapp-files%2Csap-hana#enable-communication-with-storage).
-1. Install and Configure AzAcSnap
+1. Install and Configure AzAcSnap.
> [!NOTE] > The configurator has a new option to define the mountpoint for the Logical Volume. This parameter gets passed to `xfs_freeze` to block the I/O (this > happens after the database is put into backup mode). After the I/O cache has been flushed (dependent on Linux kernel parameter `fs.xfs.xfssyncd_centisecs`). 1. Install and Configure `xfs_freeze` to be run as a non-privileged user:
- 1. Create an executable file called $HOME/bin/xfs_freeze with the following content
+ 1. Create an executable file called $HOME/bin/xfs_freeze with the following content.
```bash #!/bin/sh
Architecture at a high level:
1. Test the azacsnap user can freeze and unfreeze I/O to the target mountpoint by running the following as the azacsnap user. > [!NOTE]
- > In this example we run each command twice to show it worked the first time as there is no command to confirm if `xfs_freeze` has frozen I/O.
+ > In this example we run each command twice to show it worked the first time as there's no command to confirm if `xfs_freeze` has frozen I/O.
Freeze I/O.
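
As an illustrative sketch only (the mount point is a placeholder for your Logical Volume mount point), the freeze and unfreeze test run as the azacsnap user could look like this:

```bash
# Freeze I/O twice; the second call failing confirms the first call worked,
# because xfs_freeze reports an error if the filesystem is already frozen.
xfs_freeze -f /mnt/<mountpoint>
xfs_freeze -f /mnt/<mountpoint>

# Unfreeze I/O twice for the same reason.
xfs_freeze -u /mnt/<mountpoint>
xfs_freeze -u /mnt/<mountpoint>
```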
Architecture at a high level:
### Example configuration file
-Here is an example config file, note the hierarchy for the dataVolume, mountpoint, azureManagedDisks:
+Here's an example config file, note the hierarchy for the dataVolume, mountpoint, azureManagedDisks:
```output {
Although `azacsnap` is currently missing the `-c restore` option for Azure Manag
1. Connect the disks to the VM via the Azure portal. 1. Log in to the VM as the `root` user and scan for the newly attached disks using dmesg or pvscan:
- 1. Using `dmesg`
+ 1. Using `dmesg`:
```bash dmesg | tail -n30
Although `azacsnap` is currently missing the `-c restore` option for Azure Manag
[2510054.627310] sd 5:0:0:3: [sdf] Attached SCSI disk ```
- 1. Using `pvscan`
+ 1. Using `pvscan`:
```bash saphana:~ # pvscan
Although `azacsnap` is currently missing the `-c restore` option for Azure Manag
1 logical volume(s) in volume group "hanadata_adhoc" now active ```
-1. Mount the logical volume as the `root` user.
+1. Mount the logical volume as the `root` user:
> [!IMPORTANT] > Use the `mount -o rw,nouuid` options, otherwise volume mounting will fail due to duplicate UUIDs on the VM.
Although `azacsnap` is currently missing the `-c restore` option for Azure Manag
mount -o rw,nouuid /dev/hanadata_adhoc/hanadata /mnt/hanadata_adhoc ```
-1. Then access the data
+1. Then access the data:
```bash ls /mnt/hanadata_adhoc/
The following list of environment variables is generated by `azacsnap` and passe
- `$azPrefix` = the --prefix value. - `$azRetention` = the --retention value. - `$azSid` = the --dbsid value.-- `$azSnapshotName` = the snapshot name generated by azacsnap
+- `$azSnapshotName` = the snapshot name generated by azacsnap.
> [!NOTE]
-> There is only a value for `$azSnapshotName` in the `--runafter` option.
+> There's only a value for `$azSnapshotName` in the `--runafter` option.
### Example usage
The following crontab entry is a single line and runs `azacsnap` at five past mi
This example shell script has a special stanza at the end to prevent AzAcSnap from killing the external command due to the timeout described earlier. This allows for a long running command, such as uploading large files with azcopy, to be run without being prematurely stopped.
-The snapshots need to be mounted on the system doing the copy, with at a minimum read-only privileges. The base location of the mount point for the snapshots should
+The snapshots need to be mounted on the system doing the copy, with, at a minimum, read-only privilege. The base location of the mount point for the snapshots should
be provided to the `sourceDir` variable in the script. ```bash
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-display-health-status.md
Follow the following steps to create [alert rules in Azure Monitor](../azure-mon
2. From the Alerts window, select the **Create** dropdown and select **Create new alert rule**. 3. From the Scope tab of the Create an Alert Rule page, select **Select scope**. The **Select a Resource** page appears. 4. From the Resource tab, find the **Volumes** resource type.
-5. From the Condition tab, select **ΓÇ£Add condition**ΓÇ¥. From there, find a signal called ΓÇ£**is volume replication healthy**ΓÇ¥.
-6. There you'll see ΓÇ£**Condition of the relationship, 1 or 0**ΓÇ¥ and the **Configure Signal Logic** window is displayed.
+5. From the Condition tab, select **Add condition**. From there, find a signal called **is volume replication healthy**.
+6. There you'll see **Condition of the relationship, 1 or 0** and the **Configure Signal Logic** window is displayed.
7. To check if the replication is _unhealthy_:
- 1. **Operator** to `Less than or equal to`.
- 1. Set **Aggregation type** to `Average`.
- 1. Set **Threshold** value to `0`.
- 1. Set **Unit** to `Count`.
-8. To check if the replication is healthy:
- 1. Set **Operator** to `Greater than or equal to`.
- 1. Set **Aggregation** type to `Average`.
- 1. Set **Threshold** value to `1`.
- 1. Set **Unit** to `Count`.
+ * Set **Operator** to `Less than or equal to`.
+ * Set **Aggregation type** to `Average`.
+ * Set **Threshold** value to `0`.
+ * Set **Unit** to `Count`.
+8. To check if the replication is _healthy_:
+ * Set **Operator** to `Greater than or equal to`.
+ * Set **Aggregation** type to `Average`.
+ * Set **Threshold** value to `1`.
+ * Set **Unit** to `Count`.
9. Select **Review + create**. The alert rule is ready for use. :::image type="content" source="../media/azure-netapp-files/alert-config-signal-logic.png" alt-text="Screenshot of the Azure interface that shows the configure signal logic step with a backdrop of the Create alert rule page." lightbox="../media/azure-netapp-files/alert-config-signal-logic.png":::
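
If you'd rather script the alert, a rough Azure CLI equivalent is sketched below. The metric name is shown as a placeholder because only its display name (*is volume replication healthy*) appears in the portal, and the scope is your volume's resource ID; adjust both to your environment.

```azurecli
# Alert when replication is unhealthy (metric value <= 0); placeholders throughout.
az monitor metrics alert create \
    --name crr-replication-unhealthy \
    --resource-group myResourceGroup \
    --scopes <volume-resource-id> \
    --condition "avg <replication-healthy-metric-name> <= 0"
```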
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/troubleshooting.md
vm-linux Previously updated : 07/24/2018 Last updated : 01/28/2022
Permissions are set as regular users without sudo access. Any installation outsi
### Supported entry point limitations
-Cloud Shell entry points beside the Azure portal, such as Visual Studio Code & Windows Terminal, do not support the use of commands that modify UX components in Cloud Shell, such as `Code`.
+Cloud Shell entry points beside the Azure portal, such as Visual Studio Code & Windows Terminal, do not support various Cloud Shell functionalities:
+- Use of commands that modify UX components in Cloud Shell, such as `Code`
+- Fetching non-ARM access tokens
## Bash limitations
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/environment.md
Previously updated : 11/02/2021 Last updated : 12/05/2021 # Azure Container Apps Preview environments
-Individual container apps are deployed to a single Container Apps environment, which acts as a secure boundary around groups of container apps. Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
+Individual container apps are deployed to a single Container Apps environment, which acts as a secure boundary around groups of container apps. Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace. You may provide an [existing virtual network](vnet-custom.md) when you create an environment.
:::image type="content" source="media/environments/azure-container-apps-environments.png" alt-text="Azure Container Apps environments.":::
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/get-started-existing-container-image.md
This article demonstrates how to deploy an existing container to Azure Container
> [!NOTE] > Private registry authorization is supported via registry username and password.
+## Prerequisites
+
+- Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+ [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] To create the environment, run the following command:
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/get-started.md
Azure Container Apps Preview enables you to run microservices and containerized
In this quickstart, you create a secure Container Apps environment and deploy your first container app.
+## Prerequisites
+
+- Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+ [!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)] To create the environment, run the following command:
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/overview.md
With Azure Container Apps, you can:
- [**Use the Azure CLI extension or ARM templates**](get-started.md) to manage your applications.
+- [**Provide an existing virtual network**](vnet-custom.md) when creating an environment for your container apps.
+ - [**Securely manage secrets**](secure-app.md) directly in your application. - [**View application logs**](monitor.md) using Azure Log Analytics.
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/quickstart-portal.md
An Azure account with an active subscription is required. If you don't already h
## Setup
-Begin by signing in to the [Azure portal](https://portal.azure.com).
+<!-- Create -->
-## Create a container app
+7. Select the **Create** button at the bottom of the *Create Container App Environment* page.
-To create your container app, start at the Azure portal home page.
-
-1. Search for **Container Apps** in the top search bar.
-1. Select **Container Apps** in the search results.
-1. Select the **Create** button.
-
-### Basics tab
-
-In the *Basics* tab, do the following actions.
-
-#### Enter project details
-
-| Setting | Action |
-|||
-| Subscription | Select your Azure subscription. |
-| Resource group | Select **Create new** and enter **my-container-apps**. |
-| Container app name | Enter **my-container-app**. |
-
-#### Create an environment
-
-1. In the *Create Container App environment* field, select **Create new**.
-1. In the *Create Container App Environment* page on the *Basics* tab, enter the following values:
-
- | Setting | Value |
- |||
- | Environment name | Enter **my-environment**. |
- | Region | Select **Canada Central**. |
-
-1. Select the **Monitoring** tab to create a Log Analytics workspace.
-1. Select **Create new** in the *Log Analytics workspace* field.
-1. Enter **my-container-apps-logs** in the *Name* field of the *Create new Log Analytics Workspace* dialog.
-
- The *Location* field is pre-filled with *Canada Central* for you.
-
-1. Select **OK**.
-1. Select the **Create** button at the bottom of the *Create Container App Environment* page.
-
-### Deploy the container app
-
-1. Select the **Review and create** button at the bottom of the page.
-
- Next, the settings in the Container App are verified. If no errors are found, the *Create* button is enabled.
-
- If there are errors, any tab containing errors is marked with a red dot. Navigate to the appropriate tab. Fields containing an error will be highlighted in red. Once all errors are fixed, select **Review and create** again.
-
-1. Select **Create**.
-
- A page with the message *Deployment is in progress* is displayed. Once the deployment is successfully completed, you'll see the message: *Your deployment is complete*.
+<!-- Deploy the container app -->
### Verify deployment
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/vnet-custom.md
+
+ Title: Provide a virtual network to an Azure Container Apps Preview environment
+description: Learn how to provide a VNET to an Azure Container Apps environment.
++++ Last updated : 1/28/2021+
+zone_pivot_groups: azure-cli-or-portal
++
+# Provide a virtual network to an Azure Container Apps (Preview) environment
+
+As you create an Azure Container Apps [environment](environment.md), a virtual network (VNET) is created for you, or you can provide your own. Network addresses are assigned from a subnet range you define as the environment is created.
+
+- You control the subnet range used by the Container Apps environment.
+- Once the environment is created, the subnet range is immutable.
+- A single load balancer and single Kubernetes service are associated with each container apps environment.
+- Each [revision pod](revisions.md) is assigned an IP address in the subnet.
+- You can restrict inbound requests to the environment exclusively to the VNET by deploying the environment as internal.
+
+> [!IMPORTANT]
+> In order to ensure the environment deployment within your custom VNET is successful, configure your VNET with an "allow-all" configuration by default. The full list of traffic dependencies required to configure the VNET as "deny-all" is not yet available. Refer to the [custom VNET security sample](https://aka.ms/azurecontainerapps/customvnet) for additional details.
++
+## Subnet types
+
+As a Container Apps environment is created, you provide resource IDs for two different subnets. Both subnets must be defined in the same virtual network.
+
+- **App subnet**: Subnet for user app containers. Subnet that contains IP ranges mapped to applications deployed as containers.
+- **Control plane subnet**: Subnet for [control plane infrastructure](/azure/azure-resource-manager/management/control-plane-and-data-plane) components and user app containers.
+
+If the [platformReservedCidr](#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
+
+## Accessibility level
+
+You can deploy your Container Apps environment with an internet-accessible endpoint or with an IP address in your VNET. The accessibility level determines the type of load balancer used with your Container Apps instance.
+
+### External
+
+Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public facing IP address.
+
+### Internal
+
+When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses.
+
+To create an internal only environment, provide the `--internal-only` parameter to the `az containerapp env create` command.
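+
+For instance, reusing the variable names from the CLI example later in this article, an internal-only environment could be created with a command like this sketch (identical to the full example below, with `--internal-only` appended):
+
+```azurecli
+az containerapp env create \
+  --name $CONTAINERAPPS_ENVIRONMENT \
+  --resource-group $RESOURCE_GROUP \
+  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
+  --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
+  --location "$LOCATION" \
+  --app-subnet-resource-id $APP_SUBNET \
+  --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET \
+  --internal-only
+```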
+
+## Example
+
+The following example shows you how to create a Container Apps environment in an existing virtual network.
++
+<!-- Create -->
+
+7. Select the **Networking** tab to create a VNET.
+8. Select **Yes** next to *Use your own virtual network*.
+9. Next to the *Virtual network* box, select the **Create new** link.
+10. Enter **my-custom-vnet** in the name box.
+11. Select the **OK** button.
+12. Next to the *Control plane subnet* box, select the **Create new** link and enter the following values:
+
+ | Setting | Value |
+ |||
+ | Subnet name | Enter **my-control-plane-vnet**. |
+ | Virtual Network Address Block | Keep the default values. |
+ | Subnet Address Block | Keep the default values. |
+
+13. Select the **OK** button.
+14. Next to the *Apps subnet* box, select the **Create new** link and enter the following values:
+
+ | Setting | Value |
+ |||
+ | Subnet name | Enter **my-apps-vnet**. |
+ | Virtual Network Address Block | Keep the default values. |
+ | Subnet Address Block | Keep the default values. |
+
+15. Under *Virtual IP*, select **External**.
+16. Select **Create**.
+
+<!-- Deploy -->
+++
+## Prerequisites
+
+- Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
++
+Next, declare a variable to hold the VNET name.
+
+# [Bash](#tab/bash)
+
+```bash
+VNET_NAME="my-custom-vnet"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$VNET_NAME="my-custom-vnet"
+```
+++
+Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container apps instance.
+
+> [!NOTE]
+> You can use an existing virtual network, but two empty subnets are required to use with Container Apps.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az network vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name $VNET_NAME \
+ --location $LOCATION \
+ --address-prefix 10.0.0.0/16
+```
+
+```azurecli
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name control-plane \
+ --address-prefixes 10.0.0.0/21
+```
+
+```azurecli
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name applications \
+ --address-prefixes 10.0.8.0/21
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az network vnet create `
+ --resource-group $RESOURCE_GROUP `
+ --name $VNET_NAME `
+ --location $LOCATION `
+ --address-prefix 10.0.0.0/16
+```
+
+```powershell
+az network vnet subnet create `
+ --resource-group $RESOURCE_GROUP `
+ --vnet-name $VNET_NAME `
+ --name control-plane `
+ --address-prefixes 10.0.0.0/21
+```
+
+```powershell
+az network vnet subnet create `
+ --resource-group $RESOURCE_GROUP `
+ --vnet-name $VNET_NAME `
+ --name applications `
+ --address-prefixes 10.0.8.0/21
+```
+++
+With the VNET established, you can now query for the VNET, control plane, and app subnet IDs.
+
+# [Bash](#tab/bash)
+
+```bash
+VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+```bash
+CONTROL_PLANE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+```bash
+APP_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --name applications --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+```
+
+```powershell
+$CONTROL_PLANE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name control-plane --query "id" -o tsv)
+```
+
+```powershell
+$APP_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name applications --query "id" -o tsv)
+```
+++
+Finally, create the Container Apps environment with the VNET and subnets.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
+ --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
+ --location "$LOCATION" \
+ --app-subnet-resource-id $APP_SUBNET \
+ --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az containerapp env create `
+ --name $CONTAINERAPPS_ENVIRONMENT `
+ --resource-group $RESOURCE_GROUP `
+ --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
+ --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
+ --location "$LOCATION" `
+ --app-subnet-resource-id $APP_SUBNET `
+ --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
+```
+++
+The following table describes the parameters used for `containerapp env create`.
+
+| Parameter | Description |
+|||
+| `name` | Name of the container apps environment. |
+| `resource-group` | Name of the resource group. |
+| `logs-workspace-id` | The ID of the Log Analytics workspace. |
+| `logs-workspace-key` | The Log Analytics client secret. |
+| `location` | The Azure location where the environment is to deploy. |
+| `app-subnet-resource-id` | The resource ID of a subnet where containers are injected into the container app. This subnet must be in the same VNET as the subnet defined in `--control-plane-subnet-resource-id`. |
+| `controlplane-subnet-resource-id` | The resource ID of a subnet for control plane infrastructure components. This subnet must be in the same VNET as the subnet defined in `--app-subnet-resource-id`. |
+| `internal-only` | Optional parameter that scopes the environment to IP addresses only available in the custom VNET. |
+
+With your environment created in your custom virtual network, you can create container apps in the environment by using the `az containerapp create` command.
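+
+As a hedged sketch (the sample image below is an assumption; substitute your own container image and app name):
+
+```azurecli
+az containerapp create \
+  --name my-container-app \
+  --resource-group $RESOURCE_GROUP \
+  --environment $CONTAINERAPPS_ENVIRONMENT \
+  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+  --target-port 80 \
+  --ingress external
+```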
+
+### Optional configuration
+
+You have the option of deploying a private DNS and defining custom networking IP ranges for your Container Apps environment.
+
+#### Deploy with a private DNS
+
+If you want to deploy your container app with a private DNS, run the following commands.
+
+First, extract identifiable information from the environment.
+
+# [Bash](#tab/bash)
+
+```bash
+ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query defaultDomain --out json | tr -d '"'`
+```
+
+```bash
+ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query staticIp --out json | tr -d '"'`
+```
+
+```bash
+VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query defaultDomain -o tsv)
+```
+
+```powershell
+$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query staticIp -o tsv)
+```
+
+```powershell
+$VNET_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query id -o tsv)
+```
+++
+Next, set up the private DNS.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az network private-dns zone create \
+ --resource-group $RESOURCE_GROUP \
+ --name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+```azurecli
+az network private-dns link vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name $VNET_NAME \
+ --virtual-network $VNET_ID \
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```
+
+```azurecli
+az network private-dns record-set a add-record \
+ --resource-group $RESOURCE_GROUP \
+ --record-set-name "*" \
+ --ipv4-address $ENVIRONMENT_STATIC_IP \
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az network private-dns zone create `
+ --resource-group $RESOURCE_GROUP `
+ --name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+
+```powershell
+az network private-dns link vnet create `
+ --resource-group $RESOURCE_GROUP `
+ --name $VNET_NAME `
+ --virtual-network $VNET_ID `
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```
+
+```powershell
+az network private-dns record-set a add-record `
+ --resource-group $RESOURCE_GROUP `
+ --record-set-name "*" `
+ --ipv4-address $ENVIRONMENT_STATIC_IP `
+ --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```
+++
+#### Networking parameters
+
+There are three optional networking parameters you can choose to define when calling `containerapp env create`. You must either provide values for all three of these properties, or none of them. If they aren't provided, the CLI generates the values for you.
+
+| Parameter | Description |
+|||
+| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (this is the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
+
+- The `platform-reserved-cidr` and `docker-bridge-cidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
+
+- If these properties aren't provided, the CLI autogenerates the range values based on the address range of the VNET to avoid range conflicts.
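+
+For illustration only, here's a sketch that passes all three values explicitly; the CIDR ranges are arbitrary examples chosen not to overlap with the subnets created earlier in this article.
+
+```azurecli
+az containerapp env create \
+  --name $CONTAINERAPPS_ENVIRONMENT \
+  --resource-group $RESOURCE_GROUP \
+  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
+  --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
+  --location "$LOCATION" \
+  --app-subnet-resource-id $APP_SUBNET \
+  --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET \
+  --platform-reserved-cidr 10.1.0.0/16 \
+  --platform-reserved-dns-ip 10.1.0.2 \
+  --docker-bridge-cidr 10.2.0.1/16
+```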
++
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group.
++
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete \
+ --name $RESOURCE_GROUP
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az group delete `
+ --name $RESOURCE_GROUP
+```
++++
+## Restrictions
+
+Subnet address ranges can't overlap with the following reserved ranges:
+
+- 169.254.0.0/16
+- 172.30.0.0/16
+- 172.31.0.0/16
+- 192.0.2.0/24
+
+Additionally, subnets must have a size between /21 and /12.
+
+## Additional resources
+
+- Refer to [What is Azure Private Endpoint](/azure/private-link/private-endpoint-overview) for more details on configuring your private endpoint.
+
+- To set up DNS name resolution for internal services, you must [set up your own DNS server](/azure/dns/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Managing autoscaling behavior](scale-app.md)
container-registry Allow Access Trusted Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/allow-access-trusted-services.md
Where indicated, access by the trusted service requires additional configuration
|Trusted service |Supported usage scenarios | Configure managed identity with RBAC role ||||
-| Azure Container Instances | [Authenticate with Azure Container Registry from Azure Container Instances](container-registry-auth-aci.md) | Yes, either system-assigned or user-assigned identity |
+| Azure Container Instances | [Deploy to Azure Container Instances from Azure Container Registry using a managed identity](../container-instances/using-azure-container-registry-mi.md) | Yes, either system-assigned or user-assigned identity |
| Microsoft Defender for Cloud | Vulnerability scanning by [Microsoft Defender for container registries](scan-images-defender.md) | No | |ACR Tasks | [Access the parent registry or a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes | |Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-container.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes |
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
There's no impact on the performance of your transactional workloads due to anal
## Auto-Sync
-Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In cases of shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com).
+Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In cases of a shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more about how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB Team](mailto:cosmosdbsynapselink@microsoft.com).
At the end of each execution of the automatic sync process, your transactional data will be immediately available for Azure Synapse Analytics runtimes:
The following constraints are applicable on the operational data in Azure Cosmos
* Sample scenarios: * If your document's first level has 2000 properties, only the first 1000 will be represented. * If your documents have five levels with 200 properties in each one, all properties will be represented.
- * If your documents have ten levels with 400 properties in each one, only the two first levels will be fully represented in analytical store. Half of the third level will also be represented.
+ * If your documents have 10 levels with 400 properties in each one, only the two first levels will be fully represented in analytical store. Half of the third level will also be represented.
* The hypothetical document below contains four properties and three levels. * The levels are `root`, `myArray`, and the nested structure within the `myArray`.
It's possible to use full fidelity Schema for SQL (Core) API accounts, instead o
* This option is only valid for accounts that **don't** have Synapse Link already enabled. * It isn't possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
- * Currently Azure Cosmos DB API for MongoDB accounts aren't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
+ * Currently Azure Cosmos DB API for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
* Currently this change can't be made through the Azure portal. All database accounts that have Synapse Link enabled by the Azure portal will have the default schema representation type, well-defined schema. The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell.
The schema representation type decision must be made at the same time that Synap
The well-defined schema representation creates a simple tabular representation of the schema-agnostic data in the transactional store. The well-defined schema representation has the following considerations: * The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are:
- * From null to any other data type.The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store.
+ * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store.
* From `float` to `integer`. All documents will be represented in analytical store. * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.
WITH (num varchar(100)) AS [IntToFloat]
* `{"id": "2", "code": "123"}` > [!NOTE]
- > The condition above doesn't apply for null properties. For example, `{"a":123} and {"a":null}` is still well-defined.
+ > The condition above doesn't apply for `NULL` properties. For example, `{"a":123} and {"a":NULL}` is still well-defined.
> [!NOTE] > The condition above doesn't change if you update `"code"` of document `"1"` to a string in your transactional store. In analytical store, `"code"` will be kept as `integer` since currently we don't support schema reset.
WITH (num varchar(100)) AS [IntToFloat]
* Spark pools in Azure Synapse will represent these values as `undefined`. * SQL serverless pools in Azure Synapse will represent these values as `NULL`.
-* Expect different behavior in regard to explicit `null` values:
+* Expect different behavior in regard to explicit `NULL` values:
* Spark pools in Azure Synapse will read these values as `0` (zero). And it will change to `undefined` as soon as the column has a non-null value. * SQL serverless pools in Azure Synapse will read these values as `NULL`.
Here's a map of all the property data types and their suffix representations in
|Boolean | ".bool" |True| |Int32 | ".int32" |123| |Int64 | ".int64" |255486129307|
-|Null | ".null" | null|
+|NULL | ".NULL" | NULL|
|String| ".string" | "ABC"| |Timestamp | ".timestamp" | Timestamp(0, 0)| |DateTime |".date" | ISODate("2020-08-21T07:43:07.375Z")| |ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")| |Document |".object" | {"a": "a"}|
-* Expect different behavior in regard to explicit `null` values:
+* Expect different behavior in regard to explicit `NULL` values:
* Spark pools in Azure Synapse will read these values as `0` (zero). * SQL serverless pools in Azure Synapse will read these values as `NULL`.
Here's a map of all the property data types and their suffix representations in
* Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
+## <a id="analytical-ttl"></a> Analytical Time-to-Live (TTL)
+
+Analytical TTL (ATTL) indicates how long data should be retained in your analytical store, for a container.
+
+Analytical store is enabled when ATTL is set with a value other than `NULL` or `0`. When enabled, inserts, updates, and deletes to operational data are automatically synced from transactional store to analytical store, irrespective of the transactional TTL (TTTL) configuration. The retention of this transactional data in analytical store can be controlled at container level by the `AnalyticalStoreTimeToLiveInSeconds` property.
+
+The possible ATTL configurations are:
+
+* If the value is set to `0` or set to `NULL`: the analytical store is disabled and no data is replicated from transactional store to analytical store
+
+* If the value is set to `-1`: the analytical store retains all historical data, irrespective of the retention of the data in the transactional store. This setting indicates that the analytical store has infinite retention of your operational data
+
+* If the value is set to any positive integer `n` number: items will expire from the analytical store `n` seconds after their last modified time in the transactional store. This setting can be leveraged if you want to retain your operational data for a limited period of time in the analytical store, irrespective of the retention of the data in the transactional store
+
+Some points to consider:
+
+* After the analytical store is enabled with an ATTL value, it can be updated to a different valid value later.
+* While TTTL can be set at the container or item level, ATTL can only be set at the container level currently.
+* You can achieve longer retention of your operational data in the analytical store by setting ATTL >= TTTL at the container level.
+* The analytical store can be made to mirror the transactional store by setting ATTL = TTTL.
+* If you have ATTL bigger than TTTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
+
+How to enable analytical store on a container:
+
+* From the Azure portal, the ATTL option, when turned on, is set to the default value of -1. You can change this value to 'n' seconds, by navigating to container settings under Data Explorer.
+
+* From the Azure Management SDK, Azure Cosmos DB SDKs, PowerShell, or Azure CLI, the ATTL option can be enabled by setting it to either -1 or 'n' seconds.
+
+To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md#create-analytical-ttl).
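+
+As a sketch (the account, database, and container names are placeholders, and Synapse Link is assumed to be already enabled on the account), enabling analytical store with infinite retention from the Azure CLI could look like this:
+
+```azurecli
+# Create a container with analytical store enabled and infinite analytical retention (ATTL = -1).
+az cosmosdb sql container create \
+    --account-name <cosmos-account-name> \
+    --resource-group <resource-group-name> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --partition-key-path "/id" \
+    --analytical-storage-ttl -1
+```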
+ ## Cost-effective archival of historical data Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios. Thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. With analytical store optimized in terms of storage cost compared to the transactional store, allows you to retain much longer horizons of operational data for historical analysis.
With periodic backup mode and existing containers, you can:
### Partially rebuild analytical store when TTTL < ATTL
-The data that was only in analytical store isn't restored, but it will be kept available for queries as long as you keep the original container. Analytical store is only deleted when you delete the container. You analytical queries in Azure Synapse Analytics can read data from both original and restored container's analytical stores. Example:
+The data that was only in analytical store isn't restored, but it will be kept available for queries as long as you keep the original container. Analytical store is only deleted when you delete the container. Your analytical queries in Azure Synapse Analytics can read data from both original and restored container's analytical stores. Example:
* Container `OnlineOrders` has TTTL set to one month and ATTL set for one year. * When you restore it to `OnlineOrdersNew` and turn on analytical store to rebuild it, there will be only one month of data in both transactional and analytical store.
In order to get a high-level cost estimate to enable analytical store on an Azur
> Analytical store read operations estimates aren't included in the Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
-## <a id="analytical-ttl"></a> Analytical Time-to-Live (TTL)
-
-Analytical TTL indicates how long data should be retained in your analytical store, for a container.
-
-If analytical store is enabled, inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store, irrespective of the transactional TTL configuration. The retention of this operational data in the analytical store can be controlled by the Analytical TTL value at the container level, as specified below:
-
-Analytical TTL on a container is set using the `AnalyticalStoreTimeToLiveInSeconds` property:
-
-* If the value is set to `0` or set to `null`: the analytical store is disabled and no data is replicated from transactional store to analytical store
-
-* If the value is set to `-1`: the analytical store retains all historical data, irrespective of the retention of the data in the transactional store. This setting indicates that the analytical store has infinite retention of your operational data
-
-* If the value is set to any positive integer `n` number: items will expire from the analytical store `n` seconds after their last modified time in the transactional store. This setting can be leveraged if you want to retain your operational data for a limited period of time in the analytical store, irrespective of the retention of the data in the transactional store
-
-Some points to consider:
-
-* After the analytical store is enabled with an analytical TTL value, it can be updated to a different valid value later.
-* While transactional TTL can be set at the container or item level, analytical TTL can only be set at the container level currently.
-* You can achieve longer retention of your operational data in the analytical store by setting analytical TTL >= transactional TTL at the container level.
-* The analytical store can be made to mirror the transactional store by setting analytical TTL = transactional TTL.
-* If you have analytical TTL bigger than transactional TTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
-
-How to enable analytical store on a container:
-
-* From the Azure portal, the analytical TTL option, when turned on, is set to the default value of -1. You can change this value to 'n' seconds, by navigating to container settings under Data Explorer.
-
-* From the Azure Management SDK, Azure Cosmos DB SDKs, PowerShell, or CLI, the analytical TTL option can be enabled by setting it to either -1 or 'n' seconds.
-
-To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md#create-analytical-ttl).
## Next steps
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
Last updated 09/21/2020
# Azure Cosmos DB Emulator - Release notes and download information [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-This article shows the Azure Cosmos DB Emulator release notes with a list of feature updates that were made in each release. It also lists the latest version of the emulator to download and use.
+This article lists the released versions of the Azure Cosmos DB Emulator and details the updates that were made in each. Only the latest version is made available to download and use, and previous versions aren't actively supported by the Azure Cosmos DB Emulator developers.
## Download
This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
-### 2.14.4 (25 October 2021)
+### 2.14.5 (January 18, 2022)
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. One other important update with this release is to reduce the number of services executed in the background and start them as needed.
-### 2.14.3 (8 September 2021)
+### 2.14.4 (October 25, 2021)
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB.
-### 2.14.2 (12 August 2021)
+### 2.14.3 (September 8, 2021)
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. It also addresses a couple of issues with the telemetry data that is collected and resets the base image for the Linux Cosmos emulator Docker image.
-### 2.14.1 (18 June 2021)
+### 2.14.2 (August 12, 2021)
+
+ - This release updates the local Data Explorer content to latest Microsoft Azure version and resets the base for the Linux Cosmos emulator Docker image.
+
+### 2.14.1 (June 18, 2021)
- This release improves the start-up time for the emulator while reducing the footprint of its data on the disk. This new optimization is activated by "/EnablePreview" argument.
-### 2.14.0 (15 June 2021)
+### 2.14.0 (June 15, 2021)
+ - This release updates the local Data Explorer content to latest Microsoft Azure version. It also fixes an issue when importing multiple document items by using the JSON file upload feature.
-### 2.11.13 (21 April 2021)
+### 2.11.13 (April 21, 2021)
+ - This release updates the local Data Explorer content to latest Microsoft Azure version and adds a new MongoDB endpoint configuration, "4.0".
-### 2.11.11 (22 February 2021)
+### 2.11.11 (February 22, 2021)
+ - This release updates the local Data Explorer content to latest Microsoft Azure version.
-### 2.11.10 (5 January 2021)
+### 2.11.10 (January 5, 2021)
+ - This release updates the local Data Explorer content to latest Microsoft Azure version. It also adds a new public option, "/ExportPemCert", which allows the emulator user to directly export the emulator's public certificate as a .PEM file.
-### 2.11.9 (3 December 2020)
+### 2.11.9 (December 3, 2020)
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. It also addresses a couple of issues with the Azure Cosmos DB Emulator functionality:
* Fix for an issue where large document payload requests fail when using Direct mode and Java client applications. * Fix for a connectivity issue with MongoDB endpoint version 3.6 when targeted by .NET based applications.
-### 2.11.8 (6 November 2020)
+### 2.11.8 (November 6, 2020)
+ - This release includes an update for the Azure Cosmos DB Emulator Data Explorer and fixes an issue where "TLS 1.3" clients try to open the Data Explorer.
-### 2.11.6 (6 October 2020)
+### 2.11.6 (October 6, 2020)
+ - This release addresses a concurrency-related issue when multiple containers might be created at the same time. This can leave the emulator in a corrupted state and future API requests to the emulator's endpoint will fail with "service unavailable" errors. The workaround is to stop the emulator, reset the emulator's local data, and restart it.
-### 2.11.5 (23 August 2020)
+### 2.11.5 (August 23, 2020)
-This release adds two new Cosmos emulator startup options:
+This release adds two new Azure Cosmos DB Emulator startup options:
-* "/EnablePreview" - it enables preview features for the emulator. The preview features that are still under development and they can be accessed via CI and sample writing.
+* "/EnablePreview" - it enables preview features for the Azure Cosmos DB Emulator. The preview features that are still under development and they can be accessed via CI and sample writing.
* "/EnableAadAuthentication" - it enables the emulator to accept custom Azure Active Directory tokens as an alternative to the Azure Cosmos primary keys. This feature is still under development; specific role assignments and other permission-related settings aren't currently supported.
-### 2.11.2 (07 July 2020)
+### 2.11.2 (July 7, 2020)
+
+- This release changes how the ETL traces required for troubleshooting the Azure Cosmos DB Emulator are collected. WPR (Windows Performance Recorder) is now the default tool for capturing ETL-based traces, while the old LOGMAN-based capturing has been deprecated. With the latest Windows security update, LOGMAN stopped working as expected when executed through the Azure Cosmos DB Emulator.
-- This release changes how ETL traces required when troubleshooting the Cosmos emulator are collected. WPR (Windows Performance Runtime tools) is now the default tools for capturing ETL-based traces while old LOGMAN based capturing has been deprecated. This change is required in part because latest Windows security updates had an unexpected impact on how LOGMAN works when executed through the Cosmos emulator.
+### 2.11.1 (June 10, 2020)
-### 2.11.1 (10 June 2020)
+This release fixes a couple of bugs related to the Azure Cosmos DB Emulator Data Explorer:
-- This release fixes couple bugs related to emulator Data Explorer. In certain cases when using the emulator Data Explorer through a web browser, it fails to connect to the Cosmos emulator endpoint and all the related actions such as creating a database or a container will result in error. The second issue fixed is related to creating an item from a JSON file using Data Explorer upload action.
+* Data Explorer fails to connect to the Azure Cosmos DB Emulator endpoint when hosted in some Web browser versions. Emulator users might not be able to create a database or a container through the Web page.
+* Addresses an issue that prevented emulator users from creating an item from a JSON file by using the Data Explorer upload action.
### 2.11.0 -- This release introduces support for autoscale provisioned throughput. These new features include the ability to set a custom maximum provisioned throughput level in request units (RU/s), enable autoscale on existing databases and containers, and programmatic support through Azure Cosmos DB SDKs.
+- This release introduces support for autoscale provisioned throughput. The added features include the option to set a custom maximum provisioned throughput level in request units (RU/s), enable autoscale on existing databases and containers, and API support through Azure Cosmos DB SDK.
- Fix an issue when querying through a large number of documents (over 1 GB) where the emulator fails with internal error status code 500. ### 2.9.2
This release adds two new Cosmos emulator startup options:
### 2.7.2 -- This release adds MongoDB version 3.6 server support to the Cosmos Emulator. To start a MongoDB endpoint that target version 3.6 of the service, start the emulator from an Administrator command line with "/EnableMongoDBEndpoint=3.6" option.
+- This release adds MongoDB version 3.6 server support to the Azure Cosmos DB Emulator. To start a MongoDB endpoint that targets version 3.6 of the service, start the emulator from an Administrator command line with the "/EnableMongoDBEndpoint=3.6" option.
### 2.7.0 -- This release fixes a regression, which prevented users from executing queries against the SQL API account from the emulator when using .NET core or x86 .NET based clients.
+- This release fixes a regression in the Azure Cosmos DB Emulator that prevented users from executing SQL-related queries. This issue impacts emulator users who configured the SQL API endpoint and are using .NET Core or x86 .NET-based client applications.
### 2.4.6
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 01/14/2022 Last updated : 01/28/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
These properties are supported for an Azure SQL Database linked service:
| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication | | connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If not specified, the default Azure integration runtime is used. | No |
-> [!NOTE]
-> Azure SQL Database [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is not supported in data flow.
- For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively: - [SQL authentication](#sql-authentication)
To learn details about the properties, check [GetMetadata activity](control-flow
## Using Always Encrypted
-When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine), follow below steps:
+When you copy data from/to Azure SQL Database with [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine), follow the steps below:
1. Store the [Column Master Key (CMK)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true) in an [Azure Key Vault](../key-vault/general/overview.md). Learn more on [how to configure Always Encrypted by using Azure Key Vault](../azure-sql/database/always-encrypted-azure-key-vault-configure.md?tabs=azure-powershell)
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
3. Create a linked service to connect to your SQL database and enable the 'Always Encrypted' function by using either managed identity or service principal. >[!NOTE]
->SQL Server [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) supports below scenarios:
+> Azure SQL Database [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) supports the following scenarios:
>1. Either the source or sink data store is using managed identity or service principal as the key provider authentication type. >2. Both source and sink data stores are using managed identity as the key provider authentication type. >3. Both source and sink data stores are using the same service principal as the key provider authentication type.
+>[!NOTE]
+> Currently, Azure SQL Database [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows.
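+
+For reference, a linked service definition that enables Always Encrypted for step 3 above, using managed identity as the key provider, could look roughly like the following sketch. The `alwaysEncryptedSettings` and `alwaysEncryptedAkvAuthType` property names are taken as assumptions from the connector reference; verify them against your service version before relying on them.
+
+```json
+{
+    "name": "AzureSqlDbAlwaysEncryptedLinkedService",
+    "properties": {
+        "type": "AzureSqlDatabase",
+        "typeProperties": {
+            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;",
+            "alwaysEncryptedSettings": {
+                "alwaysEncryptedAkvAuthType": "ManagedIdentity"
+            }
+        },
+        "connectVia": {
+            "referenceName": "<integration runtime name>",
+            "type": "IntegrationRuntimeReference"
+        }
+    }
+}
+```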
+ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 01/14/2022 Last updated : 01/28/2022 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
The following properties are supported for the SQL Managed Instance linked servi
| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication | | connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use a self-hosted integration runtime or an Azure integration runtime if your managed instance has a public endpoint and allows the service to access it. If not specified, the default Azure integration runtime is used. |Yes |
-> [!NOTE]
-> SQL Managed Instance [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is not supported in data flow.
- For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively: - [SQL authentication](#sql-authentication)
When data is copied to and from SQL Managed Instance using copy activity, the fo
## Using Always Encrypted
-When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine), follow below steps:
+When you copy data from/to SQL Managed Instance with [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine), follow the steps below:
1. Store the [Column Master Key (CMK)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true) in an [Azure Key Vault](../key-vault/general/overview.md). Learn more on [how to configure Always Encrypted by using Azure Key Vault](../azure-sql/database/always-encrypted-azure-key-vault-configure.md?tabs=azure-powershell)
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
3. Create a linked service to connect to your SQL database and enable the 'Always Encrypted' function by using either managed identity or service principal. >[!NOTE]
->SQL Server [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) supports below scenarios:
+>SQL Managed Instance [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) supports the following scenarios:
>1. Either the source or sink data store is using managed identity or service principal as the key provider authentication type. >2. Both source and sink data stores are using managed identity as the key provider authentication type. >3. Both source and sink data stores are using the same service principal as the key provider authentication type.
+>[!NOTE]
+>Currently, SQL Managed Instance [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows.
+ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Previously updated : 01/14/2022 Last updated : 01/28/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
The following properties are supported for the SQL Server linked service:
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure integration runtime is used. |No | > [!NOTE]
-> - SQL Server [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is not supported in data flow.
-> - Windows authentication is not supported in data flow.
+> Windows authentication is not supported in data flow.
>[!TIP] >If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is XXX and has been reached," add `Pooling=false` to your connection string and try again.
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
>2. Both source and sink data stores are using managed identity as the key provider authentication type. >3. Both source and sink data stores are using the same service principal as the key provider authentication type.
+>[!NOTE]
+>Currently, SQL Server [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows.
+ ## Troubleshoot connection issues 1. Configure your SQL Server instance to accept remote connections. Start **SQL Server Management Studio**, right-click **server**, and select **Properties**. Select **Connections** from the list, and select the **Allow remote connections to this server** check box.
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-square.md
Previously updated : 09/09/2021 Last updated : 01/27/2022 # Copy data from Square using Azure Data Factory or Synapse Analytics (Preview)
This article outlines how to use the Copy Activity in an Azure Data Factory or S
> [!IMPORTANT] > This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+> [!Note]
+> Currently, this connector doesn't support sandbox accounts.
+ ## Supported capabilities This Square connector is supported for the following activities:
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
+
+ Title: Configure export settings in Azure API for FHIR
+description: This article describes how to configure export settings in Azure API for FHIR
++++ Last updated : 01/28/2022+++
+# Configure export settings in Azure API for FHIR and set up a storage account
+
+Azure API for FHIR supports the $export command, which allows you to export data out of an Azure API for FHIR account to a storage account.
+
+There are three steps involved in configuring export in Azure API for FHIR:
+
+1. Enable Managed Identity on Azure API for FHIR.
+2. Create an Azure storage account (if you don't already have one) and grant Azure API for FHIR permissions to the storage account.
+3. Select the storage account in Azure API for FHIR as the export storage account.
+
+## Enabling Managed Identity on Azure API for FHIR
+
+The first step in configuring Azure API for FHIR for export is to enable a system-assigned managed identity on the service. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+Browse to the Azure API for FHIR and select **Identity**. Changing the status to **On** will enable managed identity in Azure API for FHIR.
+
+[ ![Screenshot of the enable managed identity page.](media/export-data/fhir-mi-enabled.png) ](media/export-data/fhir-mi-enabled.png#lightbox)
+
+In the next step, you'll create a storage account and assign permissions to the service.
+
+## Adding permission to storage account
+
+The next step in configuring export is to assign Azure API for FHIR permission to write to the storage account.
+
+After you've created a storage account, browse to the **Access Control (IAM)** in the storage account, and then select **Add role assignment**.
+
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
+
+Here, you'll add the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role to the service name, and then select **Save**.
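+
+If you prefer to script this step, a minimal Azure CLI sketch is shown below. The resource IDs are placeholders, and the sketch assumes the managed identity from the previous step is already enabled.
+
+```bash
+# Look up the principal ID of the Azure API for FHIR system-assigned managed identity.
+principalId=$(az resource show --ids <azure-api-for-fhir-resource-id> --query identity.principalId -o tsv)
+
+# Grant that identity write access to the storage account used for $export.
+az role assignment create \
+  --assignee-object-id $principalId \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Contributor" \
+  --scope <storage-account-resource-id>
+```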
+
+[ ![Screenshot of add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
+
+Now you're ready to select the storage account in Azure API for FHIR as a default storage account for $export.
+
+## Selecting the storage account for $export
+
+The final step is to select the Azure storage account that Azure API for FHIR will use to export the data. To do this, go to **Integration** in Azure API for FHIR and select the storage account.
+
+[ ![Screenshot of FHIR Export Storage.](media/export-data/fhir-export-storage.png) ](media/export-data/fhir-export-storage.png#lightbox)
+
+After you've completed this final step, you're now ready to export the data using the $export command.
+
+> [!Note]
+> Only storage accounts in the same subscription as the Azure API for FHIR can be registered as the destination for $export operations.
+
+## Next steps
+
+In this article, you learned the steps for configuring export settings that allow you to export data out of Azure API for FHIR to a storage account. For more information about configuring database settings, access control, enabling diagnostic logging, and using custom headers to add data to audit logs, see
+
+>[!div class="nextstepaction"]
+>[Additional Settings](azure-api-for-fhir-additional-settings.md)
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
+
+ Title: Copy data in Azure API for FHIR to Azure Synapse Analytics
+description: This article describes copying FHIR data into Synapse in Azure API for FHIR
++++ Last updated : 01/28/2022+++
+# Copy data from Azure API for FHIR to Azure Synapse Analytics
+
+In this article, you'll learn a couple of ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+
+Copying data from the FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+
+* **Load exported data to Synapse using T-SQL:** Use the `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
+* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
+
+## Load exported data to Synapse using T-SQL
+
+### `$export` for moving FHIR data into Azure Data Lake Gen 2 storage
++
+#### Configure your FHIR server to support `$export`
+
+Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export. If you use `$export`, the de-identification capability is available out of the box because it's already integrated into `$export`.
+
+To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add a role assignment, and (3) select your storage account for `$export`. More step-by-step instructions can be found [here](./configure-export-data.md).
+
+You can configure the server to export the data to any kind of Azure storage account, but we recommend exporting to ADL Gen 2 for best alignment with Synapse.
+
+#### Using `$export` command
+
+After configuring your FHIR server, you can follow the [documentation](./export-data.md#using-export-command) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
+
+```rest
+https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
+```
+
+You can also use the `_type` parameter in the `$export` call above to restrict the resources you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Condition` resources:
+
+```rest
+https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
+_type=Patient,MedicationRequest,Condition
+```
+
+For more information on the different parameters supported, check out our `$export` page section on the [query parameters](./export-data.md#settings-and-parameters).
+
+### Create a Synapse workspace
+
+Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service in the Azure portal. A step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+
+After creating a workspace, you can view your workspace on Synapse Studio by signing into your workspace on https://web.azuresynapse.net, or launching Synapse Studio in the Azure portal.
+
+#### Creating a linked service between Azure storage and Synapse
+
+To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
+
+1. In Synapse Studio, browse to the **Manage** tab and under **External connections**, select **Linked services**.
+2. Select **New** to add a new linked service.
+3. Select **Azure Data Lake Storage Gen2** from the list and select **Continue**.
+4. Enter your authentication credentials. Select **Create** when finished.
+
+Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
+
+### Decide between serverless and dedicated SQL pool
+
+Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
+
+#### Using serverless SQL pool
+
+Since it's serverless, there's no infrastructure to set up or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
+
+For example, the following query can be used to transform selected fields from `Patient.ndjson` into a tabular structure:
+
+```sql
+SELECT * FROM
+OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson',
+FORMAT = 'csv',
+FIELDTERMINATOR ='0x0b',
+FIELDQUOTE = '0x0b')
+WITH (doc NVARCHAR(MAX)) AS rows
+CROSS APPLY OPENJSON(doc)
+WITH (
+ ResourceId VARCHAR(64) '$.id',
+ Active VARCHAR(10) '$.active',
+ FullName VARCHAR(100) '$.name[0].text',
+ Gender VARCHAR(20) '$.gender',
+ ...
+)
+```
+
+In the query above, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
+
+You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as shown below:
+
+```sql
+-- Create External data source where the parquet file will be written
+CREATE EXTERNAL DATA SOURCE [MyDataSource] WITH (
+ LOCATION = 'https://{{youraccount}}.blob.core.windows.net/{{exttblcontainer}}'
+);
+GO
+
+-- Create External File Format
+CREATE EXTERNAL FILE FORMAT [ParquetFF] WITH (
+ FORMAT_TYPE = PARQUET,
+ DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
+);
+GO
+
+CREATE EXTERNAL TABLE [dbo].[Patient] WITH (
+ LOCATION = 'PatientParquet/',
+ DATA_SOURCE = [MyDataSource],
+ FILE_FORMAT = [ParquetFF]
+) AS
+SELECT * FROM
+OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson'
+-- Use rest of the SQL statement from the previous example --
+```
+
+#### Using dedicated SQL pool
+
+Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
+
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+
+```sql
+-- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
+CREATE TABLE StagingPatient (
+Resource NVARCHAR(MAX)
+) WITH (HEAP)
+COPY INTO StagingPatient
+FROM 'https://{{yourblobaccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson'
+WITH (
+FILE_TYPE = 'CSV',
+ROWTERMINATOR='0x0a',
+FIELDQUOTE = '',
+FIELDTERMINATOR = '0x00'
+)
+GO
+```
+
+Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data by using the `OPENJSON` function and storing the results in tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+
+```sql
+SELECT RES.*
+INTO Patient
+FROM StagingPatient
+CROSS APPLY OPENJSON(Resource)
+WITH (
+ ResourceId VARCHAR(64) '$.id',
+ FullName VARCHAR(100) '$.name[0].text',
+ FamilyName VARCHAR(50) '$.name[0].family',
+ GivenName VARCHAR(50) '$.name[0].given[0]',
+ Gender VARCHAR(20) '$.gender',
+ DOB DATETIME2 '$.birthDate',
+ MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
+ LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
+) AS RES
+GO
+
+```
+
+## Use FHIR Analytics Pipelines OSS tools
++
+> [!Note]
+> [FHIR Analytics pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+### ADF pipeline for moving FHIR data into CDM folder
+
+A Common Data Model (CDM) folder is a folder in a data lake that conforms to well-defined and standardized metadata structures and self-describing data. These folders facilitate metadata interoperability between data producers and data consumers. Before you copy FHIR data into a CDM folder, you can transform your data into a table configuration.
+
+### Generating table configuration
+
+Clone the repo to get all the scripts and source code. Use `npm install` to install the dependencies. Run the following command from the `Configuration-Generator` folder to generate a table configuration folder using YAML format instructions:
+
+```bash
+Configuration-Generator> node .\generate_from_yaml.js -r {resource configuration file} -p {properties group file} -o {output folder}
+```
+
+You may use the sample `YAML` files, `resourcesConfig.yml` and `propertiesGroupConfig.yml` provided in the repo.
+
+### Generating ADF pipeline
+
+Now you can use the content of the generated table configuration and a few other configurations to generate an ADF pipeline. This ADF pipeline, when triggered, exports the data from the FHIR server using `$export` API and writes to a CDM folder along with associated CDM metadata.
+
+1. Create an Azure Active Directory (Azure AD) application and service principal. The ADF pipeline uses an Azure batch service to do the transformation, and needs an Azure AD application for the batch service. Follow [Azure AD documentation](../../active-directory/develop/howto-create-service-principal-portal.md).
+2. Grant access for export storage location to the service principal. In the `Access Control` of the export storage, grant `Storage Blob Data Contributor` role to the Azure AD application.
+3. Deploy the egress pipeline. Use the template `fhirServiceToCdm.json` for a custom deployment on Azure. This step will create the following Azure resources:
+ - An ADF pipeline with the name `{pipelinename}-df`.
+ - A key vault with the name `{pipelinename}-kv` to store the client secret.
+ - A batch account with the name `{pipelinename}batch` to run the transformation.
+ - A storage account with the name `{pipelinename}storage`.
+4. Grant access to the Azure Data Factory. In the access control panel of your FHIR service, grant `FHIR data exporter` and `FHIR data reader` roles to the data factory, `{pipelinename}-df`.
+5. Upload the content of the table configuration folder to the configuration container.
+6. Go to `{pipelinename}-df`, and trigger the pipeline. You should see the exported data in the CDM folder on the storage account `{pipelinename}storage`. You should see one folder for each table having a CSV file.
+
+### From CDM folder to Synapse
+
+Once you have the data exported in a CDM format and stored in your ADL Gen 2 storage, you can now copy your data in the CDM folder to Synapse.
+
+You can create a CDM to Synapse pipeline using a configuration file, which would look like the following example:
+
+```json
+{
+ "ResourceGroup": "",
+ "TemplateFilePath": "../Templates/cdmToSynapse.json",
+ "TemplateParameters": {
+ "DataFactoryName": "",
+ "SynapseWorkspace": "",
+ "DedicatedSqlPool": "",
+ "AdlsAccountForCdm": "",
+ "CdmRootLocation": "cdm",
+ "StagingContainer": "adfstaging",
+ "Entities": ["LocalPatient", "LocalPatientAddress"]
+ }
+}
+```
+
+Run the following script with the configuration file above:
+
+```bash
+.\DeployCdmToSynapsePipeline.ps1 -Config: config.json
+```
+
+Add the ADF Managed Identity as a SQL user in the SQL database. Below is a sample SQL script to create a user and assign a role:
+
+```sql
+CREATE USER [datafactory-name] FROM EXTERNAL PROVIDER
+GO
+EXEC sp_addrolemember db_owner, [datafactory-name]
+GO
+```
+
+## Next steps
+
+In this article, you learned two different ways to copy your FHIR data into Synapse: (1) using `$export` to copy data into ADL Gen 2 blob storage then loading the data into Synapse SQL pools, and (2) using ADF pipeline for moving FHIR data into CDM folder then into Synapse.
+
+Next, you can learn about anonymization of your FHIR data while copying data to Synapse to ensure your healthcare information is protected:
+
+>[!div class="nextstepaction"]
+>[Exporting de-identified data](de-identified-export.md)
+++++++
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
+
+ Title: Exporting de-identified data (preview) for Azure API for FHIR
+description: This article describes how to set up and use de-identified export for Azure API for FHIR
++++ Last updated : 01/28/2022++
+# Exporting de-identified data (preview) for Azure API for FHIR
+
+> [!Note]
+> Results when using the de-identified export will vary based on factors such as data inputted, and functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
+
+The $export command can also be used to export de-identified data from the FHIR server. It uses the anonymization engine from [Tools for Health Data Anonymization](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md), and takes anonymization config details in query parameters. You can create your own anonymization config file or use the [sample config file](https://github.com/microsoft/FHIR-Tools-for-Anonymization#sample-configuration-file-for-hipaa-safe-harbor-method) for HIPAA Safe Harbor method as a starting point.
+
+ `https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>`
+
+> [!Note]
+> Currently, Azure API for FHIR only supports de-identified export at the system level ($export).
+
+|Query parameter | Example |Optionality| Description|
+|---|---|---|---|
+| _\_anonymizationConfig_ |DemoConfig.json|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format). This file should be kept inside a container named **anonymization** within the same Azure storage account that is configured as the export location. |
+| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure storage explorer from the blob property|
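+
+For example, using the sample values from the table above and a hypothetical container named `deidentifiedexport`, a de-identified export request would look like the following sketch:
+
+```rest
+https://<<FHIR service base URL>>/$export?_container=deidentifiedexport&_anonymizationConfig=DemoConfig.json
+```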
+
+> [!IMPORTANT]
+> Both raw export and de-identified export write to the same Azure storage account specified as part of the export configuration. It is recommended that you use different containers corresponding to different de-identified configs and manage user access at the container level.
+
+## Next steps
+
+In this article, you've learned how to set up and use de-identified export. Next, to learn how to export FHIR data using $export for Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[Export data](export-data.md)
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/export-data.md
+
+ Title: Executing the export by invoking $export command on Azure API for FHIR
+description: This article describes how to export FHIR data using $export for Azure API for FHIR
++++ Last updated : 01/26/2022+++
+# How to export FHIR data in Azure API for FHIR
+
+The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
+
+Before using $export, you'll want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
+
+## Using $export command
+
+After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke the $export command in the FHIR server, read the documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html).
++
+**Jobs stuck in a bad state**
+
+In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate whether your export is successful is to check your storage account to see if the corresponding container and `ndjson` files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try re-queuing the job again. The default run time for an export in a bad state is 10 minutes before it will stop and move to a new job or retry the export.
+
+The Azure API for FHIR supports $export at the following levels:
+* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export`
+* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export`
+* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export`
+
+When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large, we create a new file after the size of a single exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (that is, Patient-1.ndjson, Patient-2.ndjson).
++
+> [!Note]
+> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
+
+In addition, you can check the export status through the URL returned by the `Content-Location` header when the job is queued, and you can also cancel the export job.
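+
+As a sketch of that pattern, you poll the returned status URL and issue a `DELETE` against the same URL to cancel the job; the placeholder below stands for whatever URL your $export response returned:
+
+```rest
+GET <<status URL returned in the Content-Location header of the $export response>>
+DELETE <<status URL returned in the Content-Location header of the $export response>>
+```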
+
+### Exporting FHIR data to ADLS Gen2
+
+Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:
+
+- Users can't take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md) yet; there isn't a way to target the export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
+- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
++
+## Settings and parameters
+
+### Headers
+There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html#headers).
+* **Accept** - application/fhir+json
+* **Prefer** - respond-async
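+
+Put together, a minimal kick-off request with the required headers looks like the following sketch:
+
+```rest
+GET https://<<FHIR service base URL>>/$export
+Accept: application/fhir+json
+Prefer: respond-async
+```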
+
+### Query parameters
+The Azure API for FHIR supports the following query parameters. All of these parameters are optional:
+
+|Query parameter | Defined by the FHIR Spec? | Description|
+|---|---|---|
+| \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or just ndjson. All export jobs will return `ndjson` and the passed value has no effect on code behavior. |
+| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
+| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
+| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
+| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container. |
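+
+For example, the following sketch combines several of these parameters; the container name and timestamp are placeholders:
+
+```rest
+GET https://<<FHIR service base URL>>/$export?_container=fhirexport&_type=Patient,Observation&_since=2021-01-01T00:00:00Z
+```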
+
+> [!Note]
+> Only storage accounts in the same subscription as the Azure API for FHIR can be registered as the destination for $export operations.
+
+## Secure Export to Azure Storage
+
+Azure API for FHIR supports a secure export operation. Choose one of the two options below:
+
+* Allowing Azure API for FHIR as a Microsoft Trusted Service to access the Azure storage account.
+
+* Allowing specific IP addresses associated with Azure API for FHIR to access the Azure storage account.
+This option provides two different configurations depending on whether the storage account is in the same location as, or is in a different location from that of the Azure API for FHIR.
+
+### Allowing Azure API for FHIR as a Microsoft Trusted Service
+
+Select a storage account from the Azure portal, and then select the **Networking** blade. Select **Selected networks** under the **Firewalls and virtual networks** tab.
+
+> [!IMPORTANT]
+> Ensure that you've granted access permission to the storage account for Azure API for FHIR using its managed identity. For more information, see [Configure export setting and set up the storage account](../../healthcare-apis/fhir/configure-export-data.md).
++
+Under the **Exceptions** section, select the box **Allow trusted Microsoft services to access this storage account** and save the setting.
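+
+If you'd rather script this configuration, a minimal Azure CLI sketch (account and resource group names are placeholders) that restricts the storage account to selected networks while allowing trusted Microsoft services is shown below:
+
+```bash
+# Deny public network access by default, but let trusted Microsoft services (such as Azure API for FHIR) through.
+az storage account update \
+  --name <storage-account-name> \
+  --resource-group <resource-group-name> \
+  --default-action Deny \
+  --bypass AzureServices
+```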
++
+You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and isn't publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account for a short period of time.
+
+> [!IMPORTANT]
+> The user interface will be updated later to allow you to select the Resource type for Azure API for FHIR and a specific service instance.
+
+### Allowing specific IP addresses for the Azure storage account in a different region
+
+Select **Networking** of the Azure storage account from the
+portal.
+
+Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to
+allow access from the internet or your on-premises networks. You can
+find the IP address in the table below for the Azure region where the
+Azure API for FHIR is provisioned.
+
+|**Azure Region** |**Public IP Address** |
+|:-|:-|
+| Australia East | 20.53.44.80 |
+| Canada Central | 20.48.192.84 |
+| Central US | 52.182.208.31 |
+| East US | 20.62.128.148 |
+| East US 2 | 20.49.102.228 |
+| East US 2 EUAP | 20.39.26.254 |
+| Germany North | 51.116.51.33 |
+| Germany West Central | 51.116.146.216 |
+| Japan East | 20.191.160.26 |
+| Korea Central | 20.41.69.51 |
+| North Central US | 20.49.114.188 |
+| North Europe | 52.146.131.52 |
+| South Africa North | 102.133.220.197 |
+| South Central US | 13.73.254.220 |
+| Southeast Asia | 23.98.108.42 |
+| Switzerland North | 51.107.60.95 |
+| UK South | 51.104.30.170 |
+| UK West | 51.137.164.94 |
+| West Central US | 52.150.156.44 |
+| West Europe | 20.61.98.66 |
+| West US 2 | 40.64.135.77 |
+
+> [!NOTE]
+> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](../../healthcare-apis/fhir/convert-data.md#host-and-use-templates)
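+
+Alternatively, the firewall rule can be added with the Azure CLI. The sketch below uses the East US address from the table and placeholder account names:
+
+```bash
+# Allow the Azure API for FHIR public IP for your region (East US shown here) through the storage firewall.
+az storage account network-rule add \
+  --account-name <storage-account-name> \
+  --resource-group <resource-group-name> \
+  --ip-address 20.62.128.148
+```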
+
+### Allowing specific IP addresses for the Azure storage account in the same region
+
+The configuration process is the same as above except a specific IP
+address range in CIDR format is used instead: 100.64.0.0/10. This IP address range, which includes 100.64.0.0 to 100.127.255.255, must be specified because the actual IP address used by the service varies for each $export request, but will always be within that range.
+
+> [!Note]
+> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
+
+## Next steps
+
+In this article, you've learned how to export FHIR resources using $export command. Next, to learn how to export de-identified data, see
+
+>[!div class="nextstepaction"]
+>[Export de-identified data](de-identified-export.md)
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Title: Move FHIR service to another subscription or resource group
-description: This article describes how to move Azure an API for FHIR service instance
+ Title: Move Azure API for FHIR instance to a different subscription or resource group
+description: This article describes how to move an Azure API for FHIR instance
Previously updated : 01/14/2022 Last updated : 01/28/2022
-# Move FHIR service to another subscription or resource group
+# Move Azure API for FHIR to a different subscription or resource group
-In this article, you'll learn how to move an Azure API for FHIR service instance to another subscription or another resource group.
+In this article, you'll learn how to move an Azure API for FHIR instance to a different subscription or another resource group.
-Moving to a different region is not supported, though the option may be available from the list. See more information on [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
+Moving to a different region isn't supported, though the option may be available from the list. For more information, see [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
> [!Note] > Moving an instance of Azure API for FHIR between subscriptions or resource groups is supported, as long as Private Link is NOT enabled and no IoMT connectors are created. ## Move to another subscription
-You can move an Azure API for FHIR service instance to another subscription from the portal. However, the runtime and data for the service are not moved. On average the **move** operation takes approximately 15 minutes or so, and the actual time may vary.
+You can move an Azure API for FHIR instance to another subscription from the portal. However, the runtime and data for the service aren't moved. On average the **move** operation takes approximately 15 minutes, and the actual time may vary.
The **move** operation takes a few simple steps.
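
The move can also be scripted. The following Azure CLI sketch (resource IDs and target names are placeholders) moves an Azure API for FHIR instance to another resource group or subscription:

```bash
# Move the Azure API for FHIR resource; omit --destination-subscription-id to stay in the same subscription.
az resource move \
  --destination-group <target-resource-group> \
  --destination-subscription-id <target-subscription-id> \
  --ids <azure-api-for-fhir-resource-id>
```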
The process works similarly to **Move to another subscription**, except the sele
## Next steps
-In this article, you've learned how to move the FHIR service. For more information about the FHIR service, see
+In this article, you've learned how to move the Azure API for FHIR instance. For more information about the supported FHIR features in Azure API for FHIR, see
>[!div class="nextstepaction"]
->[Supported FHIR Features](fhir-features-supported.md)
+>[Supported FHIR features](fhir-features-supported.md)
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
Previously updated : 01/06/2022 Last updated : 01/28/2022
In this article, you've learned about the tools and the steps for bulk-importing
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Moving data from Azure API for FHIR to Azure Synapse Analytics](move-to-synapse.md)
+>[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/copy-to-synapse.md
+
+ Title: Copy data from the FHIR service to Azure Synapse Analytics
+description: This article describes copying FHIR data into Synapse
++++ Last updated : 01/28/2022++
+# Copy data from the FHIR service to Azure Synapse Analytics
+
+In this article, you'll learn a couple of ways to copy data from the FHIR service to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+
+Copying data from the FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+
+* **Load exported data to Synapse using T-SQL:** Use the `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
+* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
+
+## Load exported data to Synapse using T-SQL
+
+### `$export` for moving FHIR data into Azure Data Lake Gen 2 storage
++
+#### Configure your FHIR server to support `$export`
+
+Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export. If you use `$export`, the de-identification capability is available out of the box because it's already integrated into `$export`.
+
+To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add a role assignment, and (3) select your storage account for `$export`. More step-by-step instructions can be found [here](./configure-export-data.md).
+
+You can configure the server to export the data to any kind of Azure storage account, but we recommend exporting to ADL Gen 2 for best alignment with Synapse.
+
+#### Using `$export` command
+
+After configuring your FHIR server, you can follow the [documentation](./export-data.md#using-export-command) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
+
+```rest
+https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
+```
+
+You can also use the `_type` parameter in the `$export` call above to restrict the resources you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Condition` resources:
+
+```rest
+https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
+_type=Patient,MedicationRequest,Condition
+```
+
+For more information on the different parameters supported, check out our `$export` page section on the [query parameters](./export-data.md#settings-and-parameters).
+
+### Create a Synapse workspace
+
+Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service in the Azure portal. A step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+
+After creating a workspace, you can view your workspace on Synapse Studio by signing into your workspace on https://web.azuresynapse.net, or launching Synapse Studio in the Azure portal.
+
+#### Creating a linked service between Azure storage and Synapse
+
+To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
+
+1. On Synapse Studio, navigate to the **Manage** tab, and under **External connections**, select **Linked services**.
+2. Select **New** to add a new linked service.
+3. Select **Azure Data Lake Storage Gen2** from the list and select **Continue**.
+4. Enter your authentication credentials. Select **Create** when finished.
+
+Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
+
+### Decide between serverless and dedicated SQL pool
+
+Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.
+
+#### Using serverless SQL pool
+
+Since it's serverless, there's no infrastructure to set up or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
+
+For example, the following query can be used to transform selected fields from `Patient.ndjson` into a tabular structure:
+
+```sql
+SELECT * FROM
+OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson',
+FORMAT = 'csv',
+FIELDTERMINATOR ='0x0b',
+FIELDQUOTE = '0x0b')
+WITH (doc NVARCHAR(MAX)) AS rows
+CROSS APPLY OPENJSON(doc)
+WITH (
+ ResourceId VARCHAR(64) '$.id',
+ Active VARCHAR(10) '$.active',
+ FullName VARCHAR(100) '$.name[0].text',
+ Gender VARCHAR(20) '$.gender',
+ ...
+)
+```
+
+In the query above, the `OPENROWSET` function accesses files in Azure Storage, and `OPENJSON` parses JSON text and returns the JSON input properties as rows and columns. Every time this query is executed, the serverless SQL pool reads the file from the blob storage, parses the JSON, and extracts the fields.
+
+You can also materialize the results in Parquet format in an [External Table](../../synapse-analytics/sql/develop-tables-external-tables.md) to get better query performance, as shown below:
+
+```sql
+-- Create External data source where the parquet file will be written
+CREATE EXTERNAL DATA SOURCE [MyDataSource] WITH (
+ LOCATION = 'https://{{youraccount}}.blob.core.windows.net/{{exttblcontainer}}'
+);
+GO
+
+-- Create External File Format
+CREATE EXTERNAL FILE FORMAT [ParquetFF] WITH (
+ FORMAT_TYPE = PARQUET,
+ DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
+);
+GO
+
+CREATE EXTERNAL TABLE [dbo].[Patient] WITH (
+ LOCATION = 'PatientParquet/',
+ DATA_SOURCE = [MyDataSource],
+ FILE_FORMAT = [ParquetFF]
+) AS
+SELECT * FROM
+OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson'
+-- Use rest of the SQL statement from the previous example --
+```
+
+#### Using dedicated SQL pool
+
+Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
+
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+
+```sql
+-- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
+CREATE TABLE StagingPatient (
+Resource NVARCHAR(MAX)
+) WITH (HEAP)
+COPY INTO StagingPatient
+FROM 'https://{{yourblobaccount}}.blob.core.windows.net/{{yourcontainer}}/Patient.ndjson'
+WITH (
+FILE_TYPE = 'CSV',
+ROWTERMINATOR='0x0a',
+FIELDQUOTE = '',
+FIELDTERMINATOR = '0x00'
+)
+GO
+```
+
+Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data by using the `OPENJSON` function and storing the results in tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+
+```sql
+SELECT RES.*
+INTO Patient
+FROM StagingPatient
+CROSS APPLY OPENJSON(Resource)
+WITH (
+ ResourceId VARCHAR(64) '$.id',
+ FullName VARCHAR(100) '$.name[0].text',
+ FamilyName VARCHAR(50) '$.name[0].family',
+ GivenName VARCHAR(50) '$.name[0].given[0]',
+ Gender VARCHAR(20) '$.gender',
+ DOB DATETIME2 '$.birthDate',
+ MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
+ LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
+) AS RES
+GO
+
+```
+
+## Use FHIR Analytics Pipelines OSS tools
++
+> [!Note]
+> [FHIR Analytics pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+### ADF pipeline for moving FHIR data into CDM folder
+
+A Common Data Model (CDM) folder is a folder in a data lake that conforms to well-defined and standardized metadata structures and self-describing data. These folders facilitate metadata interoperability between data producers and data consumers. Before you copy FHIR data into a CDM folder, you can transform your data into a table configuration.
+
+### Generating table configuration
+
+Clone the repo to get all the scripts and source code. Use `npm install` to install the dependencies. Run the following command from the `Configuration-Generator` folder to generate a table configuration folder using YAML format instructions:
+
+```bash
+Configuration-Generator> node .\generate_from_yaml.js -r {resource configuration file} -p {properties group file} -o {output folder}
+```
+
+You may use the sample `YAML` files, `resourcesConfig.yml` and `propertiesGroupConfig.yml` provided in the repo.
+
+### Generating ADF pipeline
+
+Now you can use the content of the generated table configuration and a few other configurations to generate an ADF pipeline. This ADF pipeline, when triggered, exports the data from the FHIR server using `$export` API and writes to a CDM folder along with associated CDM metadata.
+
+1. Create an Azure Active Directory (AD) application and service principal. The ADF pipeline uses an Azure batch service to do the transformation, and needs an Azure AD application for the batch service. Follow [Azure AD documentation](../../active-directory/develop/howto-create-service-principal-portal.md).
+2. Grant access for export storage location to the service principal. In the `Access Control` of the export storage, grant `Storage Blob Data Contributor` role to the Azure AD application.
+3. Deploy the egress pipeline. Use the template `fhirServiceToCdm.json` for a custom deployment on Azure. This step will create the following Azure resources:
+ - An ADF pipeline with the name `{pipelinename}-df`.
+ - A key vault with the name `{pipelinename}-kv` to store the client secret.
+ - A batch account with the name `{pipelinename}batch` to run the transformation.
+ - A storage account with the name `{pipelinename}storage`.
+4. Grant access to the Azure Data Factory. In the access control panel of your FHIR service, grant `FHIR data exporter` and `FHIR data reader` roles to the data factory, `{pipelinename}-df`.
+5. Upload the content of the table configuration folder to the configuration container.
+6. Go to `{pipelinename}-df`, and trigger the pipeline. You should see the exported data in the CDM folder on the storage account `{pipelinename}storage`. You should see one folder for each table having a CSV file.
+
+### From CDM folder to Synapse
+
+Once you have the data exported in a CDM format and stored in your ADL Gen 2 storage, you can now copy your data in the CDM folder to Synapse.
+
+You can create a CDM-to-Synapse pipeline using a configuration file that looks something like this:
+
+```json
+{
+ "ResourceGroup": "",
+ "TemplateFilePath": "../Templates/cdmToSynapse.json",
+ "TemplateParameters": {
+ "DataFactoryName": "",
+ "SynapseWorkspace": "",
+ "DedicatedSqlPool": "",
+ "AdlsAccountForCdm": "",
+ "CdmRootLocation": "cdm",
+ "StagingContainer": "adfstaging",
+ "Entities": ["LocalPatient", "LocalPatientAddress"]
+ }
+}
+```
+
+Run this script with the configuration file above:
+
+```powershell
+.\DeployCdmToSynapsePipeline.ps1 -Config: config.json
+```
+
+Add the ADF managed identity as a SQL user in the SQL database. Here's a sample SQL script to create a user and assign a role:
+
+```sql
+CREATE USER [datafactory-name] FROM EXTERNAL PROVIDER
+GO
+EXEC sp_addrolemember db_owner, [datafactory-name]
+GO
+```
+
+## Next steps
+
+In this article, you learned two different ways to copy your FHIR data into Synapse: (1) using `$export` to copy data into ADLS Gen2 blob storage and then loading the data into Synapse SQL pools, and (2) using an ADF pipeline to move FHIR data into a CDM folder and then into Synapse.
+
+Next, you can learn about anonymization of your FHIR data while copying data to Synapse to ensure your healthcare information is protected:
+
+>[!div class="nextstepaction"]
+>[Exporting de-identified data](./de-identified-export.md)
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/export-data.md
In this article, you've learned how to export FHIR resources using the $export c
>[Export de-identified data](de-identified-export.md) >[!div class="nextstepaction"]
->[Export to Synapse](move-to-synapse.md)
+>[Copy data from the FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
iot-edge How To Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-alerts.md
Aggregate values by the `_ResourceId` field and choose it as the *Resource ID co
## Viewing alerts
-See alerts generated for devices across multiple IoT Hubs in **Alerts** tab of the [IoT Edge fleet view workbook](how-to-explore-curated-visualizations.md#iot-edge-fleet-view-workbook).
+See alerts generated for devices across multiple IoT Hubs in the **Alerts** tab of the [IoT Edge fleet view workbook](how-to-explore-curated-visualizations.md#fleet-view-workbook).
Click the alert rule name to see more context about the alert. Clicking the device name link will show you the detailed metrics for the device around the time when the alert fired.
iot-edge How To Explore Curated Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-explore-curated-visualizations.md
description: Use Azure workbooks to visualize and explore IoT Edge built-in metr
Previously updated : 08/11/2021 Last updated : 01/29/2022
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-You can visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks. Curated monitoring workbooks for IoT Edge devices are provided in the form of public templates:
+You can visually explore metrics collected from IoT Edge devices using Azure Monitor workbooks. Curated monitoring workbooks for IoT Edge devices are provided in the form of public templates:
-* For devices connected to IoT Hub, from the **IoT Hub** blade in the Azure portal navigate to the **Workbooks** page in the **Monitoring** section.
-* For devices connected to IoT Central, from the **IoT Central** blade in the Azure portal navigate to the **Workbooks** page in the **Monitoring** section.
+* For devices connected to IoT Hub, from the **IoT Hub** page in the Azure portal, navigate to the **Workbooks** page in the **Monitoring** section.
+* For devices connected to IoT Central, from the **IoT Central** page in the Azure portal, navigate to the **Workbooks** page in the **Monitoring** section.
-The curated workbooks use [built-in metrics](how-to-access-built-in-metrics.md) from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules.
+Curated workbooks use [built-in metrics](how-to-access-built-in-metrics.md) from the IoT Edge runtime, which must first be [ingested](how-to-collect-and-transport-metrics.md) into a Log Analytics workspace. These views don't require any metrics instrumentation from the workload modules.
## Access curated workbooks
-Azure Monitor workbooks for IoT are a set of templates that you can use to start visualizing your metrics right away, or that you can customize to fit your solution.
+Azure Monitor workbooks for IoT are a set of templates that you can use to visualize your device metrics. They can be customized to fit your solution.
To access the curated workbooks, use the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub or IoT Central application.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT Hub or IoT Central application.
1. Select **Workbooks** from the **Monitoring** section of the menu. 1. Choose the workbook that you want to explore from the list of public templates:
- * **IoT Edge fleet view**: Monitor your fleet of devices, and drill into specific devices for a health snapshot.
- * **IoT Edge device details**: Visualize device details around messaging, modules, and host components on an IoT Edge device.
- * **IoT Edge health snapshot**: View a device's health based on six common performance metrics. To access the health snapshot workbook, start in the fleet view workbook and select the specific device that you want to view. The fleet view workbook passes some required parameters to the health snapshot view.
+    * **Fleet View**: Monitor your fleet of devices across multiple IoT Hubs or IoT Central applications, and drill into specific devices for a health snapshot.
-You can explore the workbooks on your own, or use the following sections to get a preview of the kind of data and visualizations that each workbook offers.
+ * **Device Details**: Visualize device details around messaging, modules, and host components on an IoT Edge device.
-## IoT Edge fleet view workbook
+ * **Alerts**: View triggered [alerts](how-to-create-alerts.md) for devices across multiple IoT resources.
-The fleet view workbook has two views that you can use:
+Use the following sections to get a preview of the kind of data and visualizations that each workbook offers.
-* The **Devices** view shows an overview of active devices.
-* The **Alerts** view shows alerts generated from [pre-configured alert rules](how-to-create-alerts.md).
+>[!NOTE]
+> The screen captures that follow may not reflect the latest workbook design.
-You can switch between the views using the tabs at the top of the workbook.
-
-# [Devices](#tab/devices)
+## Fleet view workbook
:::image type="content" source="./media/how-to-explore-curated-visualizations/how-to-explore-fleet-view.gif" alt-text="The devices section of the fleet view workbook." lightbox="./media/how-to-explore-curated-visualizations/how-to-explore-fleet-view.gif":::
-See the overview of active devices sending metrics in the **Devices** view. This view shows devices associated with the current IoT hub or IoT Central application.
-
-On the right, there's the device list with composite bars showing local and upstream messages sent. You can filter the list by device name and click on the device name link to see its detailed metrics.
-
-On the left, the hive cell visualization shows which devices are healthy or unhealthy. It also shows when the device last sent metrics. Devices that haven't sent metrics for more than 30 minutes are shown in Blue. Click on the device name in the hive cell to see its health snapshot. Only the last 3 measurements from the device are considered when determining health. Using only recent data accounts for temporary spikes in the reported metrics.
+By default, this view shows the health of devices associated with the current IoT cloud resource. You can select multiple IoT resources using the dropdown control on the top left.
-# [Alerts](#tab/alerts)
--
-See the generated alerts from [pre-created alert rules](how-to-create-alerts.md) in the **Alerts** view. This view lets you see alerts from multiple IoT hubs or IoT Central applications.
-
-On the left, there's a list of alert severities with their count. On the right, there's map with total alerts per region.
-
-Click on a severity row to see alerts details. The **Alert rule** link takes you to the alert context and the **Device** link opens the detailed metrics workbook. When opened from this view, the device details workbook is automatically adjusted to the time range around when the alert fired.
+Use the **Settings** tab to adjust the various thresholds to categorize the device as Healthy or Unhealthy.
-
+Click the **Details** button to see the device list with a snapshot of aggregated, primary metrics. Click the link in the **Status** column to view the trend of an individual device's health metrics, or click the device name to view its detailed metrics.
-## IoT Edge device details workbook
+## Device details workbook
-The device details workbook has three views that you can use:
+The device details workbook has three views:
* The **Messaging** view visualizes the message routes for the device and reports on the overall health of the messaging system. * The **Modules** view shows how the individual modules on a device are performing. * The **Host** view shows information about the host device including version information for host components and resource use.
-You can switch between the views using the tabs at the top of the workbook.
+Switch between the views using the tabs at the top of the workbook.
-The device details workbook also integrates with the IoT Edge portal-based troubleshooting experience so that you can view **Live logs** coming in from your device. You can access this experience by selecting the **Troubleshoot \<device name> live** button above the workbook.
+The device details workbook also integrates with the IoT Edge portal-based troubleshooting experience. You can pull **Live logs** from your device using this feature. Access this experience by selecting the **Troubleshoot \<device name> live** button above the workbook.
# [Messaging](#tab/messaging)
The **Messaging** view includes three subsections: routing details, a routing gr
The **Routing** section shows message flow between sending modules and receiving modules. It presents information such as message count, rate, and number of connected clients. Click on a sender or receiver to drill in further. Clicking a sender shows the latency trend chart experienced by the sender and number of messages it sent. Clicking a receiver shows the queue length trend for the receiver and number of messages it received.
-The **Graph** section shows a visual representation of message flow between modules. You can drag and zoom to adjust the graph.
+The **Graph** section shows a visual representation of message flow between modules. Drag and zoom to adjust the graph.
-The **Health** section presents various metrics related to overall health of the messaging subsystem. You can progressively drill-in to details if any errors are noted.
+The **Health** section presents various metrics related to overall health of the messaging subsystem. Progressively drill-in to details if any errors are noted.
# [Modules](#tab/modules)
The **Modules** view presents metrics collected from the edgeAgent module, which
* Module availability * Per-module CPU and memory use * CPU and memory use across all modules
-* Modules restart count and restart timeline.
+* Modules restart count and restart timeline
# [Host](#tab/host)
This workbook integrates directly with the portal-based troubleshooting experien
-## IoT Edge health snapshot workbook
-
-The health snapshot workbook can be accessed from within the fleet view workbook. The fleet view workbook passes in some parameters required to initialize the health snapshot view. Select a device name in the hive cell to see the health snapshot of that device.
-
+## Alerts workbook
-Out of the box, the health snapshot is made up of six signals:
+See the generated alerts from [pre-created alert rules](how-to-create-alerts.md) in the **Alerts** workbook. This view lets you see alerts from multiple IoT Hubs or IoT Central applications.
-* Upstream messages
-* Local messages
-* Queue length
-* Disk usage
-* Host-level CPU utilization
-* Host-level memory utilization
-
-These signals are measured against configurable thresholds to determine if a device is healthy or not. The thresholds can be adjusted or new signals can be added by editing the workbook. See the next section to learn about workbook customizations.
+Click on a severity row to see alert details. The **Alert rule** link takes you to the alert context and the **Device** link opens the detailed metrics workbook. When opened from this view, the device details workbook is automatically adjusted to the time range around when the alert fired.
## Customize workbooks
-[Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md) are very customizable. You can edit the public templates to suit your requirements. All the visualizations are driven by resource-centric [KQL](/azure/data-explorer/kusto/query/) queries on the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table. See the example below that edits the health thresholds.
-
-To begin customizing a workbook, first enter editing mode. Select the **Edit** button in the menu bar of the workbook.
+[Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md) are very customizable. You can edit the public templates to suit your requirements. All the visualizations are driven by resource-centric [Kusto Query Language](/azure/data-explorer/kusto/query/) queries on the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table.
-
-Curated workbooks make extensive use of workbook groups. You may need to click **Edit** on several nested groups before being able to view a visualization query.
+To begin customizing a workbook, first enter editing mode. Select the **Edit** button in the menu bar of the workbook. Curated workbooks make extensive use of workbook groups. You may need to select **Edit** on several nested groups before being able to view a visualization query.
Save your changes as a new workbook. You can [share](../azure-monitor/visualize/workbooks-access-control.md) the saved workbook with your team or [deploy them programmatically](../azure-monitor/visualize/workbooks-automate.md) as part of your organization's resource deployments.
-For example, you may want to change the thresholds for when a device is considered healthy or unhealthy. You could do so by drilling into the fleet view workbook template until you get to the **device-health-graph** query item which includes all the metric thresholds that this workbook compares a device against.
- ## Next steps
iot-edge Tutorial Monitor With Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-monitor-with-workbooks.md
Every IoT Edge device relies on two modules, the *runtime modules*, which manage
Both of the runtime modules create metrics that allow you to remotely monitor how an IoT Edge device or its individual modules are performing. The IoT Edge agent reports on the state of individual modules and the host device, so creates metrics like how long a module has been running correctly, or the amount of RAM and percent of CPU being used on the device. The IoT Edge hub reports on communications on the device, so creates metrics like the total number of messages sent and received, or the time it takes to resolve a direct method. For the full list of available metrics, see [Access built-in metrics](how-to-access-built-in-metrics.md).
-These metrics are exposed automatically by both modules so that you can create your own solutions to access and report on these metrics. To make this process easier, Microsoft provides the [azureiotedge-metrics-collector module](https://hub.docker.com/_/microsoft-azureiotedge-metrics-collector) that handles this process for those who don't have or want a custom solution. The metrics collector module collects metrics from the two runtime modules and any other modules you may want to monitor, and transports them off-device.
+These metrics are exposed automatically by both modules so that you can create your own solutions to access and report on these metrics. To make this process easier, Microsoft provides the [azureiotedge-metrics-collector module](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft_iot_edge.metrics-collector?tab=overview) that handles this process for those who don't have or want a custom solution. The metrics collector module collects metrics from the two runtime modules and any other modules you may want to monitor, and transports them off-device.
The metrics collector module works one of two ways to send your metrics to the cloud. The first option, which we'll use in this tutorial, is to send the metrics directly to Log Analytics. The second option, which is only recommended if your networking policies require it, is to send the metrics through IoT Hub and then set up a route to pass the metric messages to Log Analytics. Either way, once the metrics are in your Log Analytics workspace, they are available to view through Azure Monitor workbooks.
It may take up to fifteen minutes for your device monitoring workbooks to be rea
Azure Monitor provides three default workbook templates for IoT:
-* The **IoT Edge fleet view** workbook shows an overview of active devices so that you can identify any unhealthy devices and drill down into how each device is performing. This workbook also shows alerts generated from any alert rules that you may create.
-* The **IoT Edge device details** workbook provides visualizations around three categories: messaging, modules, and host. The messaging view visualizes the message routes for a device and reports on the overall health of the messaging system. The modules view shows how the individual modules on a device are performing. The host view shows information about the host device including version information for host components and resource use.
-* The **IoT Edge health snapshot** workbook measures device signals against configurable thresholds to determine if a device is health or not. This workbook can only be accessed from within the fleet view workbook, which passes the parameters required to initialize the health snapshot of a particular device.
+* The **Fleet View** workbook shows the health of devices across multiple IoT resources. The view lets you configure thresholds for determining device health and presents per-device aggregations of primary metrics.
+* The **Device Details** workbook provides visualizations around three categories: messaging, modules, and host. The messaging view visualizes the message routes for a device and reports on the overall health of the messaging system. The modules view shows how the individual modules on a device are performing. The host view shows information about the host device including version information for host components and resource use.
+* The **Alerts** workbook presents alerts for devices across multiple IoT resources.
### Explore the fleet view and health snapshot workbooks
The fleet view workbook shows all of your devices, and lets you select specific
:::image type="content" source="./media/tutorial-monitor-with-workbooks/workbooks-gallery.png" alt-text="Select workbooks to open the Azure Monitor workbooks gallery.":::
-1. Select the **IoT Edge fleet view** workbook.
+1. Select the **Fleet View** workbook.
1. You should see your device that's running the metrics collector module. The device is listed as either **healthy** or **unhealthy**.
-1. Select the device name to open the **IoT Edge health snapshot** and view specific details about the device health.
+1. Select the device name to view detailed metrics from the device.
1. On any of the time charts, use the arrow icons under the X-axis or click on the chart and drag your cursor to change the time range.
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/create-update-group.md
You can update the Device Twin with the appropriate Tag using RegistryManager af
#### Device Update Tag Format
-```markdown
+```json
"tags": { "ADUGroup": "<CustomTagValue>" }
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
In this tutorial you will learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-* Access to an IoT Hub. It is recommended that you use a S1 (Standard) tier or above.
+* Access to an IoT Hub. It is recommended that you use an S1 (Standard) tier or higher.
* A Device Update instance and account linked to your IoT Hub. Follow the guide to [create and link](./create-device-update-account.md) a device update account if you have not done so previously. ## Get started
If you don't have an Azure subscription, create a [free account](https://azure
Each board-specific sample Azure RTOS project contains code and documentation on how to use Device Update for IoT Hub on it. 1. Download the board-specific sample files from [Azure RTOS and Device Update samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU). 2. Find the docs folder from the downloaded sample.
-3. From the docs follow the steps for how to prepare Azure Resources, Account, and register IoT devices to it.
+3. From the docs, follow the steps to prepare the Azure resources and account, and to register your IoT devices.
5. Next follow the docs to build a new firmware image and import manifest for your board. 6. Next publish firmware image and manifest to Device Update for IoT Hub. 7. Finally download and run the project on your device.
Learn more about [Azure RTOS](/azure/rtos/).
2. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub. 3. From 'IoT Devices' on the left navigation pane, find your IoT device and navigate to the Device Twin. 4. In the Device Twin, delete any existing Device Update tag value by setting them to null.
-5. Add a new Device Update tag value as shown below.
+5. Add a new Device Update tag value to the root JSON object as shown below.
```JSON "tags": {
Learn more about [Azure RTOS](/azure/rtos/).
## Create update group 1. Go to the IoT Hub you previously connected to your Device Update instance.
-2. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+2. Select the Updates option under Device management from the left-hand navigation bar.
3. Select the Groups tab at the top of the page. 4. Select the Add button to create a new group. 5. Select the IoT Hub tag you created in the previous step from the list. Select Create update group.
logic-apps Create Replication Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-replication-tasks-azure-resources.md
ms.suite: integration Previously updated : 11/09/2021 Last updated : 01/29/2022
Currently, replication task templates are available for [Azure Event Hubs](../ev
| Resource type | Replication source and target | ||-| | Azure Event Hubs namespace | - Event Hubs instance to Event Hubs instance <br>- Event Hubs instance to Service Bus queue <br>- Event Hubs instance to Service Bus topic |
-| Azure Service Bus namespace | - Service Bus queue to Service Bus queue <br>- Service Bus queue to Service Bus topic <br>- Service Bus topic to Service Bus topic <br>- Service Bus queue to Event Hubs instance <br>- Service Bus topic to Service Bus queue <br>- Service Bus topic to Event Hubs instance |
+| Azure Service Bus namespace | - Service Bus queue to Service Bus queue <br>- Service Bus queue to Service Bus topic <br>- Service Bus topic to Service Bus topic <br>- Service Bus queue to Event Hubs instance <br>- Service Bus topic to Service Bus queue <br>- Service Bus topic to Event Hubs instance <p><p>**Important**: When a queue is the source, a replication task doesn't copy messages but *moves* them from the source to the target and deletes them from the source. <p><p>To mirror messages instead, use a topic as your source where the "main" subscription acts like a queue endpoint. That way, the target gets a copy of each message from the source. <p><p>To route messages across different regions, you can create a queue where messages are sent from an app. The replication task transfers messages from that queue to a target queue in a namespace that's in another region. You can also use a topic subscription as the entity that acts as the transfer queue. For more information, review [Replication topology for ServiceBusCopy](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopy#replication-topology).|
||| ### Replication topology and workflow
For information about replication and federation in Azure Service Bus, review th
## Metadata and property mappings
-For Event Hubs, the following items obtained from the source Event Hubs namespace are replaced by new service-assigned values in the target Event Hubs namespace: service-assigned metadata of an event, original enqueue time, sequence number, and offset. However, for [helper functions](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/src/Azure.Messaging.Replication) and the replication tasks in the Azure-provided samples, the original values are preserved in the user properties: `repl-enqueue-time` (ISO8601 string), `repl-sequence`, and `repl-offset`. These properties have the `string` type and contain the stringified value of the respective original properties. If the event is forwarded multiple times, the service-assigned metadata of the immediate source is appended to the already existing properties, with values separated by semicolons. For more information, review [Service-assigned metadata - Event replication task patterns](../event-hubs/event-hubs-federation-patterns.md#service-assigned-metadata).
+For Event Hubs, the following items obtained from the source Event Hubs namespace are replaced by new service-assigned values in the target Event Hubs namespace: service-assigned metadata of an event, original enqueue time, sequence number, and offset. However, for [helper functions](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/src/Azure.Messaging.Replication) and the replication tasks in the Azure-provided samples, the original values are preserved in the user properties: `repl-enqueue-time` (ISO8601 string), `repl-sequence`, and `repl-offset`. These properties have the `string` type and contain the stringified value of the respective original properties. If the event is forwarded multiple times, the service-assigned metadata of the immediate source is appended to any existing properties, with values separated by semicolons. For more information, review [Service-assigned metadata - Event replication task patterns](../event-hubs/event-hubs-federation-patterns.md#service-assigned-metadata).
-For Service Bus, the following items obtained from the source Service Bus queue or topic are replaced by new service-assigned values in the target Service Bus queue or topic: service-assigned metadata of a message, original enqueue time, and sequence number. However, for the default replication tasks in the Azure-provided samples, the original values are preserved in the user properties: `repl-enqueue-time` (ISO8601 string) and `repl-sequence`. These properties have the `string` type and contain the stringified value of the respective original properties. If the message is forwarded multiple times, the service-assigned metadata of the immediate source is appended to the already existing properties, with values separated by semicolons. For more information, review [Service-assigned metadata - Message replication task patterns](../service-bus-messaging/service-bus-federation-patterns.md#service-assigned-metadata).
+For Service Bus, the following items obtained from the source Service Bus queue or topic are replaced by new service-assigned values in the target Service Bus queue or topic: service-assigned metadata of a message, original enqueue time, and sequence number. However, for the default replication tasks in the Azure-provided samples, the original values are preserved in the user properties: `repl-enqueue-time` (ISO8601 string) and `repl-sequence`. These properties have the `string` type and contain the stringified value of the respective original properties. If the message is forwarded multiple times, the service-assigned metadata of the immediate source is appended to any existing properties, with values separated by semicolons. For more information, review [Service-assigned metadata - Message replication task patterns](../service-bus-messaging/service-bus-federation-patterns.md#service-assigned-metadata).
When a task replicates from Service Bus to Event Hubs, the task maps only the `User Properties` property to the `Properties` property. However, when the task replicates from Event Hubs to Service Bus, the task maps the following properties:
This example shows how to create a replication task for Service Bus queues.
1. On the **Authenticate** tab, in the **Connections** section, select **Create** for every connection that appears in the task so that you can provide authentication credentials for all the connections. The types of connections in each task vary based on the task.
- This example shows the prompt to create the connection to the target Service Bus namespace where the target queue exists. The connection already exists for the source Service Bus namespace.
+ This example shows the prompt to create the connection to the target Service Bus namespace where the target queue exists. The connection exists for the source Service Bus namespace.
![Screenshot showing selected "Create" option for the connection to the target Service Bus namespace.](./media/create-replication-tasks-azure-resources/create-authenticate-connections.png)
To make sure that the storage account doesn't contain any legacy information fro
1. Now delete the folder that contains the source entity's checkpoint and offset information by using the following steps:
- 1. Download, install, and open the latest [Azure Storage Explorer desktop client](https://azure.microsoft.com/features/storage-explorer/), if you don't already have the most recent version.
+ 1. Download, install, and open the latest [Azure Storage Explorer desktop client](https://azure.microsoft.com/features/storage-explorer/), if you don't have the most recent version.
> [!NOTE] > For the delete cleanup task, you currently have to use the Azure Storage Explorer client,
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-component.md
az ml component create --file my_component.yml --version 1 --resource-group my-r
Use `az ml component create --help`for more information on the `create` command.
-### Create a component in the studio UI
-
-You can create a component in **Components** page in the studio UI.
-
-1. Click **New Component** in the component page.
-
- :::image type="content" source="./media/concept-component/ui-create-component.png" lightbox="./media/concept-component/ui-create-component.png" alt-text="Screenshot showing new component button.":::
-
-1. Follow the wizard to finish the creation process.
- ## Use components to build ML pipelines You can use the Azure CLI (v2) to create a pipeline job. See [Create and run ML pipelines (CLI)](how-to-create-component-pipelines-cli.md).
You can check component details and manage the component using CLI (v2). Use `az
You can use `az ml component list` to list all components in a workspace.
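
For example, a minimal call might look like this; the resource group and workspace names are placeholders:

```bash
# List all components registered in a workspace (placeholder names).
az ml component list --resource-group my-resource-group --workspace-name my-workspace --output table
```
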
-You can see all created components in your workspace in the **Components** page in the studio UI.
- ### Show details for a component You can use `az ml component show --name <COMPONENT_NAME>` to show the details of a component.
-You can also check component details in the **Components** page in the studio UI.
-- ### Upgrade a component You can use `az ml component create --file <NEW_VERSION.yaml>` to upgrade a component.
-You can also click **Upgrade** in the component detail page to upgrade a new version for the component.
- ### Delete a component You can use `az ml component delete --name <COMPONENT_NAME>` to delete a component.
-You can also select a component and archive it.
-- ## Next steps - [Component YAML reference](reference-yaml-component-command.md)
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Once the list of FQDNs and corresponding IP addresses are gathered, proceed to c
This architecture uses the common Hub and Spoke virtual network topology. One virtual network contains the DNS server and one contains the private endpoint to the Azure Machine Learning workspace and associated resources. There must be a valid route between both virtual networks. For example, through a series of peered virtual networks. The following steps describe how this topology works:
If you cannot access the workspace from a virtual machine or jobs fail on comput
This architecture uses the common Hub and Spoke virtual network topology. ExpressRoute is used to connect from your on-premises network to the Hub virtual network. The Custom DNS server is hosted on-premises. A separate virtual network contains the private endpoint to the Azure Machine Learning workspace and associated resources. With this topology, there needs to be another virtual network hosting a DNS server that can send requests to the Azure DNS Virtual Server IP address. The following steps describe how this topology works:
This article is part of a series on securing an Azure Machine Learning workflow.
For information on integrating Private Endpoints into your DNS configuration, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
-For information on deploying models with a custom DNS name or TLS security, see [Secure web services using TLS](how-to-secure-web-service.md).
+For information on deploying models with a custom DNS name or TLS security, see [Secure web services using TLS](how-to-secure-web-service.md).
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
Previously updated : 10/22/2021 Last updated : 01/27/2022 # Create and manage a self-hosted integration runtime
-This article describes how to create and manage a self-hosted integration runtime (SHIR) that let's you scan data sources in Azure Purview.
+The integration runtime (IR) is the compute infrastructure that Azure Purview uses to power data scans across different network environments.
+
+A self-hosted integration runtime (SHIR) can be used to scan data sources in an on-premises network or a virtual network. Installing a self-hosted integration runtime requires an on-premises machine or a virtual machine inside a private network.
+
+This article describes how to create and manage a self-hosted integration runtime.
> [!NOTE] > The Azure Purview Integration Runtime cannot be shared with an Azure Synapse Analytics or Azure Data Factory Integration Runtime on the same machine. It needs to be installed on a separate machine.
This article describes how to create and manage a self-hosted integration runtim
- The supported versions of Windows are: - Windows 8.1 - Windows 10
+ - Windows 11
- Windows Server 2012 - Windows Server 2012 R2 - Windows Server 2016 - Windows Server 2019
+ - Windows Server 2022
Installation of the self-hosted integration runtime on a domain controller isn't supported.
Installation of the self-hosted integration runtime on a domain controller isn't
- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717). - If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message. - You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime.-- Scan runs happen with a specific frequency per the schedule you've set up. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is scanned. When multiple scan jobs are in progress, you see resource usage go up during peak times.-- Tasks might fail during extraction of data in Parquet, ORC, or Avro formats.
+- Scan runs happen with a specific frequency per the schedule you've set up. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is scanned. When multiple scan jobs are in progress, you'll see resource usage go up during peak times.
+- Scanning some data sources requires additional setup on the self-hosted integration runtime machine, for example, the JDK, the Visual C++ Redistributable, or a specific driver. Refer to [each source article](purview-connector-overview.md) for prerequisite details.
> [!IMPORTANT]
-> If you will use the Self-Hosted Integration runtime to scan Parquet files, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](#java-runtime-environment-installation) for an installation guide.
+> If you use the Self-Hosted Integration runtime to scan Parquet files, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](#java-runtime-environment-installation) for an installation guide.
+
+### Considerations for using a self-hosted IR
+
+- You can use a single self-hosted integration runtime for scanning multiple data sources.
+- You can install only one instance of self-hosted integration runtime on any single machine. If you have two Azure Purview accounts that need to scan on-premises data sources, install the self-hosted IR on two machines, one for each Azure Purview account.
+- The self-hosted integration runtime doesn't need to be on the same machine as the data source, unless specifically called out as a prerequisite in the respective source article. Having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source.
## Setting up a self-hosted integration runtime To create and set up a self-hosted integration runtime, use the following procedures.
-## Create a self-hosted integration runtime
+### Create a self-hosted integration runtime
1. On the home page of the [Azure Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
To create and set up a self-hosted integration runtime, use the following proced
:::image type="content" source="media/manage-integration-runtimes/select-integration-runtimes.png" alt-text="Select on IR.":::
-3. On the **Integration runtime setup** page, select **Self-Hosted** to create a Self-Hosted IR, and then select **Continue**.
+3. On the **Integration runtime setup** page, select **Self-Hosted** to create a self-hosted IR, and then select **Continue**.
:::image type="content" source="media/manage-integration-runtimes/select-self-hosted-ir.png" alt-text="Create new SHIR.":::
To create and set up a self-hosted integration runtime, use the following proced
:::image type="content" source="media/manage-integration-runtimes/successfully-registered.png" alt-text="successfully registered.":::
-### Configure proxy server settings
-
-If you select the **Use system proxy** option for the HTTP proxy, the self-hosted integration runtime uses the proxy settings in diahost.exe.config and diawp.exe.config. When these files specify no proxy, the self-hosted integration runtime connects to the cloud service directly without going through a proxy. The following procedure provides instructions for updating the diahost.exe.config file:
-
-1. In File Explorer, make a safe copy of C:\Program Files\Microsoft Integration Runtime\5.0\Shared\diahost.exe.config as a backup of the original file.
-1. Open Notepad running as administrator.
-1. In Notepad, open the text file C:\Program Files\Microsoft Integration Runtime\5.0\Shared\diahost.exe.config.
-1. Find the default **system.net** tag as shown in the following code:
+## Manage a self-hosted integration runtime
- ```xml
- <system.net>
- <defaultProxy useDefaultCredentials="true" />
- </system.net>
- ```
+You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the **Management center**, selecting the IR, and then selecting **Edit**. You can now update the description, copy the key, or regenerate new keys.
- You can then add proxy server details as shown in the following example and include Azure Purview, data sources and other relevant services endpoints in the bypass list:
- ```xml
- <system.net>
- <defaultProxy>
- <bypasslist>
- <add address="scaneastus2test.blob.core.windows.net" />
- <add address="scaneastus2test.queue.core.windows.net" />
- <add address="Atlas-abcd1234-1234-abcd-abcd-1234567890ab.servicebus.windows.net" />
- <add address="contosopurview1.purview.azure.com" />
- <add address="contososqlsrv1.database.windows.net" />
- <add address="contosoadls1.dfs.core.windows.net" />
- <add address="contosoakv1.vault.azure.net" />
- <add address="contosoblob11.blob.core.windows.net" />
- </bypasslist>
- <proxy proxyaddress="http://10.1.0.1:3128" bypassonlocal="True" />
- </defaultProxy>
- </system.net>
- ```
- The proxy tag allows additional properties to specify required settings like `scriptLocation`. See [\<proxy\> Element (Network Settings)](/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings) for syntax.
- ```xml
- <proxy autoDetect="true|false|unspecified" bypassonlocal="true|false|unspecified" proxyaddress="uriString" scriptLocation="uriString" usesystemdefault="true|false|unspecified "/>
- ```
+You can delete a self-hosted integration runtime by navigating to **Integration runtimes** in the Management center, selecting the IR and then selecting **Delete**. Once an IR is deleted, any ongoing scans relying on it will fail.
-1. Save the configuration file in its original location. Then restart the self-hosted integration runtime host service, which picks up the changes.
+## Service account for self-hosted integration runtime
- To restart the service, use the services applet from Control Panel. Or from Integration Runtime Configuration Manager, select the **Stop Service** button, and then select **Start Service**.
+The default logon service account of the self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
- If the service doesn't start, you likely added incorrect XML tag syntax in the application configuration file that you edited.
-> [!IMPORTANT]
-> Don't forget to update both diahost.exe.config and diawp.exe.config.
+Make sure the account has the **Log on as a service** permission. Otherwise, the self-hosted integration runtime can't start successfully. You can check the permission in **Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Log on as a service**.
-You also need to make sure that Microsoft Azure is in your company's allowlist. You can download the list of valid Azure IP addresses. IP Ranges for each cloud, broken down by region and by the tagged services in that cloud are now available on MS Download:
- - Public: https://www.microsoft.com/download/details.aspx?id=56519
-### Possible symptoms for issues related to the firewall and proxy server
-If you see error messages like the following ones, the likely reason is improper configuration of the firewall or proxy server. Such configuration prevents the self-hosted integration runtime from connecting to Azure managed storage accounts or data sources. To ensure that your firewall and proxy server are properly configured, refer to the previous section.
+## Notification area icons and notifications
-- When you try to register the self-hosted integration runtime, you receive the following error message: "Failed to register this Integration Runtime node! Confirm that the Authentication key is valid and the integration service host service is running on this machine."-- When you open Integration Runtime Configuration Manager, you see a status of **Disconnected** or **Connecting**. When you view Windows event logs, under **Event Viewer** > **Application and Services Logs** > **Microsoft Integration Runtime**, you see error messages like this one:
+If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime.
- ```output
- Unable to connect to the remote server
- A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted).
- ```
## Networking requirements
-Your self-hosted integration runtime machine will need to connect to several resources to work correctly:
+Your self-hosted integration runtime machine needs to connect to several resources to work correctly:
-* The sources you want to scan using the self-hosted integration runtime.
-* Any Azure Key Vault used to store credentials for the Azure Purview resource.
-* The managed Storage account and Event Hub resources created by Azure Purview.
+* The Azure Purview services used to manage the self-hosted integration runtime.
+* The data sources you want to scan using the self-hosted integration runtime.
+* The managed Storage account and Event Hubs resource created by Azure Purview. Azure Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime needs to be able to connect to these resources.
+* The Azure Key Vault used to store credentials.
-The managed Storage and Event Hub resources can be found in your subscription under a resource group containing the name of your Azure Purview resource. Azure Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime will need to be able to connect directly with these resources.
+There are two firewalls to consider:
-Here are the domains and ports that will need to be allowed through corporate and machine firewalls.
+- The *corporate firewall* that runs on the central router of the organization
+- The *Windows firewall* that is configured as a daemon on the local machine where the self-hosted integration runtime is installed
-> [!NOTE]
-> For domains listed with '\<managed Azure Purview storage account>', you will add the name of the managed storage account associated with your Azure Purview resource. You can find this resource in the Portal. Search your Resource Groups for a group named: managed-rg-\<your Azure Purview Resource name>. For example: managed-rg-contosoPurview. You will use the name of the storage account in this resource group.
->
-> For domains listed with '\<managed Event Hub resource>', you will add the name of the managed Event Hub associated with your Azure Purview resource. You can find this in the same Resource Group as the managed storage account.
+Here are the domains and outbound ports that you need to allow at both **corporate and Windows/machine firewalls**.
+
+> [!TIP]
+> For domains listed with '\<managed_storage_account>' and '\<managed_Event_Hub_resource>', add the name of the managed resources associated with your Azure Purview account. You can find them from Azure portal -> your Azure Purview account -> Managed resources tab.
| Domain names | Outbound ports | Description | | -- | -- | - |
-| `*.servicebus.windows.net` | 443 | Global infrastructure Azure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
-| `<managed Event Hub resource>.servicebus.windows.net` | 443 | Azure Purview uses this to connect with the associated service bus. It will be covered by allowing the above domain, but if you are using Private Endpoints, you will need to test access to this single domain.|
-| `*.frontend.clouddatahub.net` | 443 | Global infrastructure Azure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
-| `<managed Azure Purview storage account>.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the managed Azure storage account.|
-| `<managed Azure Purview storage account>.queue.core.windows.net` | 443 | Queues used by purview to run the scan process. |
-| `*.login.windows.net` | 443 | Sign in to Azure Active Directory.|
-| `*.login.microsoftonline.com` | 443 | Sign in to Azure Active Directory. |
-| `download.microsoft.com` | 443 | Optional for SHIR updates. |
+| `*.servicebus.windows.net` | 443 | Required for interactive authoring, for example, testing a connection in Azure Purview Studio. Currently a wildcard is required because there is no dedicated resource. |
+| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Azure Purview service. Currently a wildcard is required because there is no dedicated resource. |
+| `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Blob storage account. |
+| `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Queue storage account. |
+| `<managed_Event_Hub_resource>.servicebus.windows.net` | 443 | Azure Purview uses this domain to connect to the associated Service Bus namespace. It's covered by allowing the domain above. If you use a private endpoint, you need to test access to this single domain. |
+| `download.microsoft.com` | 443 | Required to download the self-hosted integration runtime updates. If you have disabled auto-update, you can skip configuring this domain. |
+| `login.windows.net`<br>`login.microsoftonline.com` | 443 | Required to sign in to the Azure Active Directory. |
-Based on your sources, you may also need to allow the domains of other Azure or external sources. A few examples are provided below, as well as the Azure Key Vault domain, if you are connecting to any credentials stored in the Key Vault.
+Depending on the sources you want to scan, you also need to allow additional domains and outbound ports for other Azure or external sources. A few examples are provided here:
| Domain names | Outbound ports | Description | | -- | -- | - |
-| `<storage account>.core.windows.net` | 443 | Optional, to connect to an Azure Storage account. |
-| `*.database.windows.net` | 1433 | Optional, to connect to Azure SQL Database or Azure Synapse Analytics. |
-| `*.azuredatalakestore.net`<br>`login.microsoftonline.com/<tenant>/oauth2/token` | 443 | Optional, to connect to Azure Data Lake Store Gen 1. |
-| `<datastoragename>.dfs.core.windows.net` | 443 | Optional, to connect to Azure Data Lake Store Gen 2. |
-| `<your Key Vault Name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
-| Various Domains | Dependant | Domains for any other sources the SHIR will connect to. |
-
-> [!IMPORTANT]
-> In most environments, you will also need to confirm that your DNS is correctly configured. To confirm you can use **nslookup** from your SHIR machine to check connectivity to each of the above domains. Each nslookup should return the IP of the resource. If you are using [Private Endpoints](catalog-private-link.md), the private IP should be returned and not the Public IP. If no IP is returned, or if when using Private Endpoints the public IP is returned, you will need to address your DNS/VNET association, or your Private Endpoint/VNET peering.
-
-## Manage a self-hosted integration runtime
-
-You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the **Management center**, selecting the IR and then selecting edit. You can now update the description, copy the key, or regenerate new keys.
--
+| `<your_key_vault_name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
+| `<your_storage_account>.dfs.core.windows.net` | 443 | When scanning Azure Data Lake Store Gen 2. |
+| `<your_storage_account>.blob.core.windows.net` | 443 | When scanning Azure Blob storage. |
+| `<your_sql_server>.database.windows.net` | 1433 | When scanning Azure SQL Database. |
+| `<your_ADLS_account>.azuredatalakestore.net` | 443 | When scanning Azure Data Lake Store Gen 1. |
+| Various domains | Dependent | Domains and ports for any other sources the SHIR will scan. |
-You can delete a self-hosted integration runtime by navigating to **Integration runtimes** in the Management center, selecting the IR and then selecting **Delete**. Once an IR is deleted, any ongoing scans relying on it will fail.
-
-## Java Runtime Environment Installation
-
-If you will be scanning Parquet files using the Self-Hosted Integration runtime with Azure Purview, you will need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
+For some cloud data stores such as Azure SQL Database and Azure Storage, you need to allow the IP address of the self-hosted integration runtime machine in their firewall configuration.
-When scanning Parquet files using the Self-hosted IR, the service locates the Java runtime by firstly checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for JRE, if not found, secondly checking system variable *`JAVA_HOME`* for OpenJDK.
--- **To use JRE**: The 64-bit IR requires 64-bit JRE. You can find it from [here](https://go.microsoft.com/fwlink/?LinkId=808605).-- **To use OpenJDK**: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK into Self-hosted IR machine, and set system environment variable JAVA_HOME accordingly.
+> [!IMPORTANT]
+> In most environments, you will also need to make sure that your DNS is correctly configured. To confirm, you can use **nslookup** from your SHIR machine to check connectivity to each of the domains. Each nslookup should return the IP of the resource. If you are using [Private Endpoints](catalog-private-link.md), the private IP should be returned and not the Public IP. If no IP is returned, or if when using Private Endpoints the public IP is returned, you need to address your DNS/VNet association, or your Private Endpoint/VNet peering.
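+
+For example, from the SHIR machine you could spot-check a few of the endpoints listed earlier. The resource names here are hypothetical; substitute the managed resources shown on your Azure Purview account's **Managed resources** tab:
+
+```bash
+# Each lookup should return an IP address (a private IP if you use private endpoints).
+nslookup contosopurviewsa.blob.core.windows.net
+nslookup contosopurviewsa.queue.core.windows.net
+nslookup contosopurvieweh.servicebus.windows.net
+```
+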
## Proxy server considerations
If your corporate network environment uses a proxy server to access the internet
:::image type="content" source="media/manage-integration-runtimes/self-hosted-proxy.png" alt-text="Specify the proxy":::
-When configured, the self-hosted integration runtime uses the proxy server to connect to the cloud service's source and destination (which use the HTTP or HTTPS protocol). This is why you select **Change link** during initial setup.
+When configured, the self-hosted integration runtime uses the proxy server to connect to the services that use the HTTP or HTTPS protocol. This is why you select **Change link** during initial setup.
:::image type="content" source="media/manage-integration-runtimes/set-http-proxy.png" alt-text="Set the proxy":::
-There are three configuration options:
+Azure Purview supports two configuration options:
- **Do not use proxy**: The self-hosted integration runtime doesn't explicitly use any proxy to connect to cloud services.-- **Use system proxy**: The self-hosted integration runtime uses the proxy setting that is configured in diahost.exe.config and diawp.exe.config. If these files specify no proxy configuration, the self-hosted integration runtime connects to the cloud service directly without going through a proxy.-- **Use custom proxy**: Configure the HTTP proxy setting to use for the self-hosted integration runtime, instead of using configurations in diahost.exe.config and diawp.exe.config. **Address** and **Port** values are required. **User Name** and **Password** values are optional, depending on your proxy's authentication setting. All settings are encrypted with Windows DPAPI on the self-hosted integration runtime and stored locally on the machine.
+- **Use system proxy**: The self-hosted integration runtime uses the proxy setting that is configured in the executable's configuration files. If no proxy is specified in these files, the self-hosted integration runtime connects to the services directly without going through a proxy.
> [!IMPORTANT]
-> Currently, **custom proxy** is not supported in Azure Purview.
+>
+> Currently, **custom proxy** is not supported in Azure Purview. In addition, system proxy is supported when scanning Azure data sources and SQL Server; scanning other sources through a proxy isn't supported.
The integration runtime host service restarts automatically after you save the updated proxy settings. After you register the self-hosted integration runtime, if you want to view or update proxy settings, use Microsoft Integration Runtime Configuration Manager.

1. Open **Microsoft Integration Runtime Configuration Manager**.
+1. Select the **Settings** tab.
3. Under **HTTP Proxy**, select the **Change** link to open the **Set HTTP Proxy** dialog box.
4. Select **Next**. You then see a warning that asks for your permission to save the proxy setting and restart the integration runtime host service.
-You can use the configuration manager tool to view and update the HTTP proxy.
> [!NOTE]
> If you set up a proxy server with NTLM authentication, the integration runtime host service runs under the domain account. If you later change the password for the domain account, remember to update the configuration settings for the service and restart the service. Because of this requirement, we suggest that you access the proxy server by using a dedicated domain account that doesn't require you to update the password frequently.
-If using system proxy, configure the outbound [network rules](#networking-requirements) from self-hosted integration runtime virtual machine to required endpoints.
+If you use the system proxy, make sure your proxy server allows outbound traffic to the endpoints listed in the [network rules](#networking-requirements).
+
+### Configure proxy server settings
+
+If you select the **Use system proxy** option for the HTTP proxy, the self-hosted integration runtime uses the proxy settings in the following four files under the path C:\Program Files\Microsoft Integration Runtime\5.0\ to perform different operations:
+
+- .\Shared\diahost.exe.config
+- .\Shared\diawp.exe.config
+- .\Gateway\DataScan\Microsoft.DataMap.Agent.exe.config
+- .\Gateway\DataScan\DataTransfer\Microsoft.DataMap.Agent.Connectors.Azure.DataFactory.ServiceHost.exe.config
+
+When no proxy is specified in these files, the self-hosted integration runtime connects to the services directly without going through a proxy.
+
+The following procedure provides instructions for updating the **diahost.exe.config** file.
+
+1. In File Explorer, make a safe copy of C:\Program Files\Microsoft Integration Runtime\5.0\Shared\diahost.exe.config as a backup of the original file.
+
+1. Open Notepad running as administrator.
+
+1. In Notepad, open the text file C:\Program Files\Microsoft Integration Runtime\5.0\Shared\diahost.exe.config.
+
+1. Find the default **system.net** tag as shown in the following code:
+
+ ```xml
+ <system.net>
+ <defaultProxy useDefaultCredentials="true" />
+ </system.net>
+ ```
+
+ You can then add proxy server details as shown in the following example:
+
+ ```xml
+ <system.net>
+ <defaultProxy>
+ <proxy bypassonlocal="true" proxyaddress="<your proxy server e.g. http://proxy.domain.org:8888/>" />
+ </defaultProxy>
+ </system.net>
+ ```
+ The proxy tag allows additional properties to specify required settings like `scriptLocation`. See [\<proxy\> Element (Network Settings)](/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings) for syntax.
+
+ ```xml
+ <proxy autoDetect="true|false|unspecified" bypassonlocal="true|false|unspecified" proxyaddress="uriString" scriptLocation="uriString" usesystemdefault="true|false|unspecified "/>
+ ```
+
+1. Save the configuration file in its original location.
++
+Repeat the same procedure to update **diawp.exe.config** and **Microsoft.DataMap.Agent.exe.config** files.
+
+Then go to path C:\Program Files\Microsoft Integration Runtime\5.0\Gateway\DataScan\DataTransfer, create a file named "**Microsoft.DataMap.Agent.Connectors.Azure.DataFactory.ServiceHost.exe.config**", and configure the proxy setting as follows. You can also extend the settings as described above.
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+ <system.net>
+ <defaultProxy>
+ <proxy bypassonlocal="true" proxyaddress="<your proxy server e.g. http://proxy.domain.org:8888/>" />
+ </defaultProxy>
+ </system.net>
+</configuration>
+```
+
+Restart the self-hosted integration runtime host service, which picks up the changes. To restart the service, use the services applet from Control Panel. Or from Integration Runtime Configuration Manager, select the **Stop Service** button, and then select **Start Service**. If the service doesn't start, you likely added incorrect XML tag syntax in the application configuration file that you edited.
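
If you prefer to restart the service from an elevated PowerShell prompt instead of the services applet, a command along the following lines can be used. The service name shown here (*DIAHostService*) is the name commonly used by the self-hosted integration runtime; confirm it in the Services applet for your installation before relying on it:

```
Restart-Service -Name DIAHostService
```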
+
+> [!IMPORTANT]
+> Don't forget to update all four files mentioned above.
+
+You also need to make sure that Microsoft Azure is in your company's allowlist. You can download the list of valid Azure IP addresses. IP ranges for each cloud, broken down by region and by the tagged services in that cloud, are available on the Microsoft Download Center:
+ - Public: https://www.microsoft.com/download/details.aspx?id=56519
+
+### Possible symptoms for issues related to the firewall and proxy server
+
+If you see error messages like the following ones, the likely reason is improper configuration of the firewall or proxy server. Such configuration prevents the self-hosted integration runtime from connecting to Azure Purview services. To ensure that your firewall and proxy server are properly configured, refer to the previous section.
+
+- When you try to register the self-hosted integration runtime, you receive the following error message: "Failed to register this Integration Runtime node! Confirm that the Authentication key is valid and the integration service host service is running on this machine."
+- When you open Integration Runtime Configuration Manager, you see a status of **Disconnected** or **Connecting**. When you view Windows event logs, under **Event Viewer** > **Application and Services Logs** > **Microsoft Integration Runtime**, you see error messages like this one:
+
+ ```output
+ Unable to connect to the remote server
+ A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted)
+ ```
-## Installation best practices
+## Java Runtime Environment Installation
-You can install the self-hosted integration runtime by downloading a Managed Identity setup package from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717).
+If you scan Parquet files using the self-hosted integration runtime with Azure Purview, you will need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
-- Configure a power plan on the host machine for the self-hosted integration runtime so that the machine doesn't hibernate. If the host machine hibernates, the self-hosted integration runtime goes offline.-- Regularly back up the credentials associated with the self-hosted integration runtime.
+When scanning Parquet files using the self-hosted IR, the service locates the Java runtime by first checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for JRE. If it's not found, the service then checks the system variable *`JAVA_HOME`* for OpenJDK.
+
+- **To use JRE**: The 64-bit IR requires a 64-bit JRE. You can download it from [here](https://go.microsoft.com/fwlink/?LinkId=808605).
+- **To use OpenJDK**: It's supported since IR version 3.13. Package jvm.dll with all other required assemblies of OpenJDK on the self-hosted IR machine, and set the JAVA_HOME system environment variable accordingly (see the example below).
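
As a minimal sketch only (the installation path is a placeholder and depends on which OpenJDK build you install), the system variable can be set from an elevated command prompt like this, after which the self-hosted integration runtime service should be restarted so it picks up the change:

```
setx /M JAVA_HOME "C:\Program Files\OpenJDK\jdk-11.0.13"
```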
## Next steps

-- [How scans detect deleted assets](concept-scans-and-ingestion.md#how-scans-detect-deleted-assets)
+- [Azure Purview network architecture and best practices](concept-best-practices-network.md)
- [Use private endpoints with Azure Purview](catalog-private-link.md)+
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-snowflake.md
Previously updated : 01/20/2022 Last updated : 01/30/2022
When scanning Snowflake source, Azure Purview supports:
- Fetching static lineage on assets relationships among tables, views, and streams.
-When setting up scan, you can choose to scan an entire Snowflake database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+When setting up a scan, you can choose to scan one or more Snowflake databases entirely, or further scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
## Prerequisites
When setting up scan, you can choose to scan an entire Snowflake database, or sc
* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7971.2.
When setting up scan, you can choose to scan an entire Snowflake database, or sc
Azure Purview supports basic authentication (username and password) for scanning Snowflake. The default role of the given user will be used to perform the scan. The Snowflake user must have usage rights on a warehouse and the database(s) to be scanned, and read access to system tables in order to access advanced metadata.
-Here is a sample walkthrough to create a user specifically for Azure Purview scan and set up the permissions. If you choose to use an existing user, make sure it has adequate rights to the warehouse and database objects.
+Here's a sample walkthrough to create a user specifically for Azure Purview scan and set up the permissions. If you choose to use an existing user, make sure it has adequate rights to the warehouse and database objects.
-1. Set up a `purview_reader` role. You will need _ACCOUNTADMIN_ rights to do this.
+1. Set up a `purview_reader` role. You need _ACCOUNTADMIN_ rights to do this.
```sql USE ROLE ACCOUNTADMIN;
To create and run a new scan, do the following:
1. **Warehouse**: Specify the name of the warehouse instance used to run the scan, in capital case. The default role assigned to the user specified in the credential must have USAGE rights on this warehouse.
- 1. **Database**: Specify the name of the database instance to import in capital case. The default role assigned to the user specified in the credential must have adequate rights on the database objects.
+ 1. **Databases**: Specify one or more database instance names to import in capital case. Separate the names in the list with a semi-colon (;). The default role assigned to the user specified in the credential must have adequate rights on the database objects.
1. **Schema**: List subset of schemas to import expressed as a semicolon separated list. For example, `schema1; schema2`. All user schemas are imported if that list is empty. All system schemas and objects are ignored by default.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Troubleshooting tips

-- Check your account identifer in the source registration step. Do not include `https://` part at the front.
+- Check your account identifier in the source registration step. Don't include the `https://` part at the front.
- Make sure the warehouse name and database name are in capital case on the scan setup page.
- Check your key vault. Make sure there are no typos in the password.
- Check the credential you set up in Azure Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you are trying to scan. See [Required permissions for scan](#required-permissions-for-scan). Use `DESCRIBE USER <username>;` to verify the default role of the user you've specified for Azure Purview (see the example below).
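
For example, assuming a scan user named `purview_user` (a placeholder), the check looks like this:

```sql
DESCRIBE USER purview_user;
```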
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-resource-group.md
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
![Image shows a sample data owner policy giving access to a resource group.](./media/tutorial-data-owner-policies-resource-group/data-owner-policy-example-resource-group.png)
+>[!Important]
+> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+ ## Additional information
+- Creating a policy at subscription or resource group level will enable the Subjects to access Azure Storage system containers e.g., *$logs*. If this is undesired, first scan the data source and then create finer-grained policies for each (i.e., at container or sub-container level).
### Limits

The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.
->[!Important]
-> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in the data source.
## Next steps

Check the blog, demo, and related tutorials:
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-storage.md
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
![Image shows a sample data owner policy giving access to an Azure Storage account.](./media/tutorial-data-owner-policies-storage/data-owner-policy-example-storage.png)
-## Additional information
>[!Important]
> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s).
+## Additional information
- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the app that executes the access will need to provide a fully qualified name (that is, a direct absolute path) to the data object; a brief illustration follows after this list. The following documents show examples of how to do that:
  - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
  - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
+- Creating a policy at Storage account level will enable the Subjects to access system containers e.g., *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (i.e., at container or sub-container level).
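
As a rough illustration only (the account, container, and file names below are placeholders; the linked documents above are authoritative), a fully qualified reference looks like this:

```
abfss://finance@contosoadlsaccount.dfs.core.windows.net/sales/2022/summary.parquet

az storage blob download --account-name contosoblobaccount --container-name finance --name sales/2022/summary.csv --file summary.csv --auth-mode login
```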
++
+### Limits
- The limit for Azure Purview policies that can be enforced by Storage accounts is 100 MB per subscription, which roughly equates to 5000 policies.

### Known issues
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
> [!Warning] > **Known issues** related to Policy creation > - Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
-> - Once subscription gets disabled for *Data use governance* any underlying assets that are enabled for *Data use governance* will be disabled, which is the right behavior. However, policy statements based on those assets will still be allowed after that.
### Policy action mapping
search Search How To Index Power Query Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-how-to-index-power-query-data-sources.md
Indexers that reference Power Query data sources have the same level of support
Before you start pulling data from one of the supported data sources, you'll want to make sure you have all your resources set up.
-+ Azure Cognitive Search service in a [supported region](search-how-to-index-power-query-data-sources.md#regional-availability).
++ Azure Cognitive Search service in a [supported region](#regional-availability).

+ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview). This feature must be enabled on the backend.

+ Azure Blob Storage account, used as an intermediary for your data. The data will flow from your data source, then to Blob Storage, then to the index. This requirement only exists with the initial gated preview.
+## Regional availability
+
+The preview is only available on search services in the following regions:
++ Central US
++ East US
++ East US 2
++ North Central US
++ North Europe
++ South Central US
++ West Central US
++ West Europe
++ West US
++ West US 2
+## Preview limitations
+
+There is a lot to be excited about with this preview, but there are a few limitations. This section describes the limitations that are specific to the current version of the preview.
++ Pulling binary data from your data source is not supported in this version of the preview.

++ [Debug sessions](cognitive-search-debug-session.md) are not supported at this time.

## Getting started using the Azure portal

The Azure portal provides support for the Power Query connectors. By sampling data and reading metadata on the container, the Import data wizard in Azure Cognitive Search can create a default index, map source fields to target index fields, and load the index in a single operation. Depending on the size and complexity of source data, you could have an operational full text search index in minutes.
The Azure portal provides support for the Power Query connectors. By sampling da
> [!VIDEO https://www.youtube.com/embed/uy-l4xFX1EE]

### Step 1 – Prepare source data

Make sure your data source contains data. The Import data wizard reads metadata and performs data sampling to infer an index schema, but it also loads data from your data source. If the data is missing, the wizard will stop and return an error.

### Step 2 – Start Import data wizard

After you're approved for the preview, the Azure Cognitive Search team will provide you with an Azure portal link that uses a feature flag so that you can access the Power Query connectors. Open this page and start the wizard from the command bar in the Azure Cognitive Search service page by selecting **Import data**.

:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::

### Step 3 – Select your data source

There are a few data sources that you can pull data from using this preview. All data sources that use Power Query will include a "Powered By Power Query" label on their tile. Select your data source.
Select your data source.
Once you've selected your data source, select **Next: Configure your data** to move to the next section.

### Step 4 – Configure your data

Once you've selected your data source, you'll configure your connection. Each data source will require different information. For a few data sources, the Power Query documentation provides additional details on how to connect to your data.

+ [PostgreSQL](/power-query/connectors/postgresql)
Once you've selected your data source, you'll configure your connection. Each da
Once you've provided your connection credentials, select **Next**.

### Step 5 – Select your data

The import wizard will preview various tables that are available in your data source. In this step you'll check one table that contains the data you want to import into your index.

:::image type="content" source="media/search-power-query-connectors/power-query-preview-data.png" alt-text="Screenshot of data preview." border="true":::
The import wizard will preview various tables that are available in your data so
Once you've selected your table, select **Next**.

### Step 6 – Transform your data (Optional)

Power Query connectors provide you with a rich UI experience that allows you to manipulate your data so you can send the right data to your index. You can remove columns, filter rows, and much more. It's not required that you transform your data before importing it into Azure Cognitive Search.
For more information about transforming data with Power Query, look at [Using Po
Once you're done transforming your data, select **Next**.

### Step 7 – Add Azure Blob storage

The Power Query connector preview currently requires you to provide a blob storage account. This step only exists with the initial gated preview. This blob storage account will serve as temporary storage for data that moves from your data source to an Azure Cognitive Search index. We recommend providing a full access storage account connection string:
You can get the connection string from the Azure portal by navigating to the sto
After you've provided a data source name and connection string, select "Next: Add cognitive skills (Optional)".

### Step 8 – Add cognitive skills (Optional)

[AI enrichment](cognitive-search-concept-intro.md) is an extension of indexers that can be used to make your content more searchable. This is an optional step for this preview. When complete, select **Next: Customize target index**.

### Step 9 – Customize target index

On the Index page, you should see a list of fields with a data type and a series of checkboxes for setting index attributes. The wizard can generate a fields list based on metadata and by sampling the source data. You can bulk-select attributes by clicking the checkbox at the top of an attribute column. Choose Retrievable and Searchable for every field that should be returned to a client app and subject to full text search processing. You'll notice that integers are not full text or fuzzy searchable (numbers are evaluated verbatim and are often useful in filters).
Take a moment to review your selections. Once you run the wizard, physical data
When complete, select **Next: Create an Indexer**.

### Step 10 – Create an indexer

The last step creates the indexer. Naming the indexer allows it to exist as a standalone resource, which you can schedule and manage independently of the index and data source object, created in the same wizard sequence. The output of the Import data wizard is an indexer that crawls your data source and imports the data you selected into an index on Azure Cognitive Search.
Field names in an Azure Cognitive Search index have to meet certain requirements
To index content from a column in your table that has an unsupported field name, rename the column during the "Transform your data" phase of the import data process. For example, you can rename a column named "Billing code/Zip code" to "zipcode". By renaming the column, the index schema detection will recognize it as a valid field name and add it as a suggestion to your index definition.
-## Regional availability
-
-The preview is only available to customers with search services in the following regions:
-
-+ Central US
-+ East US
-+ East US 2
-+ North Central US
-+ North Europe
-+ South Central US
-+ West Central US
-+ West Europe
-+ West US
-+ West US 2
-
-## Preview limitations
-
-There is a lot to be excited about with this preview, but there are a few limitations. This section describes the limitations that are specific to the current version of the preview.
-
-+ Pulling binary data from your data source is not supported in this version of the preview.
-
-+ [Debug sessions](cognitive-search-debug-session.md) are not supported at this time.
## Next steps

You have learned how to pull data from new data sources using the Power Query connectors. To learn more about indexers, see [Indexers in Azure Cognitive Search](search-indexer-overview.md).
search Search Howto Index Encrypted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-encrypted-blobs.md
Title: Index encrypted Azure Blob Storage content
+ Title: 'Tutorial: Index encrypted blobs'
description: Learn how to index and extract text from encrypted documents in Azure Blob Storage with Azure Cognitive Search.
ms.devlang: rest-api - Previously updated : 11/19/2021+ Last updated : 01/28/2022+
-# How to index encrypted blobs using blob indexers and skillsets in Azure Cognitive Search
+# Tutorial: Index and enrich encrypted blobs for full-text search in Azure Cognitive Search
-**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
+This tutorial shows you how to use [Azure Cognitive Search](search-what-is-azure-search.md) to index documents that have been previously encrypted with a customer-managed key in [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
-This article shows you how to use [Azure Cognitive Search](search-what-is-azure-search.md) to index documents that have been previously encrypted within [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) using [Azure Key Vault](../key-vault/general/overview.md). Normally, an indexer cannot extract content from encrypted files because it doesn't have access to the encryption key. However, by leveraging the [DecryptBlobFile](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Utils/DecryptBlobFile) custom skill, followed by the [DocumentExtractionSkill](cognitive-search-skill-document-extraction.md), you can provide controlled access to the key to decrypt the files and then have content extracted from them. This unlocks the ability to index these documents without compromising the encryption status of your stored documents.
+Normally, an indexer can't extract content from encrypted files because it doesn't have access to the customer-managed encryption key in [Azure Key Vault](../key-vault/general/overview.md). However, by leveraging the [DecryptBlobFile custom skill](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Utils/DecryptBlobFile), followed by the [Document Extraction skill](cognitive-search-skill-document-extraction.md), you can provide controlled access to the key to decrypt the files and then extract content from them. This unlocks the ability to index and enrich these documents without compromising the encryption status of your stored documents.
-Starting with previously encrypted whole documents (unstructured text) such as PDF, HTML, DOCX, and PPTX in Azure Blob Storage, this guide uses Postman and the Search REST APIs to perform the following tasks:
+Starting with previously encrypted whole documents (unstructured text) such as PDF, HTML, DOCX, and PPTX in Azure Blob Storage, this tutorial uses Postman and the Search REST APIs to perform the following tasks:
> [!div class="checklist"]
-> * Define a pipeline that decrypts the documents and extracts text from them.
-> * Define an index to store the output.
-> * Execute the pipeline to create and load the index.
-> * Explore results using full text search and a rich query syntax.
+> + Define a pipeline that decrypts the documents and extracts text from them.
+> + Define an index to store the output.
+> + Execute the pipeline to create and load the index.
+> + Explore results using full text search and a rich query syntax.
If you don't have an Azure subscription, open a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites
-This example assumes that you have already uploaded your files to Azure Blob Storage and have encrypted them in the process. If you need help with getting your files initially uploaded and encrypted, check out [this tutorial](../storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md) for how to do so.
++ [Azure Cognitive Search](search-create-service-portal.md) on any tier or region.

++ [Azure Storage](https://azure.microsoft.com/services/storage/), Standard performance (general-purpose v2)

++ Blobs encrypted with a customer-managed key. See [Tutorial: Encrypt and decrypt blobs using Azure Key Vault](../storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md) if you need to create sample data.
-+ [Azure Storage](https://azure.microsoft.com/services/storage/)
+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) in the same subscription as Azure Cognitive Search. The key vault must have **soft-delete** and **purge protection** enabled.
-+ [Azure Cognitive Search](search-create-service-portal.md) on a [billable tier](search-sku-tier.md#tier-descriptions) (Basic or above, in any region)
-+ [Azure Function](https://azure.microsoft.com/services/functions/)
+ + [Postman desktop app](https://www.getpostman.com/)
+Custom skill deployment creates an Azure Function app and an Azure Storage account. Since these resources are created for you, they aren't listed as a prerequisite. When you're finished with this tutorial, remember to clean up the resources so that you aren't billed for services you're not using.
+
+> [!NOTE]
+> Skillsets often require [attaching a Cognitive Services resource](cognitive-search-attach-cognitive-services.md). As written, this skillset has no dependency on Cognitive Services and thus no key is required. If you later add enrichments that invoke built-in skills, remember to update your skillset accordingly.
+ ## 1 - Create services and collect credentials
-### Set up the custom skill
+### Deploy the custom skill
This example uses the sample [DecryptBlobFile](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Utils/DecryptBlobFile) project from the [Azure Search Power Skills](https://github.com/Azure-Samples/azure-search-power-skills) GitHub repository. In this section, you will deploy the skill to an Azure Function so that it can be used in a skillset. A built-in deployment script creates an Azure Function resource named starting with **psdbf-function-app-** and loads the skill. You'll be prompted to provide a subscription and resource group. Be sure to choose the same subscription that your Azure Key Vault instance lives in.
-Operationally, the DecryptBlobFile skill takes the URL and SAS token for each blob as inputs, and it outputs the downloaded, decrypted file using the file reference contract that Azure Cognitive Search expects. Recall that DecryptBlobFile needs the encryption key to perform the decryption. As part of set up, you'll also create an access policy that grants DecryptBlobFile function access to the encryption key in Azure Key Vault.
+Operationally, the DecryptBlobFile skill takes the URL and SAS token for each blob as inputs, and it outputs the downloaded, decrypted file using the file reference contract that Azure Cognitive Search expects. Recall that DecryptBlobFile needs the encryption key to perform the decryption. As part of setup, you'll also create an access policy that grants DecryptBlobFile function access to the encryption key in Azure Key Vault.
1. Click the **Deploy to Azure** button found on the [DecryptBlobFile landing page](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Utils/DecryptBlobFile#deployment), which will open the provided Resource Manager template within the Azure portal.
-1. Select **the subscription where your Azure Key Vault instance exists** (this guide will not work if you select a different subscription), and either select an existing resource group or create a new one (if you create a new one, you will also need to select a region to deploy to).
+1. Choose the same subscription where your Azure Key Vault instance exists (this tutorial will not work if you select a different subscription).
+
+1. Select an existing resource group or create a new one. A dedicated resource group makes cleanup easier later.
1. Select **Review + create**, make sure you agree to the terms, and then select **Create** to deploy the Azure Function.
- ![ARM template in portal](media/indexing-encrypted-blob-files/arm-template.jpg "ARM template in portal")
+ :::image type="content" source="media/indexing-encrypted-blob-files/arm-template.png" alt-text="Screenshot of the arm template page in Azure portal." border="true":::
1. Wait for the deployment to finish.
-1. Navigate to your Azure Key Vault instance in the portal. [Create an access policy](../key-vault/general/assign-access-policy-portal.md) in the Azure Key Vault that grants key access to the custom skill.
-
- 1. Under **Settings**, select **Access policies**, and then select **Add access policy**
-
- ![Keyvault add access policy](media/indexing-encrypted-blob-files/keyvault-access-policies.jpg "Keyvault access policies")
+You should have an Azure Function app that contains the decryption logic and an Azure Storage resource that will store application data. In the next several steps, you'll give the app permissions to access the key vault and collect information that you'll need for the REST calls.
+
+### Grant permissions in Azure Key Vault
- 1. Under **Configure from template**, select **Azure Data Lake Storage or Azure Storage**.
+1. Navigate to your Azure Key Vault service in the portal. [Create an access policy](../key-vault/general/assign-access-policy-portal.md) in the Azure Key Vault that grants key access to the custom skill.
- 1. For the principal, select the Azure Function instance that you deployed. You can search for it using the resource prefix that was used to create it in step 2, which has a default prefix value of **psdbf-function-app**.
+1. On the left navigation pane, select **Access policies**, and then select **+ Create** to start the **Create an access policy** wizard.
- 1. Do not select anything for the authorized application option.
-
- ![Keyvault add access policy template](media/indexing-encrypted-blob-files/keyvault-add-access-policy.jpg "Keyvault access policy template")
+ :::image type="content" source="media/indexing-encrypted-blob-files/keyvault-access-policies.png" alt-text="Screenshot of the Access Policy command in the left navigation pane." border="true":::
- 1. Be sure to click **Save** on the access policies page before navigating away to actually add the access policy.
-
- ![Keyvault save access policy](media/indexing-encrypted-blob-files/keyvault-save-access-policy.jpg "Save Keyvault access policy")
+1. On the **Permissions** page under **Configure from template**, select **Azure Data Lake Storage or Azure Storage**.
-1. Navigate to the **psdbf-function-app** function in the portal, and make a note of the following properties as you will need them later in the guide:
+1. Select **Next**.
- 1. The function URL, which can be found under **Essentials** on the main page for the function.
-
- ![Function URL](media/indexing-encrypted-blob-files/function-uri.jpg "Where to find the Azure Function URL")
+1. On the **Principal** page, select the Azure Function instance that you deployed. You can search for it using the resource prefix that was used to create it in step 2, which has a default prefix value of **psdbf-function-app**.
- 1. The host key code, which can be found by navigating to **App keys**, clicking to show the **default** key, and copying the value.
-
- ![Function Host Key Code](media/indexing-encrypted-blob-files/function-host-key.jpg "Where to find the Azure Function host key code")
+1. Select **Next**.
-### Cognitive Services
+1. On **Review + create**, select **Create**.
-AI enrichment and skillset execution are backed by Cognitive Services, including Language service and Computer Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Cognitive Services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
+### Collect app information
-For this exercise, however, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services behind the scenes and give you 20 free transactions per indexer run. After it processes 20 documents, the indexer will fail unless a Cognitive Services key is attached to the skillset. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier. For more information, see [Attach Cognitive Services](cognitive-search-attach-cognitive-services.md). Note that a Cognitive Services key is required to run a skillset with more than 20 documents even if none of your selected cognitive skills connect to Cognitive Services (such as with the provided skillset if no skills are added to it).
+1. Navigate to the **psdbf-function-app** function in the portal, and make a note of the following properties you'll need for the REST calls:
-### Azure Cognitive Search
+1. Get the function URL, which can be found under **Essentials** on the main page for the function.
-The last component is Azure Cognitive Search, which you can [create in the portal](search-create-service-portal.md). You can use the Free tier to complete this guide.
+ :::image type="content" source="media/indexing-encrypted-blob-files/function-uri.png" alt-text="Screenshot of the overview page and Essentials section of the Azure Function app." border="true":::
-As with the Azure Function, take a moment to collect the admin key. Further on, when you begin structuring requests, you will need to provide the endpoint and admin api-key used to authenticate each request.
+1. Get the host key code, which can be found by navigating to **App keys**, clicking to show the **default** key, and copying the value.
+
+ :::image type="content" source="media/indexing-encrypted-blob-files/function-host-key.png" alt-text="Screenshot of the App Keys page of the Azure Function app." border="true":::
### Get an admin api-key and URL for Azure Cognitive Search
Install and set up Postman.
### Download and install Postman 1. Download the [Postman collection source code](https://github.com/Azure-Samples/azure-search-postman-samples/blob/master/index-encrypted-blobs/Index%20encrypted%20Blob%20files.postman_collection.json).+ 1. Select **File** > **Import** to import the source code into Postman.+ 1. Select the **Collections** tab, and then select the **...** (ellipsis) button.+ 1. Select **Edit**.
-
   ![Postman app showing navigation](media/indexing-encrypted-blob-files/postman-edit-menu.jpg "Go to the Edit menu in Postman")

1. In the **Edit** dialog box, select the **Variables** tab.
-On the **Variables** tab, you can add values that Postman swaps in every time it encounters a specific variable inside double braces. For example, Postman replaces the symbol `{{admin-key}}` with the current value that you set for `admin-key`. Postman makes the substitution in URLs, headers, the request body, and so on.
+ ![Postman app variables tab](media/indexing-encrypted-blob-files/postman-variables-window.jpg "Postman's variables window")
+
+1. On the **Variables** tab, provide the values that you've collected in the previous steps. Postman swaps in a value every time it encounters a specific variable inside double braces. For example, Postman replaces the symbol `{{admin-key}}` with the current value that you set for the search service admin API key.
-To get the value for `admin-key`, use the Azure Cognitive Search admin api-key you noted earlier. Set `search-service-name` to the name of the Azure Cognitive Search service you are using. Set `storage-connection-string` by using the value on your storage account's **Access Keys** tab, and set `storage-container-name` to the name of the blob container on that storage account where the encrypted files are stored. Set `function-uri` to the Azure Function URL you noted before, and set `function-code` to the Azure Function host key code you noted before. You can leave the defaults for the other values.
+ | Variable | Where to get it |
+ |-|--|
+ | `admin-key` | On the **Keys** page of the Azure Cognitive Search service. |
+ | `search-service-name` | The name of the Azure Cognitive Search service. The URL is `https://{{search-service-name}}.search.windows.net`. |
+ | `storage-connection-string` | In the storage account, on the **Access Keys** tab, select **key1** > **Connection string**. |
+ | `storage-container-name` | The name of the blob container that has the encrypted files to be indexed. |
+ | `function-uri` | In the Azure Function under **Essentials** on the main page. |
+ | `function-code` | In the Azure Function, by navigating to **App keys**, clicking to show the **default** key, and copying the value. |
+ | `api-version` | Leave as **2020-06-30**. |
+ | `datasource-name` | Leave as **encrypted-blobs-ds**. |
+ | `index-name` | Leave as **encrypted-blobs-idx**. |
+ | `skillset-name` | Leave as **encrypted-blobs-ss**. |
+ | `indexer-name` | Leave as **encrypted-blobs-ixr**. |
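
With these variables in place, each request in the collection is resolved at send time. For example, the request that creates the index starts out along these lines (a sketch only; the actual request bodies ship in the Postman collection):

```http
PUT https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}?api-version={{api-version}}
Content-Type: application/json
api-key: {{admin-key}}
```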
-![Postman app variables tab](media/indexing-encrypted-blob-files/postman-variables-window.jpg "Postman's variables window")
+### Review and run each request
-| Variable | Where to get it |
-|-|--|
-| `admin-key` | On the **Keys** page of the Azure Cognitive Search service. |
-| `search-service-name` | The name of the Azure Cognitive Search service. The URL is `https://{{search-service-name}}.search.windows.net`. |
-| `storage-connection-string` | In the storage account, on the **Access Keys** tab, select **key1** > **Connection string**. |
-| `storage-container-name` | The name of the blob container that has the encrypted files to be indexed. |
-| `function-uri` | In the Azure Function under **Essentials** on the main page. |
-| `function-code` | In the Azure Function, by navigating to **App keys**, clicking to show the **default** key, and copying the value. |
-| `api-version` | Leave as **2020-06-30**. |
-| `datasource-name` | Leave as **encrypted-blobs-ds**. |
-| `index-name` | Leave as **encrypted-blobs-idx**. |
-| `skillset-name` | Leave as **encrypted-blobs-ss**. |
-| `indexer-name` | Leave as **encrypted-blobs-ixr**. |
+In this section, you'll issue four HTTP requests:
-### Review the request collection in Postman
++ **PUT request to create the index**: This search index holds the data that Azure Cognitive Search uses and returns.
-When you run this guide, you must issue four HTTP requests:
++ **POST request to create the data source**: This data source specifies the connection to your storage account containing the encrypted blob files.

-- **PUT request to create the index**: This index holds the data that Azure Cognitive Search uses and returns.
-- **POST request to create the datasource**: This datasource connects your Azure Cognitive Search service to your storage account and therefore encrypted blob files.
-- **PUT request to create the skillset**: The skillset specifies the custom skill definition for the Azure Function that will decrypt the blob file data, and a [DocumentExtractionSkill](cognitive-search-skill-document-extraction.md) to extract the text from each document after it has been decrypted.
-- **PUT request to create the indexer**: Running the indexer reads the data, applies the skillset, and stores the results. You must run this request last.

++ **PUT request to create the skillset**: The skillset specifies the custom skill definition for the Azure Function that will decrypt the blob file data, and a [DocumentExtractionSkill](cognitive-search-skill-document-extraction.md) to extract the text from each document after it has been decrypted.
-The [source code](https://github.com/Azure-Samples/azure-search-postman-samples/blob/master/index-encrypted-blobs/Index%20encrypted%20Blob%20files.postman_collection.json) contains a Postman collection that has the four requests, as well as some useful follow-up requests. To issue the requests, in Postman, select the tab for the requests and select **Send** for each of them.
++ **PUT request to create the indexer**: Running the indexer retrieves the blobs, applies the skillset, and indexes and stores the results. You must run this request last. The custom skill in the skillset invokes the decryption logic.+
+To issue the requests, in Postman, select the tab for the requests and select **Send** for each of them.
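
To give a sense of how the skillset request composes the two skills, here is a heavily condensed sketch. It is illustrative only: the input and output names used for the custom skill (`blobUrl`, `sasToken`, `decrypted_file_data`) are assumptions made for this example, and the Postman collection linked above contains the authoritative definitions:

```json
{
  "name": "encrypted-blobs-ss",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "description": "Calls the DecryptBlobFile Azure Function; URI and key come from the collection variables",
      "uri": "{{function-uri}}?code={{function-code}}",
      "context": "/document",
      "inputs": [
        { "name": "blobUrl", "source": "/document/metadata_storage_path" },
        { "name": "sasToken", "source": "/document/metadata_storage_sas_token" }
      ],
      "outputs": [
        { "name": "decrypted_file_data", "targetName": "decrypted_file_data" }
      ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill",
      "description": "Extracts text from the decrypted file",
      "context": "/document",
      "inputs": [
        { "name": "file_data", "source": "/document/decrypted_file_data" }
      ],
      "outputs": [
        { "name": "content", "targetName": "extracted_content" }
      ]
    }
  ]
}
```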
## 3 - Monitor indexing
If you are using the Free tier, the following message is expected: `"Could not e
After indexer execution is finished, you can run some queries to verify that the data has been successfully decrypted and indexed. Navigate to your Azure Cognitive Search service in the portal, and use the [search explorer](search-explorer.md) to run queries over the indexed data.
+## Clean up resources
+
+When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+
+You can find and manage resources in the portal, using the All resources or Resource groups link in the left-navigation pane.
## Next steps

Now that you have successfully indexed encrypted files, you can [iterate on this pipeline by adding more cognitive skills](cognitive-search-defining-skillset.md). This will allow you to enrich and gain additional insights into your data.
-If you are working with doubly encrypted data, you might want to investigate the index encryption features available in Azure Cognitive Search. Although the indexer needs decrypted data for indexing purposes, once the index exists, it can be encrypted using a customer-managed key. This will ensure that your data is always encrypted when at rest. For more information, see [Configure customer-managed keys for data encryption in Azure Cognitive Search](search-security-manage-encryption-keys.md).
+If you are working with doubly encrypted data, you might want to investigate the index encryption features available in Azure Cognitive Search. Although the indexer needs decrypted data for indexing purposes, once the index exists, it can be encrypted in a search index using a customer-managed key. This will ensure that your data is always encrypted when at rest. For more information, see [Configure customer-managed keys for data encryption in Azure Cognitive Search](search-security-manage-encryption-keys.md).
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-manage-encryption-keys.md
Access permissions could be revoked at any given time. Once revoked, any search
1. Still in the Azure portal, open your key vault **Overview** page.
-1. Select the **Access policies** on the left, and select **+ Create**.
+1. Select the **Access policies** on the left, and select **+ Create** to start the **Create an access policy** wizard.
:::image type="content" source="media/search-manage-encryption-keys/cmk-add-access-policy.png" alt-text="Create an access policy." border="true":::
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-catalog.md
Title: Microsoft Sentinel content hub catalog | Microsoft Docs
description: This article displays and details the currently available Microsoft Sentinel content hub packages. Previously updated : 01/04/2022 Last updated : 01/30/2022
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by |
|||||
|**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft|
-|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), workbook, analytics rules, hunting queries |Security - Insider threat | Microsoft|
+|**Cybersecurity Maturity Model Certification (CMMC)** | Analytics rules, workbook, playbook | Compliance | Microsoft|
+| **IoT/OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft |
+|**Maturity Model for Event Log Management M2131** | [Analytics rules, hunting queries, playbooks, workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) | Compliance | Microsoft|
+|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), workbook, analytics rules, hunting queries, playbook |Security - Insider threat | Microsoft|
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft |
-|**Zero Trust** (TIC3.0) |[Workbooks](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761) |Identity, Security - Others |Microsoft |
+|**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761) |Identity, Security - Others |Microsoft |
| | | | |

## Arista Networks
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|||||
|**Microsoft Sentinel 4 Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft |
|**Microsoft Sentinel for Teams** | Analytics rules, playbooks, hunting queries | Application | Microsoft |
-| **IoT OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft |
| **Microsoft Sysmon for Linux** | [Data connector](data-connectors-reference.md#microsoft-sysmon-for-linux-preview) | Platform | Microsoft |
| | | | |
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new-archive.md
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Azure Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use. > > You can also contribute! Join us in the [Azure Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
->
++
+## July 2021
+
+- [Microsoft Threat Intelligence Matching Analytics (Public preview)](#microsoft-threat-intelligence-matching-analytics-public-preview)
+- [Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-azure-sentinels-identityinfo-table-public-preview)
+- [Enrich Entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview)
+- [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview)
+- [Watchlists are in general availability](#watchlists-are-in-general-availability)
+- [Support for data residency in more geos](#support-for-data-residency-in-more-geos)
+- [Bidirectional sync in Azure Defender connector (Public preview)](#bidirectional-sync-in-azure-defender-connector-public-preview)
+
+### Microsoft Threat Intelligence Matching Analytics (Public preview)
+
+Azure Sentinel now provides the built-in **Microsoft Threat Intelligence Matching Analytics** rule, which matches Microsoft-generated threat intelligence data with your logs. This rule generates high-fidelity alerts and incidents, with appropriate severities based on the context of the logs detected. After a match is detected, the indicator is also published to your Azure Sentinel threat intelligence repository.
+
+The **Microsoft Threat Intelligence Matching Analytics** rule currently matches domain indicators against the following log sources:
+
+- [CEF](connect-common-event-format.md)
+- [DNS](./data-connectors-reference.md#windows-dns-server-preview)
+- [Syslog](connect-syslog.md)
+
+For more information, see [Detect threats using matching analytics (Public preview)](work-with-threat-indicators.md#detect-threats-using-matching-analytics-public-preview).
+
+### Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)
+
+As attackers often use the organization's own user and service accounts, data about those user accounts, including the user identification and privileges, are crucial for the analysts in the process of an investigation.
+
+Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Azure Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronizations between your Azure AD and the **IdentifyInfo** table create a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
+
+Use the **IdentityInfo** table during investigations and when fine-tuning analytics rules for your organization to reduce false positives.
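
For instance, a query along the following lines pulls the most recent snapshot per account; treat the column names as assumptions to verify against the UEBA enrichments reference linked below:

```kusto
// Latest IdentityInfo snapshot per account (column names assumed; verify in the reference)
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN
| project AccountUPN, AccountDisplayName, GroupMembership, AssignedRoles
```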
+
+For more information, see [IdentityInfo table](ueba-enrichments.md#identityinfo-table-public-preview) in the UEBA enrichments reference and [Use UEBA data to analyze false positives](investigate-with-ueba.md#use-ueba-data-to-analyze-false-positives).
+
+### Enrich entities with geolocation data via API (Public preview)
+
+Azure Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
+
+For more information, see [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Azure Sentinel](entities.md).
++
+### Support for ADX cross-resource queries (Public preview)
+
+The hunting experience in Azure Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
+
+Although Log Analytics remains the primary data storage location for performing analysis with Azure Sentinel, there are cases where ADX is required to store data due to cost, retention periods, or other factors. This capability enables customers to hunt over a wider set of data and view the results in the [Azure Sentinel hunting experiences](hunting.md), including hunting queries, [livestream](livestream.md), and the Log Analytics search page.
+
+To query data stored in ADX clusters, use the adx() function to specify the ADX cluster, database name, and desired table. You can then query the output as you would any other table. See more information in the pages linked above.
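
A minimal sketch of such a query follows; the cluster URI, database, and table names are placeholders:

```kusto
// Query a table that lives in an ADX database from the Azure Sentinel hunting experience
adx('https://contosoadx.westeurope.kusto.windows.net/SecurityLogs').RawSyslog
| take 10
```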
++++
+### Watchlists are in general availability
+
+The [watchlists](watchlists.md) feature is now generally available. Use watchlists to enrich alerts with business data, to create allowlists or blocklists against which to check access events, and to help investigate threats and reduce alert fatigue.
+
+### Support for data residency in more geos
+
+Azure Sentinel now supports full data residency in the following additional geos:
+
+Brazil, Norway, South Africa, Korea, Germany, United Arab Emirates (UAE), and Switzerland.
+
+See the [complete list of supported geos](quickstart-onboard.md#geographical-availability-and-data-residency) for data residency.
+
+### Bidirectional sync in Azure Defender connector (Public preview)
+
+The Azure Defender connector now supports bi-directional syncing of alerts' status between Defender and Azure Sentinel. When you close a Sentinel incident containing a Defender alert, the alert will automatically be closed in the Defender portal as well.
+
+See this [complete description of the updated Azure Defender connector](connect-defender-for-cloud.md).
## June 2021
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 01/13/2022 Last updated : 01/30/2022
If you're looking for items older than six months, you'll find them in the [Arch
## January 2022
+- [Maturity Model for Event Log Management (M-21-31) Solution (Public preview)](#maturity-model-for-event-log-management-m-21-31-solution-public-preview)
- [SentinelHealth data table (Public preview)](#sentinelhealth-data-table-public-preview) - [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view) - [Kusto Query Language workbook and tutorial](#kusto-query-language-workbook-and-tutorial)
+### Maturity Model for Event Log Management (M-21-31) Solution (Public preview)
+
+The Microsoft Sentinel content hub now includes the **Maturity Model for Event Log Management (M-21-31)** solution, which integrates Microsoft Sentinel and Microsoft Defender for Cloud to help you meet the demanding log management requirements of regulated industries.
+
+The Maturity Model for Event Log Management (M-21-31) solution provides a quantifiable framework to measure maturity. Use the analytics rules, hunting queries, playbooks, and workbook provided with the solution to do any of the following:
+
+- Design and build log management architectures
+- Monitor and alert on log health issues, coverage, and blind spots
+- Respond to notifications with Security Orchestration Automation & Response (SOAR) activities
+- Remediate with Cloud Security Posture Management (CSPM)
+
+For more information, see:
+
+- [Modernize Log Management with the Maturity Model for Event Log Management (M-21-31) Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) (blog)
+- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md)
+- [The Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md#domain-solutions)
+
### SentinelHealth data table (Public preview)

Microsoft Sentinel now provides the **SentinelHealth** data table to help you monitor your connector health, providing insights on health drifts, such as the latest failure events per connector, or connectors that change from a success to a failure state. Use this data to create alerts and other automated actions, such as Microsoft Teams messages, new tickets in a ticketing system, and so on.
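As an illustration only (the **SentinelHealth** table is in preview, and the column names used here, such as `SentinelResourceName`, `Status`, and `Description`, are assumptions based on the description above), a query like the following surfaces the latest health record per connector and keeps only failures:

```kusto
// Sketch only: latest health record per connector, filtered to failures.
// Verify the SentinelHealth schema in your workspace before relying on these columns.
SentinelHealth
| summarize arg_max(TimeGenerated, *) by SentinelResourceName
| where Status == "Failure"
| project TimeGenerated, SentinelResourceName, Status, Description
```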
For more information, see:
> You can find more guidance added across our documentation in relevant conceptual and how-to articles. For more information, see [Best practice references](best-practices.md#best-practice-references). >
-## July 2021
-
-- [Microsoft Threat Intelligence Matching Analytics (Public preview)](#microsoft-threat-intelligence-matching-analytics-public-preview)
-- [Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-azure-sentinels-identityinfo-table-public-preview)
-- [Enrich Entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview)
-- [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview)
-- [Watchlists are in general availability](#watchlists-are-in-general-availability)
-- [Support for data residency in more geos](#support-for-data-residency-in-more-geos)
-- [Bidirectional sync in Azure Defender connector (Public preview)](#bidirectional-sync-in-azure-defender-connector-public-preview)
-
-### Microsoft Threat Intelligence Matching Analytics (Public preview)
-
-Azure Sentinel now provides the built-in **Microsoft Threat Intelligence Matching Analytics** rule, which matches Microsoft-generated threat intelligence data with your logs. This rule generates high-fidelity alerts and incidents, with appropriate severities based on the context of the logs detected. After a match is detected, the indicator is also published to your Azure Sentinel threat intelligence repository.
-
-The **Microsoft Threat Intelligence Matching Analytics** rule currently matches domain indicators against the following log sources:
-
-- [CEF](connect-common-event-format.md)
-- [DNS](./data-connectors-reference.md#windows-dns-server-preview)
-- [Syslog](connect-syslog.md)
-
-For more information, see [Detect threats using matching analytics (Public preview)](work-with-threat-indicators.md#detect-threats-using-matching-analytics-public-preview).
-
-### Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)
-
-As attackers often use the organization's own user and service accounts, data about those user accounts, including the user identification and privileges, are crucial for the analysts in the process of an investigation.
-
-Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Azure Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronizations between your Azure AD and the **IdentifyInfo** table create a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
-
-Use the **IdentityInfo** table during investigations and when fine-tuning analytics rules for your organization to reduce false positives.
-
-For more information, see [IdentityInfo table](ueba-enrichments.md#identityinfo-table-public-preview) in the UEBA enrichments reference and [Use UEBA data to analyze false positives](investigate-with-ueba.md#use-ueba-data-to-analyze-false-positives).
-
-### Enrich entities with geolocation data via API (Public preview)
-
-Azure Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
-
-For more information, see [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Azure Sentinel](entities.md).
--
-### Support for ADX cross-resource queries (Public preview)
-
-The hunting experience in Azure Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
-
-Although Log Analytics remains the primary data storage location for performing analysis with Azure Sentinel, there are cases where ADX is required to store data due to cost, retention periods, or other factors. This capability enables customers to hunt over a wider set of data and view the results in the [Azure Sentinel hunting experiences](hunting.md), including hunting queries, [livestream](livestream.md), and the Log Analytics search page.
-
-To query data stored in ADX clusters, use the adx() function to specify the ADX cluster, database name, and desired table. You can then query the output as you would any other table. See more information in the pages linked above.
-### Watchlists are in general availability
-
-The [watchlists](watchlists.md) feature is now generally available. Use watchlists to enrich alerts with business data, to create allowlists or blocklists against which to check access events, and to help investigate threats and reduce alert fatigue.
-
-### Support for data residency in more geos
-
-Azure Sentinel now supports full data residency in the following additional geos:
-
-Brazil, Norway, South Africa, Korea, Germany, United Arab Emirates (UAE), and Switzerland.
-
-See the [complete list of supported geos](quickstart-onboard.md#geographical-availability-and-data-residency) for data residency.
-
-### Bidirectional sync in Azure Defender connector (Public preview)
-
-The Azure Defender connector now supports bi-directional syncing of alerts' status between Defender and Azure Sentinel. When you close a Sentinel incident containing a Defender alert, the alert will automatically be closed in the Defender portal as well.
-
-See this [complete description of the updated Azure Defender connector](connect-defender-for-cloud.md).
- ## Next steps > [!div class="nextstepaction"]
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-architecture.md
When you enable Azure VM replication, by default Site Recovery creates a new rep
**Policy setting** | **Details** | **Default** | |
-**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 24 hours
-**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | Every four hours
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 1 day
+**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | 0 hours (Disabled)
### Managing replication policies
You can manage and modify the default replication policies settings as follows:
- You can modify the settings as you enable replication. - You can create a replication policy at any time, and then apply it when you enable replication.
+>[!NOTE]
+>A high recovery point retention period can increase storage costs, because more recovery points need to be saved.
+
### Multi-VM consistency

If you want VMs to replicate together, and have shared crash-consistent and app-consistent recovery points at failover, you can gather them together into a replication group. Multi-VM consistency impacts workload performance, and should only be used for VMs running workloads that need consistency across all machines.
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-common-questions.md
No, this is unsupported. If you accidentally move storage accounts to a differen
A replication policy defines the retention history of recovery points, and the frequency of app-consistent snapshots. Site Recovery creates a default replication policy as follows:
-
-- Retain recovery points for 24 hours.
-- Take app-consistent snapshots every four hours.
+- Retain recovery points for 1 day.
+- App-consistent snapshots are disabled and are not created by default.
[Learn more](azure-to-azure-how-to-enable-replication.md#customize-target-resources) about replication settings.
Yes. The Mobility agent for Linux support custom scripts for app-consistency. A
To understand how Site Recovery generates recovery points, let's use an example.
-
-- A replication policy retains recovery points for 24 hours, and takes an app-consistent frequency snapshot every hour.
+- A replication policy retains recovery points for one day, and takes an app-consistent snapshot every hour.
- Site Recovery creates a crash-consistent recovery point every five minutes. You can't change this frequency.
-- Site Recovery prunes recovery points after an hour, saving one point per hour.
+- Site Recovery prunes recovery points after two hours, saving one point per hour.
-So, in the last hour, you can choose from 12 crash-consistent points, and one app-consistent point, as shown in the graphic.
+So, for the most recent two hours, you can choose from 24 crash-consistent points and two app-consistent points, as shown in the graphic.
![List of generated recovery points](./media/azure-to-azure-common-questions/recovery-points.png)

### How far back can I recover?
-The oldest recovery point that you can use is 72 hours.
+The oldest recovery point that you can use is 15 days with managed disks, and 3 days with unmanaged disks.
-### What happens if Site Recovery can't generate recovery points for more than 24 hours?
+### How does the pruning of recovery points happen?
-If you have a replication policy of 24 hours, and Site Recovery can't generate recovery points for more than 24 hours, your old recovery points remain. Site Recovery only replaces the oldest point if it generates new points. Until there are new recovery points, all the old points remain after you reach the retention window.
+Crash-consistent recovery points are generated every five minutes. App-consistent snapshots are generated based on the frequency you specify. Beyond two hours, recovery points may be pruned based on the retention period that you set. The following scenarios apply:
+
+|**Retention period input** | **Pruning mechanism** |
+|-|--|
+|0 days|No recovery points are saved. You can fail over only to the latest point.|
+|1 day|One recovery point saved per hour beyond the last two hours.|
+|2 - 7 days|One recovery point saved per two hours beyond the last two hours.|
+|8 - 15 days|One recovery point saved per two hours beyond the last two hours for 7 days. After that, one recovery point saved per four hours.<p>App-consistent snapshots are also pruned according to the durations in this table, regardless of the app-consistent snapshot frequency you set.|
+### What happens if Site Recovery can't generate recovery points for more than one day?
+
+If you have a replication policy with one day of retention, and Site Recovery can't generate recovery points for more than one day, your old recovery points remain. Site Recovery replaces the oldest point only when it generates new points. Until there are new recovery points, all the old points remain, even after you reach the retention window.
### Can I change the replication policy after replication is enabled?
The first recovery point that's generated has the complete copy. Successive reco
### Do increases in recovery point retention increase storage costs?
-Yes. For example, if you increase retention from 24 hours to 72, Site Recovery saves recovery points for an additional 48 hours. The added time incurs storage changes. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, then additional charges would be $1.60 × 48 per month.
+Yes. For example, if you increase retention from 1 day to 3 days, Site Recovery saves recovery points for an additional two days. The added time incurs storage charges. Previously, Site Recovery saved one recovery point per hour for 1 day; now it saves one recovery point per two hours for 3 days (see [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen)), so 12 additional recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, then the additional charges would be $1.60 × 12 = $19.20 per month.
## Multi-VM consistency
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Enable replication. This procedure assumes that the primary Azure region is East
- **Target subscription**: The target subscription used for disaster recovery. By default, the target subscription will be same as the source subscription. - **Target resource group**: The resource group to which all your replicated virtual machines belong. - By default Site Recovery creates a new resource group in the target region with an "asr" suffix in the name.
- - If the resource group created by Site Recovery already exists, it is reused.
+ - If the resource group created by Site Recovery already exists, it's reused.
- You can customize the resource group settings. - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted. - **Target virtual network**: By default, Site Recovery creates a new virtual network in the target region with an "asr" suffix in the name. This is mapped to your source network, and used for any future protection. [Learn more](./azure-to-azure-network-mapping.md) about network mapping.
- - **Target storage accounts (source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account mimicking your source VM storage configuration. In case storage account already exists, it is reused.
+ - **Target storage accounts (source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account mimicking your source VM storage configuration. In case storage account already exists, it's reused.
- **Replica-managed disks (source VM uses managed disks)**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk. - **Cache Storage accounts**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard.
- - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "asr" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it is reused.
+ - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "asr" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it's reused.
>[!NOTE] >While configuring the target availability sets, please configure different availability sets for differently sized VMs. > - **Target availability zones**: By default, Site Recovery assigns the same zone number as the source region in target region if the target region supports availability zones.
- If the target region does not support availability zones, the target VMs are configured as single instances by default. If required, you can configure such VMs to be part of availability sets in target region by clicking 'Customize'.
+ If the target region does not support availability zones, the target VMs are configured as single instances by default. If necessary, you can configure such VMs to be part of availability sets in target region by clicking 'Customize'.
>[!NOTE] >You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You need to disable and enable replication to change the availability type. >
- - **Replication Policy**: It defines the settings for recovery point retention history and app consistent snapshot frequency. By default, Azure Site Recovery creates a new replication policy with default settings of ΓÇÿ24 hoursΓÇÖ for recovery point retention and ΓÇÖ4 hoursΓÇÖ for app consistent snapshot frequency.
+ - **Replication Policy**: Defines the settings for the recovery point retention period and app-consistent snapshot frequency. By default, Azure Site Recovery creates a replication policy with the following settings:
+ - One day of retention for recovery points.
+ - No app-consistent snapshots.
![Enable replication](./media/site-recovery-replicate-azure-to-azure/enabledrwizard3.PNG)
site-recovery Physical Azure Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/physical-azure-disaster-recovery.md
Select and verify target resources.
1. To create a new replication policy, click **Site Recovery infrastructure** > **Replication Policies** > **+Replication Policy**. 2. In **Create replication policy**, specify a policy name. 3. In **RPO threshold**, specify the recovery point objective (RPO) limit. This value specifies how often data recovery points are created. An alert is generated if continuous replication exceeds this limit.
-4. In **Recovery point retention**, specify how long (in hours) the retention window is for each recovery point. Replicated VMs can be recovered to any point in a window. Up to 24 hours retention is supported for machines replicated to premium storage, and 72 hours for standard storage.
-5. In **App-consistent snapshot frequency**, specify how often (in minutes) recovery points containing application-consistent snapshots will be created. Click **OK** to create the policy.
+4. In **Recovery point retention**, specify how long (in days) the retention window is for each recovery point. Replicated VMs can be recovered to any point in a window. Up to 15 days retention is supported.
+5. In **App-consistent snapshot frequency**, specify how often (in hours) recovery points containing application-consistent snapshots will be created. Click **OK** to create the policy.
![Screenshot of the options for creating a replication policy.](./media/physical-azure-disaster-recovery/replication-policy.png)
-The policy is automatically associated with the configuration server. By default, a matching policy is automatically created for failback. For example, if the replication policy is **rep-policy** then a failback policy **rep-policy-failback** is created. This policy isn't used until you initiate a failback from Azure.
+By default, a matching policy is automatically created for failback. For example, if the replication policy is **rep-policy** then a failback policy **rep-policy-failback** is created. This policy isn't used until you initiate a failback from Azure.
## Enable replication
site-recovery Vmware Azure Architecture Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-architecture-preview.md
If you're using a URL-based firewall proxy to control outbound connectivity, all
- For VMware VMs, replication is block-level, near-continuous, using the Mobility service agent running on the VM. - Any replication policy settings are applied: - **RPO threshold**. This setting does not affect replication. It helps with monitoring. An event is raised, and optionally an email sent, if the current RPO exceeds the threshold limit that you specify.
- - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention on premium storage is 24 hours. On standard storage it's 72 hours.
+ - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention is 15 days.
- **App-consistent snapshots**. App-consistent snapshot can be taken every 1 to 12 hours, depending on your app needs. Snapshots are standard Azure blob snapshots. The Mobility agent running on a VM requests a VSS snapshot in accordance with this setting, and bookmarks that point-in-time as an application consistent point in the replication stream.
+ >[!NOTE]
+ >A high recovery point retention period can increase storage costs, because more recovery points need to be saved.
+
2. Traffic replicates to Azure storage public endpoints over the internet. Alternately, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure isn't supported. 3. Initial replication operation ensures that entire data on the machine at the time of enable replication is sent to Azure. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are sent to the process server.
When you enable Azure VM replication, by default Site Recovery creates a new rep
**Policy setting** | **Details** | **Default** | |
-**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 72 hours
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 3 days
**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | Every 4 hours ### Managing replication policies
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-architecture.md
For exhaustive list of URLs to be filtered for communication between on-premises
- For VMware VMs, replication is block-level, near-continuous, using the Mobility service agent running on the VM. - Any replication policy settings are applied: - **RPO threshold**. This setting does not affect replication. It helps with monitoring. An event is raised, and optionally an email sent, if the current RPO exceeds the threshold limit that you specify.
- - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention on premium storage is 24 hours. On standard storage it's 72 hours.
+ - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention is 15 days on managed disks.
- **App-consistent snapshots**. App-consistent snapshot can be taken every 1 to 12 hours, depending on your app needs. Snapshots are standard Azure blob snapshots. The Mobility agent running on a VM requests a VSS snapshot in accordance with this setting, and bookmarks that point-in-time as an application consistent point in the replication stream.
+ >[!NOTE]
+ >A high recovery point retention period can increase storage costs, because more recovery points need to be saved.
+
2. Traffic replicates to Azure storage public endpoints over the internet. Alternately, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure isn't supported. 3. Initial replication operation ensures that entire data on the machine at the time of enable replication is sent to Azure. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are sent to the process server.
For exhaustive list of URLs to be filtered for communication between on-premises
6. If default resynchronization fails outside office hours and a manual intervention is required, then an error is generated on the specific machine in Azure portal. You can resolve the error and trigger the resynchronization manually. 7. After completion of resynchronization, replication of delta changes will resume.
-## Replication policy
-
-When you enable Azure VM replication, by default Site Recovery creates a new replication policy with the default settings summarized in the table.
-
-**Policy setting** | **Details** | **Default**
- | |
-**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 24 hours
-**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | Every four hours
- ### Managing replication policies
-You can manage and modify the default replication policies settings as follows:
-- You can modify the settings as you enable replication.
+- You can customize the settings of replication policies as you enable replication.
- You can create a replication policy at any time, and then apply it when you enable replication. ### Multi-VM consistency
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-common-questions.md
Yes, you can change the type or size of the VM at any time before failover. In t
### How far back can I recover?
-For VMware to Azure, the oldest recovery point you can use is 72 hours.
+For VMware to Azure, the oldest recovery point you can use is 15 days.
+
+### How does the pruning of recovery points happen?
+
+Crash-consistent recovery points are generated every five minutes. App-consistent snapshots are generated based on the frequency you specify. Beyond two hours, recovery points may be pruned based on the retention period that you set. The following scenarios apply:
+
+|**Retention period input** | **Pruning mechanism** |
+|-|-|
+|0 days|No recovery points are saved. You can fail over only to the latest point.|
+|1 day|One recovery point saved per hour beyond the last two hours.|
+|2 - 7 days|One recovery point saved per two hours beyond the last two hours.|
+|8 - 15 days|One recovery point saved per two hours beyond the last two hours for 7 days. After that, one recovery point saved per four hours.<p>App-consistent snapshots are also pruned according to the durations in this table, regardless of the app-consistent snapshot frequency you set.|
+
+### Do increases in recovery point retention increase storage costs?
+
+Yes. For example, if you increase retention from 1 day to 3 days, Site Recovery saves recovery points for an additional 2 days. The added time incurs storage charges. Previously, Site Recovery saved one recovery point per hour for 1 day; now it saves one recovery point per two hours for 3 days (see [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen)), so 12 additional recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, then the additional charges would be $1.60 × 12 = $19.20 per month.
### How do I access Azure VMs after failover?
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
Follow these steps to enable replication:
10. Create a new replication policy if needed.
- A default replication policy gets created under the vault with 72 hour recovery point retention and 4 hour app consistency frequency. You can create a new replication policy as per your RPO requirements.
+ A default replication policy is created under the vault with 3 days of recovery point retention and a 4-hour app-consistent snapshot frequency. You can create a new replication policy according to your RPO requirements.
- Select **Create new**. - Enter the Name.
- - Enter **Recovery point retention** in hours
+ - Enter **Recovery point retention** in days.
- Select **App-consistent snapshot frequency in hours** as per business requirements
site-recovery Vmware Azure Set Up Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-set-up-replication.md
This article describes how to configure a replication policy when you're replica
2. In **For VMware and Physical machines**, select **Replication policies**. 3. Click **+Replication policy**, and specify the policy name. 4. In **RPO threshold**, specify the RPO limit. Alerts are generated when continuous replication exceeds this limit.
-5. In **Recovery point retention**, specify (in hours) the duration of the retention window for each recovery point. Protected machines can be recovered to any point within a retention window. Up to 24 hours of retention is supported for machines replicated to premium storage. Up to 72 hours is supported for standard storage.
-6. In **App-consistent snapshot frequency**, choose from the dropdown how often (in hours) recovery points that contain application-consistent snapshots should be created. If you wish to turn off generation of application consistency points, choose "Off" value in the dropdown.
+5. In **Recovery point retention**, specify (in days) the duration of the retention window for each recovery point. Protected machines can be recovered to any point within a retention window. Up to 15 days of retention is supported.
+6. In **App-consistent snapshot frequency**, you can choose to enable app-consistent snapshots and specify a frequency from 0 - 12 hours, which determines how often application-consistent snapshots are created.
7. Click **OK**. The policy should be created in 30 to 60 seconds. When you create a replication policy, a matching failback replication policy is automatically created, with the suffix "failback". After creating the policy, you can edit it by selecting it > **Edit Settings**.
+>[!NOTE]
+>A high recovery point retention period in a policy can increase storage costs, because more recovery points need to be saved.
+
## Associate a configuration server

Associate the replication policy with your on-premises configuration server.
-1. Click **Associate**, and select the configuration server.
+1. Select the replication policy.
+
+ ![Replication policy listing.](./media/vmware-azure-set-up-replication/replication-policy-listing.png)
+2. Click **Associate**.
+
+ ![Associate configuration server.](./media/vmware-azure-set-up-replication/associate1.png)
+3. Select the configuration server.
- ![Associate configuration server](./media/vmware-azure-set-up-replication/associate1.png)
-2. Click **OK**. The configuration server should be associated in one to two minutes.
+ ![Configuration server selection.](./media/vmware-azure-set-up-replication/select-config-server.png)
+4. Click **OK**. The configuration server should be associated in one to two minutes.
- ![Configuration server association](./media/vmware-azure-set-up-replication/associate2.png)
+ ![Configuration server association.](./media/vmware-azure-set-up-replication/associate2.png)
## Edit a policy
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-policy-configure-version-scope.md
az storage account create \
+If version-level immutability support is enabled for the storage account and the account contains one or more containers, then you must delete all containers before you delete the storage account, even if there are no immutability policies in effect for the account or containers.
+
### Enable version-level immutability support on a container

Both new and existing containers can be configured to support version-level immutability. However, an existing container must undergo a migration process in order to enable support.
az storage container-rm show \
+If version-level immutability support is enabled for a container and the container contains one or more blobs, then you must delete all blobs in the container before you can delete the container, even if there are no immutability policies in effect for the container or its blobs.
+
#### Migrate an existing container to support version-level immutability

To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and cannot be reversed. You can migrate only one container at a time per storage account.
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/sas-expiration-policy.md
Previously updated : 09/14/2021 Last updated : 01/28/2022
The SAS expiration period appears in the console output.
-## Query logs for policy violations
+## Check for SAS expiration policy violations
-To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../blobs/monitor-blob-storage.md#send-logs-to-azure-log-analytics).
+You can use Azure Policy to ensure that the storage accounts in your subscription have SAS expiration policies configured. Azure Storage provides a built-in policy for ensuring that accounts have this setting configured. For more information about the built-in policy, see **Storage accounts should have shared access signature (SAS) policies configured** in [List of built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
-Next, use an Azure Monitor log query to monitor whether policy has been violated. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
+### Assign the built-in policy for a resource scope
-```kusto
-StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"
-```
+Follow these steps to assign the built-in policy to the appropriate scope in the Azure portal:
+
+1. In the Azure portal, search for *Policy* to display the Azure Policy dashboard.
+1. In the **Authoring** section, select **Assignments**.
+1. Choose **Assign policy**.
+1. On the **Basics** tab of the **Assign policy** page, in the **Scope** section, specify the scope for the policy assignment. Select the **More** button to choose the subscription and optional resource group.
+1. For the **Policy definition** field, select the **More** button, and enter *shared access signature* in the **Search** field. Select the policy definition named **Storage accounts should have shared access signature (SAS) policies configured**.
+
+ :::image type="content" source="media/sas-expiration-policy/policy-definition-select-portal.png" alt-text="Screenshot showing how to select the built-in policy to monitor validity intervals for shared access signatures for your storage accounts":::
+
+1. Select **Review + create** to assign the policy definition to the specified scope.
+
+ :::image type="content" source="media/sas-expiration-policy/policy-assignment-create.png" alt-text="Screenshot showing how to create the policy assignment":::
+
+### Monitor compliance with the SAS expiration policy
+
+To monitor your storage accounts for compliance with the SAS expiration policy, follow these steps:
+
+1. On the Azure Policy dashboard, locate the built-in policy definition for the scope that you specified in the policy assignment. You can search for *Storage accounts should have shared access signature (SAS) policies configured* in the **Search** box to filter for the built-in policy.
+1. Select the policy name with the desired scope.
+1. On the **Policy assignment** page for the built-in policy, select **View compliance**. Any storage accounts in the specified subscription and resource group that do not meet the policy requirements appear in the compliance report.
+
+ :::image type="content" source="media/sas-expiration-policy/policy-compliance-report-portal-inline.png" alt-text="Screenshot showing how to view the compliance report for the SAS expiration built-in policy" lightbox="media/sas-expiration-policy/policy-compliance-report-portal-expanded.png":::
+
+To bring a storage account into compliance, configure a SAS expiration policy for that account, as described in [Create a SAS expiration policy](#create-a-sas-expiration-policy).
## See also
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-keys-manage.md
Follow these steps to assign the built-in policy to the appropriate scope in the
1. On the **Basics** tab of the **Assign policy** page, in the **Scope** section, specify the scope for the policy assignment. Select the **More** button to choose the subscription and optional resource group. 1. For the **Policy definition** field, select the **More** button, and enter *storage account keys* in the **Search** field. Select the policy definition named **Storage account keys should not be expired**.
- :::image type="content" source="media/storage-account-keys-manage/policy-definition-select-portal.png" alt-text="Screenshot showing how to select the built-in policy to monitor key expiration for your storage accounts":::
+ :::image type="content" source="media/storage-account-keys-manage/policy-definition-select-portal.png" alt-text="Screenshot showing how to select the built-in policy to monitor key rotation intervals for your storage accounts":::
1. Select **Review + create** to assign the policy definition to the specified scope.
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/np-series.md
Title: NP-series - Azure Virtual Machines description: Specifications for the NP-series VMs.-+ Last updated 02/09/2021-+ # NP-series
VM Generation Support: Generation 1<br>
**Q:** How do I request quota for NP VMs?
-**A:** Please follow this page [Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US2, West Europe and SouthEast Asia.
+**A:** Please see [Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US 2, West Europe, Southeast Asia, and South Central US.
**Q:** What version of Vitis should I use?
-**A:** Xilinx recommends [Vitis 2020.2](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html), you can also use the Development VM marketplace options (Vitis 2020.2 Development VM for Ubuntu 18.04 and Centos 7.8)
+**A:** Xilinx recommends [Vitis 2021.1](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html). You can also use the Development VM marketplace options (Vitis 2021.1 Development VM for Ubuntu 18.04, Ubuntu 20.04, and CentOS 7.8).
**Q:** Do I need to use NP VMs to develop my solution?
-**A:** No, you can develop on-premise and deploy to the cloud! Please make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
+**A:** No, you can develop on-premises and deploy to the cloud. Please make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
**Q:** Which file returned from attestation should I use when programming my FPGA in an NP VM? **A:** Attestation returns two xclbins, **design.bit.xclbin** and **design.azure.xclbin**. Please use **design.azure.xclbin**.
-**Q:** Where should I get all the XRT/Platform files?
+**Q:** Where should I get all the XRT / Platform files?
**A:** Please visit Xilinx's [Microsoft-Azure](https://www.xilinx.com/microsoft-azure.html) site for all files. **Q:** What Version of XRT should I use?
-**A:** xrt_202020.2.8.832
+**A:** xrt_202110.2.11.680
**Q:** What is the target deployment platform?
VM Generation Support: Generation 1<br>
**A:** xilinx-u250-gen3x16-xdma-2.1-202010-1-dev_1-2954688_all
-**Q:** What are the supported OS (Operating Systems)?
+**Q:** What are the supported Operating Systems?
-**A:** Xilinx and Microsoft have validated Ubuntu 18.04 LTS and CentOS 7.8.
+**A:** Xilinx and Microsoft have validated Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, and CentOS 7.8.
Xilinx has created the following marketplace images to simplify the deployment of these VMs.
-[Xilinx Alveo U250 Deployment VM ΓÇô Ubuntu18.04](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/xilinx.xilinx_alveo_u250_deployment_vm_ubuntu1804_032321)
+[Xilinx Alveo U250 2021.1 Deployment VM ΓÇô Ubuntu18.04](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_ubuntu1804_deployment_image)
-[Xilinx Alveo U250 Deployment VM ΓÇô CentOS7.8](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/xilinx.xilinx_alveo_u250_deployment_vm_centos78_032321)
+[Xilinx Alveo U250 2021.1 Deployment VM ΓÇô Ubuntu20.04](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_ubuntu2004_deployment_image)
-**Q:** Can I deploy my Own Ubuntu/CentOS VMs and install XRT/Deployment Target Platform?
+[Xilinx Alveo U250 2021.1 Deployment VM ΓÇô CentOS7.8](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/xilinx.xilinx_xrt2021_1_centos78_deployment_image)
+
+**Q:** Can I deploy my own Ubuntu / CentOS VMs and install XRT / Deployment Target Platform?
**A:** Yes.

**Q:** If I deploy my own Ubuntu 18.04 VM, then what are the required packages and steps?
-**A:** Use Kernel 4.1X per [Xilinx XRT documentation](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug1451-xrt-release-notes.pdf)
+**A:** Use Kernel 4.15 per [Xilinx XRT documentation](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug1451-xrt-release-notes.pdf)
Install the following packages.
-- xrt_202020.2.8.832_18.04-amd64-xrt.deb
+- xrt_202110.2.11.680_18.04-amd64-xrt.deb
-- xrt_202020.2.8.832_18.04-amd64-azure.deb
+- xrt_202110.2.11.680_18.04-amd64-azure.deb
- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz - xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb
-**Q:** On Ubuntu, after rebooting my VM I cannot find my FPGA(s):
+**Q:** On Ubuntu, after rebooting my VM I can't find my FPGA(s):
+
+**A:** Please verify that your kernel hasn't been upgraded (`uname -a`). If it has, please downgrade to kernel 4.1X.
+
+**Q:** If I deploy my own Ubuntu20.04 VM then what are the required packages and steps?
+
+**A:** Use Kernel 5.4 per [Xilinx XRT documentation](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug1451-xrt-release-notes.pdf)
+
+Install the following packages.
+- xrt_202110.2.11.680_20.04-amd64-xrt.deb
+
+- xrt_202110.2.11.680_20.04-amd64-azure.deb
+
+- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz
+
+- xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb
-**A:** Please verify that your kernel has not been upgraded (uname -a). If so, please downgrade to kernel 4.1X.
**Q:** If I deploy my own CentOS7.8 VM then what are the required packages and steps?
Install the following packages.
Install the following packages.
+ - xrt_202110.2.11.680_7.8.2003-x86_64-xrt.rpm
+ - xrt_202110.2.11.680_7.8.2003-x86_64-azure.rpm
- xilinx-u250-gen3x16-xdma-platform-2.1-3.noarch.rpm.tar.gz - xilinx-u250-gen3x16-xdma-validate-2.1-3005608.1.noarch.rpm
-**Q:** When running xbutil validate on CentOS I get this warning: ΓÇ£WARNING: Kernel version 3.10.0-1160.15.2.el7.x86_64 is not officially supported. 4.18.0-193 is the latest supported version.ΓÇ¥
-
-**A:** This can be safely ignored.
- **Q:** What are the differences between OnPrem and NP VMs? **A:**
OnPrem FPGA, both the management endpoint (Device ID 5004) and role endpoint (De
<br> On Azure NP VMs, the XDMA 2.1 platform only supports Host_Mem(SB) and DDR data retention features. <br>
-To enable Host_Mem(SB) (up to 1Gb RAM): sudo xbutil host_mem --enable --size 1g
+To enable Host_Mem(SB) (up to 1 GB RAM): `sudo xbutil host_mem --enable --size 1g`
To disable Host_Mem(SB): `sudo xbutil host_mem --disable`

**Q:** Can I run xbmgmt commands?
-**A:** No, on Azure VMs there is no management support directly from the Azure VM.
+**A:** No, on Azure VMs there's no management support directly from the Azure VM.
**Q:** Do I need to load a PLP?
-**A:** No, the PLP is loaded automatically for you, so there is no need to load via xbmgmt commands.
+**A:** No, the PLP is loaded automatically for you, so there's no need to load via xbmgmt commands.
**Q:** Does Azure support different PLPs?
To disable Host_Mem(SB): sudo xbutil host_mem --disable
- [High performance compute](sizes-hpc.md) - [Previous generations](sizes-previous-gen.md)
-Pricing Calculator : [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
For more information on disk types, see [What disk types are available in Azure?](disks-types.md)