Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | How To Single Page App Vanillajs Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-configure-authentication.md | The application uses the [Implicit Grant Flow](../../develop/v2-oauth2-implicit- 1. Replace the following values with the values from the Azure portal: - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center.- - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen@onmicrosoft.com*, the value you should enter is *casyjensen*. -1. Save the file. + - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +2. Save the file. ## Adding code to the redirection file |
active-directory | How To Single Page App Vanillajs Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-tenant.md | Last updated 06/09/2023 This tutorial series demonstrates how to build a vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences. -In this tutorial, you'll; +In this tutorial: > [!div class="checklist"] > * Register a SPA in the Microsoft Entra admin center, and record its identifiers |
active-directory | How To Single Page Application React Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-app.md | Identity related **npm** packages must be installed in the project to enable use // }; ``` -1. Replace the following values with the values from the Azure admin center: - - Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the **Overview** page of the registered application. - - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen@onmicrosoft.com*, the value you should enter is *casyjensen*. +1. Replace the following values with the values from the Azure portal: + - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center. + - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +2. Save the file. ## Modify *index.js* to include the authentication provider |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | Note important changes to make, before you upgrade to any of the available minor | 1.24 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| No Breaking Changes | None | 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2 | 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None-| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V1 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 preview onwards. +| 1.27 Preview | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>AAD Pod Identity 1.8.13.6<br>Gitops 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 preview onwards. ## Alias minor version > [!NOTE] |
api-management | Soft Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md | Use the API Management [Get By Name](/rest/api/apimanagement/current-ga/deleted- GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01 ``` -If available for undelete, Azure will return a record of the APIM instance showing its `deletionDate` and `scheduledPurgeDate`, for example: +If available for undelete, Azure will return a record of the API Management instance showing its `deletionDate` and `scheduledPurgeDate`, for example: ```json { DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Mic This will permanently delete your API Management instance from Azure. +## Reuse an API Management instance name after deletion ++You **can** reuse the name of an API Management instance in a new deployment: ++* After the instance has been permanently deleted (purged) from Azure. ++* In the same subscription as the original instance. ++You **can't** reuse the name of an API Management instance in a new deployment: ++* While the instance is soft-deleted. ++* In a subscription other than the one used to deploy the original instance, even after the original instance has been permanently deleted (purged) from Azure. This restriction applies whether the new subscription used is in the same or a different Azure Active Directory tenant. The restriction is in effect for several days or longer after deletion, depending on the subscription type. ++ This restriction is because Azure reserves the service host name to a customer's tenant for a reservation period to prevent the threat of subdomain takeover with dangling DNS entries. For more information, see [Prevent dangling DNS entries and avoid subdomain takeover](/azure/security/fundamentals/subdomain-takeover). To see all dangling DNS entries for subscriptions in an Azure AD tenant, see [Identify dangling DNS entries](/azure/security/fundamentals/subdomain-takeover#identify-dangling-dns-entries). ++ ## Next steps Learn about long-term API Management backup and recovery options: |
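For teams scripting this check before reusing a name, a minimal Python sketch against the same REST endpoint might look like the following. The subscription ID, location, and service name are placeholders, and the `properties` nesting assumes the usual ARM response shape shown above:

```python
import os
import requests
from azure.identity import DefaultAzureCredential

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
location = "eastus"            # assumption: region of the deleted instance
service_name = "contoso-apim"  # assumption: name of the deleted instance

# Acquire an ARM access token (works with Azure CLI login, environment
# variables, or a managed identity).
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.ApiManagement/locations/{location}"
    f"/deletedservices/{service_name}?api-version=2021-08-01"
)
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})

if response.status_code == 200:
    props = response.json()["properties"]
    print(f"Soft-deleted on {props['deletionDate']}; purge scheduled for {props['scheduledPurgeDate']}")
elif response.status_code == 404:
    print("No soft-deleted instance with this name; the name may be reusable in this subscription.")
```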
azure-functions | Create First Function Cli Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md | Title: Create a Python function from the command line - Azure Functions description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 03/22/2023 Last updated : 07/15/2023 ms.devlang: python Before you begin, you must have the following requirements in place: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.2.1 or later. + + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. Before you begin, you must have the following requirements in place: [!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)] -### Prerequisite check --Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources. --# [Azure CLI](#tab/azure-cli) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x. -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.0.4785 or later. -+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. --+ Run `python --version` (Linux/macOS) or `py --version` (Windows) to check your Python version reports 3.9.x, 3.8.x, or 3.7.x. --# [Azure PowerShell](#tab/azure-powershell) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x. --+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. --+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. --+ Run `python --version` (Linux/macOS) or `py --version` (Windows) to check your Python version reports 3.9.x, 3.8.x, or 3.7.x. --- ## <a name="create-venv"></a>Create and activate a virtual environment In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.9, 3.8, or 3.7, which are supported by Azure Functions. In Azure Functions, a function project is a container for one or more individual func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" ``` - `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*. -- Get the list of templates by using the following command: -- ```console - func templates list -l python - ``` + `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*. 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime and the specified programming model version. 
```console In Azure Functions, a function project is a container for one or more individual ```console cd LocalFunctionProj ```-+ This folder contains various files for the project, including configuration files named [*local.settings.json*](functions-develop-local.md#local-settings-file) and [*host.json*](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. 1. The file `function_app.py` can include all functions within your project. To start with, there's already an HTTP function stored in the file. -```python -import azure.functions as func --app = func.FunctionApp() --@app.function_name(name="HttpTrigger1") -@app.route(route="hello") -def test_function(req: func.HttpRequest) -> func.HttpResponse: - return func.HttpResponse("HttpTrigger1 function processed a request!") -``` --### (Optional) Examine the file contents --If desired, you can skip to [Run the function locally](#run-the-function-locally) and examine the file contents later. --#### \_\_init\_\_.py --*\_\_init\_\_.py* contains a `main()` Python function that's triggered according to the configuration in *function.json*. ---For an HTTP trigger, the function receives request data in the variable `req` as defined in *function.json*. `req` is an instance of the [azure.functions.HttpRequest class](/python/api/azure-functions/azure.functions.httprequest). The return object, defined as `$return` in *function.json*, is an instance of [azure.functions.HttpResponse class](/python/api/azure-functions/azure.functions.httpresponse). For more information, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python). --#### function.json --*function.json* is a configuration file that defines the input and output `bindings` for the function, including the trigger type. --If desired, you can change `scriptFile` to invoke a different Python file. ---Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type [`httpTrigger`](functions-bindings-http-webhook-trigger.md) and output binding of type [`http`](functions-bindings-http-webhook-output.md). -`function_app.py` is the entry point to the function and where functions will be stored and/or referenced. This file will include configuration of triggers and bindings through decorators, and the function content itself. --For more information, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python). + ```python + import azure.functions as func + + app = func.FunctionApp() + + @app.function_name(name="HttpTrigger1") + @app.route(route="hello") + def test_function(req: func.HttpRequest) -> func.HttpResponse: + return func.HttpResponse("HttpTrigger1 function processed a request!") + ``` + +1. Open the local.settings.json project file and verify that the `AzureWebJobsFeatureFlags` setting has a value of `EnableWorkerIndexing`. This is required for Functions to interpret your project correctly as the Python v2 model. You'll add this same setting to your application settings after you publish your project to Azure. ++1. In the local.settings.json file, update the `AzureWebJobsStorage` setting as in the following example: ++ ```json + "AzureWebJobsStorage": "UseDevelopmentStorage=true", + ``` + This tells the local Functions host to use the storage emulator for the storage connection currently required by the Python v2 model. 
When you publish your project to Azure, you'll need to instead use the default storage account. If you're instead using an Azure Storage account, set your storage account connection string here. ## Start the storage emulator By default, local development uses the Azurite storage emulator. This emulator is used when the `AzureWebJobsStorage` setting in the *local.settings.json* project file is set to `UseDevelopmentStorage=true`. When using the emulator, you must start the local Azurite storage emulator before running the function. |
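With the emulator running and the Functions host started (`func start`), a quick Python sketch can confirm that the HTTP function responds locally. It assumes the Functions host default port 7071 and the `hello` route from the `function_app.py` example above:

```python
import requests

# Call the locally running function; the route matches @app.route(route="hello").
response = requests.get("http://localhost:7071/api/hello")
print(response.status_code, response.text)
```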
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a template for your project's first function**| Choose `HTTP trigger`.| |**Provide a function name**| Enter `HttpExample`.| |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**| Choose `Open in current window`.| 4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files). ::: zone-end In this section, you use Visual Studio Code to create a local Azure Functions pr |--|--| |**Select a language**| Choose `Python (Programming Model V2)`.| |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|- |**Select how you would like to open your project**| Choose `Open in current window`.| + |**Select a template for your project's first function** | Choose `HTTP trigger`. | + |**Name of the function you want to create**| Enter `HttpExample`.| + |**Authorization level**| Choose `ANONYMOUS`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| -4. Visual Studio Code uses the provided information and generates an Azure Functions project. --5. Open the generated `function_app.py` project file, which contains your functions. --6. Uncomment the `test_function` function, which is an HTTP triggered function. --7. Replace the `app.route()` method call with the following code: -- ```python - @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) - ``` -- This code enables your HTTP function endpoint to be called in Azure without having to provide an [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys). Local execution doesn't require authorization keys. -- Your function code should now look like the following example: -- ```python - app = func.FunctionApp() - @app.function_name(name="HttpTrigger1") - @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) - def test_function(req: func.HttpRequest) -> func.HttpResponse: - logging.info('Python HTTP trigger function processed a request.') -- name = req.params.get('name') - if not name: - try: - req_body = req.get_json() - except ValueError: - pass - else: - name = req_body.get('name') -- if name: - return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.") - else: - return func.HttpResponse( - "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", - status_code=200 - ) - ``` +4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. 
The generated `function_app.py` project file contains your functions. -8. Open the local.settings.json project file and updated the `AzureWebJobsStorage` setting as in the following example: +5. Open the local.settings.json project file and verify that the `AzureWebJobsFeatureFlags` setting has a value of `EnableWorkerIndexing`. This is required for Functions to interpret your project correctly as the Python v2 model. You'll add this same setting to your application settings after you publish your project to Azure. ++6. In the local.settings.json file, update the `AzureWebJobsStorage` setting as in the following example: ```json "AzureWebJobsStorage": "UseDevelopmentStorage=true", ``` - This tells the local Functions host to use the storage emulator for the storage connection currently required by the v2 model. When you publish your project to Azure, you'll instead use the default storage account. If you're instead using an Azure Storage account, set your storage account connection string here. + This tells the local Functions host to use the storage emulator for the storage connection currently required by the Python v2 model. When you publish your project to Azure, you'll need to instead use the default storage account. If you're instead using an Azure Storage account, set your storage account connection string here. ## Start the emulator |
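Taken together, the two settings called out above leave a *local.settings.json* along these lines. This is a minimal sketch: `FUNCTIONS_WORKER_RUNTIME` is the standard Python worker setting, and your generated file may contain additional values:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```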
azure-functions | Functions Machine Learning Tensorflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-machine-learning-tensorflow.md | In Azure Functions, a function project is a container for one or more individual func new --name classify --template "HTTP trigger" ``` - This command creates a folder matching the name of the function, *classify*. In that folder are two files: *\_\_init\_\_.py*, which contains the function code, and *function.json*, which describes the function's trigger and its input and output bindings. For details on the contents of these files, see [Examine the file contents](./create-first-function-cli-python.md#optional-examine-the-file-contents) in the Python quickstart. + This command creates a folder matching the name of the function, *classify*. In that folder are two files: *\_\_init\_\_.py*, which contains the function code, and *function.json*, which describes the function's trigger and its input and output bindings. For details on the contents of these files, see [Programming model](./functions-reference-python.md?pivots=python-mode-configuration#programming-model) in the Python developer guide. ## Run the function locally |
azure-functions | Machine Learning Pytorch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/machine-learning-pytorch.md | In Azure Functions, a function project is a container for one or more individual func new --name classify --template "HTTP trigger" ``` - This command creates a folder matching the name of the function, *classify*. In that folder are two files: *\_\_init\_\_.py*, which contains the function code, and *function.json*, which describes the function's trigger and its input and output bindings. For details on the contents of these files, see [Examine the file contents](./create-first-function-cli-python.md#optional-examine-the-file-contents) in the Python quickstart. + This command creates a folder matching the name of the function, *classify*. In that folder are two files: *\_\_init\_\_.py*, which contains the function code, and *function.json*, which describes the function's trigger and its input and output bindings. For details on the contents of these files, see [Programming model](./functions-reference-python.md?pivots=python-mode-configuration#programming-model) in the Python developer guide. ## Run the function locally |
azure-monitor | Container Insights Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md | The following table summarizes known errors you might encounter when you use Con | Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. | | "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Legacy solutions** from the pane on the left side. To resolve this issue, redeploy the solution. Follow the instructions in [Enable Container insights](container-insights-onboard.md). | -To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot). +To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_prod/scripts/troubleshoot). + ## Authorization error during onboarding or update operation |
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 02/28/2023 Last updated : 07/14/2023 +content_well_notification: + - AI-contribution # Resource providers for Azure services The resources providers that are marked with **- registered** are registered by ## Registration -The resources providers in the preceding section that are marked with **- registered** are registered by default for your subscription. To use the other resource providers, you must [register them](resource-providers-and-types.md). However, many resource providers are registered for you when you take certain actions. For example, if you create a resource through the portal, the portal automatically registers any unregistered resource providers that are needed. When deploy resources through an [Azure Resource Manager template](../templates/overview.md), any required resource providers are also registered. +Resource providers marked with **- registered** in the previous section are automatically registered for your subscription. For other resource providers, you need to [register them](resource-providers-and-types.md). However, many resource providers are registered automatically when you perform specific actions. For example, when you create resources through the portal or by deploying an [Azure Resource Manager template](../templates/overview.md), Azure Resource Manager automatically registers any required unregistered resource providers. > [!IMPORTANT]-> Only register a resource provider when you're ready to use it. The registration step enables you to maintain least privileges within your subscription. A malicious user can't use resource providers that aren't registered. +> Register a resource provider only when you're ready to use it. This registration step helps maintain least privileges within your subscription. A malicious user can't use unregistered resource providers. >-> When you register resource providers that aren't needed, you may see apps in your Azure Active Directory tenant that you don't recognize. Microsoft adds the app for a resource provider when you register it. These applications are typically added by Windows Azure Service Management API. To avoid having unnecessary apps in your tenant, only register resource providers that are needed. +> Registering unnecessary resource providers may result in unrecognized apps appearing in your Azure Active Directory tenant. Microsoft adds the app for a resource provider when you register it. These apps are typically added by the Windows Azure Service Management API. To prevent unnecessary apps in your tenant, only register needed resource providers. ## Find resource provider -If you have existing infrastructure in Azure, but aren't sure which resource provider is used, you can use either Azure CLI or PowerShell to find the resource provider. Specify the name of the resource group that contains the resources to find. +To identify resource providers used for your existing Azure infrastructure, list the deployed resources. Specify the resource group containing the resources. 
The following example uses Azure CLI: ```azurecli-interactive-az resource list -g examplegroup +az resource list --resource-group examplegroup ``` The results include the resource type. The resource provider namespace is the first part of the resource type. The following example shows the **Microsoft.KeyVault** resource provider. -```json +```output [ { ... Get-AzResource -ResourceGroupName examplegroup The results include the resource type. The resource provider namespace is the first part of the resource type. The following example shows the **Microsoft.KeyVault** resource provider. -```azurepowershell +```output Name : examplekey ResourceGroupName : examplegroup ResourceType : Microsoft.KeyVault/vaults ... ``` +The following example uses Python: ++```python +import os +from azure.identity import DefaultAzureCredential +from azure.mgmt.resource import ResourceManagementClient ++subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] +credential = DefaultAzureCredential() +resource_client = ResourceManagementClient(credential, subscription_id) ++resource_group_name = "examplegroup" +resources = resource_client.resources.list_by_resource_group(resource_group_name) ++for resource in resources: + print(resource.type) +``` ++The results list the resource type. The resource provider namespace is the first part of the resource type. The following example shows the **Microsoft.KeyVault** resource provider. ++```output +Microsoft.KeyVault/vaults +``` + ## Next steps For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md). |
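If the listing surfaces a provider namespace that isn't registered yet, the same `ResourceManagementClient` from the Python example above can register it. A minimal sketch, with `Microsoft.KeyVault` as an example namespace:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Register a resource provider namespace for the subscription; registration
# is idempotent and can take a few minutes to complete.
provider = resource_client.providers.register("Microsoft.KeyVault")
print(provider.namespace, provider.registration_state)
```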
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/28/2023 Last updated : 07/14/2023 Azure NetApp Files datastores for Azure VMware Solution are currently supported * Japan West * North Central US * North Europe+* Qatar Central * South Africa North * South Central US * Southeast Asia Azure NetApp Files datastores for Azure VMware Solution are currently supported * West Europe * West US * West US 2-+* West US 3 ## Performance best practices |
chaos-studio | Chaos Studio Fault Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md | Currently, the Windows agent doesn't reduce memory pressure when other applicati ``` ### Limitations-* **Windows**: Service-friendly names aren't supported. Use `sc.exe query` in the command prompt to explore service names. +* **Windows**: Display names for services aren't supported. Use `sc.exe query` in the command prompt to explore service names. * **Linux**: Other service types besides systemd, like sysvinit, aren't supported. ## Time change Currently, the Windows agent doesn't reduce memory pressure when other applicati | Prerequisites | None. | | Urn | urn:csci:microsoft:agent:killProcess/1.0 | | Parameters (key, value) | |-| processName | Name of a process running on a VM (without the .exe). | +| processName | Name of a process to continuously kill (without the .exe). The process does not need to be running when the fault begins executing. | | killIntervalInMilliseconds | Amount of time the fault waits in between successive kill attempts in milliseconds. | | virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. | Currently, the Windows agent doesn't reduce memory pressure when other applicati } ``` +### Limitations ++* The agent-based network faults currently only support IPv4 addresses. ++ ## Network disconnect | Property | Value | Currently, the Windows agent doesn't reduce memory pressure when other applicati } ``` -> [!WARNING] -> The network disconnect fault only affects new connections. Existing *active* connections continue to persist. You can restart the service or process to force connections to break. +### Limitations ++* The agent-based network faults currently only support IPv4 addresses. +* The network disconnect fault only affects new connections. Existing active connections continue to persist. You can restart the service or process to force connections to break. +* On Windows, the network disconnect fault currently only works with TCP or UDP packets. ## Network disconnect with firewall rule Currently, the Windows agent doesn't reduce memory pressure when other applicati } ``` +### Limitations ++* The agent-based network faults currently only support IPv4 addresses. ++## Network packet loss ++| Property | Value | +|-|-| +| Capability name | NetworkPacketLoss-1.0 | +| Target type | Microsoft-Agent | +| Supported OS types | Windows, Linux | +| Description | Introduces packet loss for outbound traffic at a specified rate, between 0.0 (no packets lost) and 1.0 (all packets lost). This can help simulate scenarios like network congestion or network hardware issues. | +| Prerequisites | Agent must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. | +| Urn | urn:csci:microsoft:agent:networkPacketLoss/1.0 | +| Parameters (key, value) | | +| lossRate | The rate at which packets matching the destination filters will be lost, ranging from 0.0 to 1.0. | +| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. | +| destinationFilters | Delimited JSON array of packet filters (parameters below) that define which outbound packets to target for fault injection. Maximum of three.| +| address | IP address that indicates the start of the IP range. 
| +| subnetMask | Subnet mask for the IP address range. | +| portLow | (Optional) Port number of the start of the port range. | +| portHigh | (Optional) Port number of the end of the port range. | ++### Sample JSON ++```json +{ + "name": "branchOne", + "actions": [ + { + "type": "continuous", + "name": "urn:csci:microsoft:agent:networkPacketLoss/1.0", + "parameters": [ + { + "key": "destinationFilters", + "value": "[{\"address\":\"23.45.229.97\",\"subnetMask\":\"255.255.255.224\",\"portLow\":5000,\"portHigh\":5200}]" + }, + { + "key": "lossRate", + "value": "0.5" + }, + { + "key": "virtualMachineScaleSetInstances", + "value": "[0,1,2]" + } + ], + "duration": "PT10M", + "selectorid": "myResources" + } + ] +} +``` ++### Limitations ++* The agent-based network faults currently only support IPv4 addresses. + ## Azure Resource Manager virtual machine shutdown | Property | Value | |-|-| Currently, the Windows agent doesn't reduce memory pressure when other applicati ## Azure Resource Manager virtual machine scale set instance shutdown -This fault has two available versions that you can use, Version 1.0 and Version 2.0. +This fault has two available versions that you can use, Version 1.0 and Version 2.0. The main difference is that Version 2.0 allows you to filter by availability zones, only shutting down instances within a specified zone or zones. ### Version 1.0 Currently, only virtual machine scale sets configured with the **Uniform** orche "parameters": [ { "key": "jsonSpec",- "value": "{\"action\":\"pod-failure\",\"mode\":\"one\",\"duration\":\"30s\",\"selector\":{\"labelSelectors\":{\"app.kubernetes.io\/component\":\"tikv\"}}}" + "value": "{\"action\":\"pod-failure\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app.kubernetes.io\/component\":\"tikv\"}}}" } ], "selectorid": "myResources" Currently, only virtual machine scale sets configured with the **Uniform** orche "parameters": [ { "key": "jsonSpec",- "value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"etcd\"}},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50,\"duration\":\"400s\"}" + "value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"etcd\"}},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50}" } ], "selectorid": "myResources" Currently, only virtual machine scale sets configured with the **Uniform** orche "parameters": [ { "key": "jsonSpec",- "value": "{\"mode\":\"all\",\"selector\":{\"labelSelectors\":{\"app\":\"nginx\"}},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"\/api\",\"abort\":true,\"duration\":\"5m\",\"scheduler\":{\"cron\":\"@every 10m\"}}" + "value": "{\"mode\":\"all\",\"selector\":{\"labelSelectors\":{\"app\":\"nginx\"}},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"\/api\",\"abort\":true,\"scheduler\":{\"cron\":\"@every 10m\"}}" } ], "selectorid": "myResources" Currently, only virtual machine scale sets configured with the **Uniform** orche | Parameters (key, value) | | | name | A unique name for the security rule that's created. The fault fails if another rule already exists on the NSG with the same name. Must begin with a letter or number. Must end with a letter, number, or underscore. May contain only letters, numbers, underscores, periods, or hyphens. | | protocol | Protocol for the security rule. Must be Any, TCP, UDP, or ICMP. 
|-| sourceAddresses | A string that represents a JSON-delimited array of CIDR-formatted IP addresses. Can also be a service tag name for an inbound rule, for example, `AppService`. An asterisk `*` can also be used to match all source IPs. | -| destinationAddresses | A string that represents a JSON-delimited array of CIDR-formatted IP addresses. Can also be a service tag name for an outbound rule, for example, `AppService`. An asterisk `*` can also be used to match all destination IPs. | +| sourceAddresses | A string that represents a JSON-delimited array of CIDR-formatted IP addresses. Can also be a [service tag name](../virtual-network/service-tags-overview.md) for an inbound rule, for example, `AppService`. An asterisk `*` can also be used to match all source IPs. | +| destinationAddresses | A string that represents a JSON-delimited array of CIDR-formatted IP addresses. Can also be a [service tag name](../virtual-network/service-tags-overview.md) for an outbound rule, for example, `AppService`. An asterisk `*` can also be used to match all destination IPs. | | action | Security group access type. Must be either Allow or Deny. | | destinationPortRanges | A string that represents a JSON-delimited array of single ports and/or port ranges, such as 80 or 1024-65535. | | sourcePortRanges | A string that represents a JSON-delimited array of single ports and/or port ranges, such as 80 or 1024-65535. | Currently, only virtual machine scale sets configured with the **Uniform** orche ] } ```++## App Service Stop + +| Property | Value | +| - | | +| Capability name | Stop-1.0 | +| Target type | Microsoft-AppService | +| Description | Stops the targeted App Service applications, then restarts them at the end of the fault duration. This applies to resources of the "Microsoft.Web/sites" type, including App Service, API Apps, Mobile Apps, and Azure Functions. | +| Prerequisites | None. | +| Urn | urn:csci:microsoft:appService:stop/1.0 | +| Fault type | Continuous. | +| Parameters (key, value) | None. | ++### Sample JSON ++```json +{ + "name": "branchOne", + "actions": [ + { + "type": "continuous", + "name": "urn:csci:microsoft:appService:stop/1.0", + "duration": "PT10M", + "parameters":[], + "selectorid": "myResources" + } + ] +} +``` |
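Because fault parameters such as `destinationFilters` are JSON arrays serialized into strings, hand-escaping them is error-prone. A small Python sketch, reusing the values from the network packet loss sample above, shows one way to generate the escaped value:

```python
import json

# Packet filters from the network packet loss sample above.
filters = [
    {
        "address": "23.45.229.97",
        "subnetMask": "255.255.255.224",
        "portLow": 5000,
        "portHigh": 5200,
    }
]

# json.dumps produces the inner JSON; embedding that string as the parameter
# value yields the escaped form shown in the sample JSON.
parameter = {"key": "destinationFilters", "value": json.dumps(filters)}
print(json.dumps(parameter, indent=2))
```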
chaos-studio | Chaos Studio Fault Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md | The following table lists the supported resource types for faults, the target ty | Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | Web Plan Contributor | | Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Azure Key Vault Contributor | | Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |+| Microsoft.Web/sites (service-direct) | Microsoft-AppService | Website Contributor | |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | Types of system messages: ## Real-time notifications -Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This feature lets clients listen to Communication Services for real-time updates and incoming messages to a chat thread without having to poll the APIs. The client app can subscribe to following events: +JavaScript Chat SDK supports real-time notifications. This feature lets clients listen to Communication Services for real-time updates and incoming messages to a chat thread without having to poll the APIs. +Use an Event Grid resource to subscribe to chat-related events (post operation), which can be plugged into your custom application notification service. Once you set up the Event Grid resource, you'll need to [validate](../../how-tos/event-grid/view-events-request-bin.md) and [locally test](../../how-tos/event-grid/local-testing-event-grid.md) events to ensure that they're being sent. ++The client app can subscribe to the following events: - `chatMessageReceived` - when a new message is sent to a chat thread by a participant. - `chatMessageEdited` - when a message is edited in a chat thread. - `chatMessageDeleted` - when a message is deleted in a chat thread. Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f - `realTimeNotificationConnected` - when real time notification is connected. - `realTimeNotificationDisconnected` -when real time notification is disconnected. -## Push notifications -To send push notifications for messages missed by your users while they were away, Communication Services provides two different ways to integrate: +> [!NOTE] +> Real-time notifications are not to be used with server applications. ++For more information, see [Server Events](../../../event-grid/event-schema-communication-services.md?bc=/azure/bread/toc.json&toc=/azure/communication-services/toc.json). ++## Push notifications ++Android and iOS Chat SDKs support push notifications. To send push notifications for messages missed by your users while they were away, connect a Notification Hub resource with your Communication Services resource to send push notifications and notify your application users about incoming chats and messages when the mobile app is not running in the foreground. - IOS and Android SDK can support the below event: - - `chatMessageReceived` - when a new message is sent to a chat thread by a participant. +The iOS and Android SDKs support the following event: +- `chatMessageReceived` - when a new message is sent to a chat thread by a participant. - Android SDK can support additional events: - - `chatMessageEdited` - when a message is edited in a chat thread. - - `chatMessageDeleted` - when a message is deleted in a chat thread. - - `chatThreadCreated` - when a Communication Services user creates a chat thread. - - `chatThreadDeleted` - when a Communication Services user deletes a chat thread. - - `chatThreadPropertiesUpdated` - when chat thread properties are updated; currently, only updating the topic for the thread is supported. - - `participantsAdded` - when a user is added as a chat thread participant. - - `participantsRemoved` - when an existing participant is removed from the chat thread. +The Android SDK supports additional events: +- `chatMessageEdited` - when a message is edited in a chat thread. +- `chatMessageDeleted` - when a message is deleted in a chat thread.
+- `chatThreadCreated` - when a Communication Services user creates a chat thread. +- `chatThreadDeleted` - when a Communication Services user deletes a chat thread. +- `chatThreadPropertiesUpdated` - when chat thread properties are updated; currently, only updating the topic for the thread is supported. +- `participantsAdded` - when a user is added as a chat thread participant. +- `participantsRemoved` - when an existing participant is removed from the chat thread. For more information, see [Push Notifications](../notifications.md). This way, the message history contains both original and translated messages. In > [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you:-- Familiarize yourself with the [Chat SDK](sdk-features.md)+- Familiarize yourself with the [Chat SDK](sdk-features.md) |
databox-online | Azure Stack Edge Deploy Aks On Azure Stack Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md | Depending on the workloads you intend to deploy, you may need to ensure the foll ```azurepowershell az login- az ad sp show --id 'bc313c14-387c-4e7d-a58e-70417303ee3b' --query id -o tsv + az ad sp show --id bc313c14-387c-4e7d-a58e-70417303ee3b --query id -o tsv ``` Here's a sample output using the Azure CLI. You can run the same commands via the Cloud Shell in the Azure portal. |
databox-online | Azure Stack Edge Gpu Virtual Machine Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md | |
iot-edge | Iot Edge Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md | sequenceDiagram EdgeGateway->>ContosoIotHub: Let's talk securely with TLS 🔒 EdgeGateway->>ContosoIotHub: Here's my certificate 📜- ContosoIotHub->>ContosoIotHub: Check if certificate thumbprint matches record note over EdgeGateway, ContosoIotHub: Cryptographic algorithms+ ContosoIotHub->>ContosoIotHub: Check if certificate thumbprint matches record ContosoIotHub->>EdgeGateway: Great, let's connect --> |
lighthouse | Manage Sentinel Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-sentinel-workspaces.md | You can also deploy workbooks directly in an individual managed tenant for scena ## Run Log Analytics and hunting queries across Microsoft Sentinel workspaces -Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-hunting). These queries can be run across all of your customers' Microsoft Sentinel workspaces by using the Union operator and the [workspace() expression](../../azure-monitor/logs/workspace-expression.md). +Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#hunt-across-multiple-workspaces). These queries can be run across all of your customers' Microsoft Sentinel workspaces by using the Union operator and the [workspace() expression](../../azure-monitor/logs/workspace-expression.md). -For more information, see [Cross-workspace querying](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-querying). +For more information, see [Cross-workspace querying](../../sentinel/extend-sentinel-across-workspaces-tenants.md#query-multiple-workspaces). ## Use automation for cross-workspace management -You can use automation to manage multiple Microsoft Sentinel workspaces and configure [hunting queries](../../sentinel/hunting.md), playbooks, and workbooks. For more information, see [Cross-workspace management using automation](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-management-using-automation). +You can use automation to manage multiple Microsoft Sentinel workspaces and configure [hunting queries](../../sentinel/hunting.md), playbooks, and workbooks. For more information, see [Cross-workspace management using automation](../../sentinel/extend-sentinel-across-workspaces-tenants.md#manage-multiple-workspaces-using-automation). ## Monitor security of Office 365 environments |
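As a sketch of how such a query can be run programmatically, the following Python example uses the `azure-monitor-query` client with the Union operator and `workspace()` expression described above. The workspace name and ID are placeholders:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Union the managing workspace's SecurityEvent table with one customer
# workspace, referenced through the workspace() expression.
query = """
union SecurityEvent, workspace("customer-workspace-name").SecurityEvent
| summarize count() by Computer
"""

response = client.query_workspace(
    workspace_id="<managing-workspace-guid>",  # placeholder workspace ID
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```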
logic-apps | Create Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md | Before you can use your logic app's managed identity for authentication, you hav > the resource. Likewise, if you have to select your subscription before you can select the > target resource, you must give the identity access to the subscription. +> [!NOTE] +> In some cases, you might need the identity to have access to the associated resource. For example, +> suppose you have a managed identity for a logic app that needs access to update the application +> settings for that same logic app from a workflow. You must give that identity access to the associated logic app. + For example, to access an Azure Blob storage account with your managed identity, you have to set up access by using Azure role-based access control (Azure RBAC) and assign the appropriate role for that identity to the storage account. The steps in this section describe how to complete this task by using the [Azure portal](#azure-portal-assign-role) and [Azure Resource Manager template (ARM template)](../role-based-access-control/role-assignments-template.md). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation: | Tool | Documentation | |
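For example, to grant a logic app's managed identity data access to a storage account from code rather than the portal, a hedged Python sketch with the Azure SDK might look like the following. The resource names and principal ID are placeholders; the role GUID is the built-in Storage Blob Data Contributor definition:

```python
import os
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope: the storage account the identity needs to access (placeholder names).
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/examplegroup"
    "/providers/Microsoft.Storage/storageAccounts/examplestorage"
)

# Built-in "Storage Blob Data Contributor" role definition.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"
)

auth_client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<managed-identity-object-id>",  # placeholder object ID
        principal_type="ServicePrincipal",  # managed identities assign as service principals
    ),
)
```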
migrate | Concepts Migration Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md | When you're ready for migration, use the Migration and modernization tool, and t - With the Migration and modernization tool, you can migrate on-premises VMs and servers, or VMs located in other private or public cloud (including AWS, GCP) with around zero downtime. - Azure DMS provides a fully managed service that's designed to enable seamless migrations from multiple database sources to Azure Data platforms, with minimal downtime. +### Upgrade Windows OS ++Azure Migrate gives customers the option to seamlessly upgrade their Windows Server OS during migration. Azure Migrate OS upgrade allows you to move from an older operating system to a newer one while keeping your settings, server roles, and data intact. [Learn more](how-to-upgrade-windows.md). ++Azure Migrate OS upgrade uses an Azure VM [Custom script extension](../virtual-machines/extensions/custom-script-windows.md) to perform the following activities for an in-place upgrade experience: ++- A data disk containing Windows Server setup files is created and attached to the VM. +- A Custom Script Extension called `InPlaceOsUpgrade` is enabled on the VM, which downloads a script from the storage account and initiates the upgrade in quiet mode. ++ ## Next steps - Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework. |
migrate | How To Upgrade Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md | + + Title: Upgrade Windows Operating System +description: Learn how to upgrade Windows OS during migration. +++ms. + Last updated : 07/10/2023++++# Azure Migrate Windows Server upgrade (Preview) ++This article describes how to upgrade Windows Server OS while migrating to Azure. Azure Migrate OS upgrade allows you to move from an older operating system to a newer one while keeping your settings, server roles, and data intact. You can move your on-premises server to Azure with an upgraded OS version of Windows Server using Windows upgrade. ++> [!NOTE] +> This feature is currently available only for VMware agentless migration. ++## Prerequisites ++- Ensure you have an existing Migrate project or [create](create-manage-projects.md) a project. +- Ensure you have discovered the servers according to [Discover servers in VMware environment](tutorial-discover-vmware.md) and replicated the servers as described in [Migrate VMware VMs](tutorial-migrate-vmware.md#replicate-vms). +- Verify the operating system disk has enough [free space](https://learn.microsoft.com/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements) to perform the in-place upgrade. The minimum disk space requirement is 32 GB. +- The upgrade feature only works for Windows Server Standard and Datacenter editions. +- This feature does not work for Windows Server with an evaluation license and needs a full license. If you have any server with an evaluation license, upgrade to the full edition before starting migration to Azure. +- Disable antivirus and anti-spyware software and firewalls. These types of software can conflict with the upgrade process. Re-enable antivirus and anti-spyware software and firewalls after the upgrade is completed. +- Ensure that your VM can accept an additional data disk, because this feature temporarily attaches an extra data disk for a seamless upgrade experience. +- For Private Endpoint-enabled Azure Migrate projects, follow [these](migrate-servers-to-azure-using-private-link.md?pivots=agentlessvmware#replicate-vms) steps before initiating any Test migration/Migration with OS upgrade. +++> [!NOTE] +> In case of OS upgrade failure, Azure Migrate may download the Windows SetupDiag for error details. Ensure that the VM created in Azure after the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). If there's no access to SetupDiag, you may not get detailed OS upgrade failure error codes, but the upgrade can still proceed. ++## Overview ++The Windows OS upgrade capability helps you move from an older operating system to a newer one while keeping your settings, server roles, and data intact. Since both upgrade and migration operations are completed at once, this reduces duplicate planning, downtime, and test efforts. The upgrade capability also reduces the risk, as customers can first test their OS upgrade in an isolated environment in Azure using test migration without any impact on their on-premises server. ++You can upgrade up to two versions from the current version, as shown in the following table.
++**Source** | **Supported target versions** + | +Windows Server 2012 | Windows Server 2016 +Windows Server 2012 R2 | Windows Server 2016, Windows Server 2019 +Windows Server 2016 | Windows Server 2019, Windows Server 2022 +Windows Server 2019 | Windows Server 2022 ++## Upgrade Windows OS during test migration ++To upgrade Windows during the test migration, follow these steps: ++1. On the **Get started** page > **Servers, databases and web apps**, select **Replicate**. + + A Start Replication job begins. When the Start Replication job finishes successfully, the machines begin their initial replication to Azure. ++3. Select **Replicating servers** in **Migration and modernization** to monitor the replication status. ++4. In **Migration goals** > **Servers, databases and webapps** > **Migration and modernization**, select **Replicated servers** under **Replications**. ++5. In the **Replicating machines** tab, right-click the VM to test and select **Test migrate**. ++6. Select the **Upgrade available** option. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. ++7. Select **Test migration** to initiate the test migration followed by the OS upgrade. ++8. After the migration job is successful, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has the suffix *-Test*. ++ You can now use this server with upgraded OS to complete any application testing. The original server continues running on-premises without any impact while you test the newly upgraded server in an isolated environment. ++9. After the test is done, right-click the Azure VM in **Replicating machines**, and select **Clean up test migration**. This deletes the test VM and any resources associated with it. ++## Upgrade Windows OS during migration ++After you've verified that the test migration works as expected, you can migrate the on-premises machines. To upgrade Windows during the migration, follow these steps: ++1. On the **Get started** page > **Servers, databases and web apps**, select **Replicate**. A Start Replication job begins. +2. In **Replicating machines**, right-click the VM and select **Migrate**. +3. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. + - By default, Azure Migrate shuts down the on-premises VM to ensure minimum data loss. + - If you don't want to shut down the VM, select No. +4. Select the **Upgrade available** option. +5. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. The upgrade available option changes to upgrade configured. ++5. Select **Migrate** to start the migration and the upgrade. ++## Next steps ++Investigate the [cloud migration journey](https://learn.microsoft.com/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework. ++ |
migrate | Troubleshoot Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-upgrade.md | + + Title: Troubleshoot Windows upgrade issues +description: Provides an overview of known issues in the Windows OS upgrade feature ++++ Last updated : 06/08/2023++++# Troubleshoot Windows OS upgrade issues ++This article describes some common issues that you might encounter when you upgrade the Windows OS using the Migration and modernization tool. ++## Cannot attach the OS setup disk because the VM reached the maximum number of disks allowed ++Test migration and migration fail in the prerequisite stage if the VM already has the maximum number of data disks allowed for its SKU. Because the workflow temporarily creates an additional data disk, the VM must have at most n-1 data disks, where n is the maximum number of disks supported by the VM's SKU, for the upgrade to complete successfully. ++### Recommended action ++Select a different target Azure VM SKU that can attach more data disks and retry the operation. Because Azure Migrate completed the migration and created a VM in Azure, retry the operation by following these steps: ++1. Clean up the migration: + 1. Test migration: Right-click the Azure VM in **Replications** and select **Clean up test migration**. + 1. Migration: Because the VM is already migrated to Azure, follow [these](https://learn.microsoft.com/azure/virtual-machines/windows-in-place-upgrade) steps to upgrade the OS. ++2. Update the target VM SKU settings: + 1. In the Azure portal, select the Azure Migrate project. + 2. Go to **Migration tools** > **Replications** > Azure VM count. + 3. Select the replicating machine. + 4. Go to **Compute and network**. + 5. In **Compute**, change the VM size to support more data disks. ++3. Verify that the operating system disk has enough [free space](https://learn.microsoft.com/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements) to perform the in-place upgrade. The minimum disk space requirement is 32 GB. If more space is needed, follow [these](https://learn.microsoft.com/azure/virtual-machines/windows/expand-os-disk) steps to expand the operating system disk attached to the VM for a successful OS upgrade. ++## Migration fails for Private endpoint enabled Azure Migrate projects ++The migration fails if the storage account that you select for replicating VMs doesn't have the firewall settings of the target VNET. ++### Recommended action ++Add the target VNET to the firewall of the storage account that you selected for replicating VMs: ++1. Go to **Networking** > **Firewall and Virtual Networks** > **Public Network Access – Enabled from selected Virtual Network and IP address** > **Virtual Network** > Add existing Virtual Network and add your target VNET. Then proceed with the test migration or migration. ++2. Perform the initial replication by following [these](migrate-servers-to-azure-using-private-link.md?pivots=agentlessvmware#replicate-vms) steps. ++## Server is migrated without OS upgrade with status “Completed with errors” ++If the source OS version and the target OS version selected for the upgrade are the same, the server migrates without an OS upgrade and finishes with the status **Completed with errors**. For example, if the source OS version is Windows Server 2019 and the upgrade option selected is Windows Server 2019, the server is migrated without an OS upgrade with the status **Completed with errors**.
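To rule this out before you retry, you can confirm the server's current OS version and edition and then pick a newer target version. A minimal sketch, assuming PowerShell 5.1 or later on the source server:

```powershell
# Show the installed OS name, version, and edition; the upgrade target
# must be a newer version than the one reported here.
Get-ComputerInfo -Property OsName, OsVersion, WindowsEditionId
```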
++### Recommended action ++Ensure the current OS version is different from the target OS version. ++## Next steps ++[Learn more](tutorial-migrate-vmware.md) about migrating VMware VMs. |
migrate | Tutorial Migrate Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md | Before you begin this tutorial, you should: 3. Go to the already created project or [create a new project](./create-manage-projects.md) 4. Verify permissions for your Azure account - Your Azure account needs permissions to create a VM and write to an Azure managed disk. +> [!NOTE] +> If you're planning to upgrade your Windows operating system, Azure Migrate might download Windows SetupDiag to collect error details if the upgrade fails. Ensure the VM created in Azure after the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). If the VM doesn't have access to SetupDiag, you might not get detailed OS upgrade failure error codes, but the upgrade can still proceed. + ## Set up the Azure Migrate appliance The Migration and modernization tool runs a lightweight VMware VM appliance that's used for discovery, assessment, and agentless migration of VMware VMs. If you follow the [assessment tutorial](./tutorial-assess-vmware-azure-vm.md), you've already set the appliance up. If you didn't, set it up now, using one of these methods: The Migration and modernization tool runs a lightweight VMware VM appliance that - **OVA template**: [Set up](how-to-set-up-appliance-vmware.md) on a VMware VM using a downloaded OVA template. - **Script**: [Set up](deploy-appliance-script.md) on a VMware VM or physical machine, using a PowerShell installer script. This method should be used if you can't set up a VM using an OVA template, or if you're in Azure Government. -After creating the appliance, you check that it can connect to Azure Migrate:Server Assessment, configure it for the first time, and register it with the Azure Migrate project. +After creating the appliance, you check that it can connect to Azure Migrate: Server Assessment, configure it for the first time, and register it with the Azure Migrate project. ## Replicate VMs Enable replication as follows: 11. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements). - - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**. + - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**. - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer. - **Availability Zone**: Specify the Availability Zone to use. - **Availability Set**: Specify the Availability Set to use. Do a test migration as follows: 4. Choose the subnet to which you would like to associate each of the Network Interface Cards (NICs) of the migrated VM. :::image type="content" source="./media/tutorial-migrate-vmware/test-migration-subnet-selection.png" alt-text="Screenshot shows subnet selection during test migration.":::-+1. You have an option to upgrade the Windows Server OS during test migration. To upgrade, select the **Upgrade available** option. 
In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. [Learn more](how-to-upgrade-windows.md). 5. The **Test migration** job starts. Monitor the job in the portal notifications. 6. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**. 7. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**. After you've verified that the test migration works as expected, you can migrate 3. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. - By default, Azure Migrate shuts down the on-premises VM, and runs an on-demand replication to synchronize any VM changes that occurred since the last replication. This ensures no data loss. - If you don't want to shut down the VM, select **No**+1. You have an option to upgrade the Windows Server OS during migration. To upgrade, select the **Upgrade available** option. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. [Learn more](how-to-upgrade-windows.md). 4. A migration job starts for the VM. Track the job in Azure notifications. 5. After the job finishes, you can view and manage the VM from the **Virtual Machines** page. |
migrate | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md | +## Update (July 2023) +- Public Preview: Upgrade your Windows OS during migration using the Migration and modernization tool in your VMware environment. [Learn more](how-to-upgrade-windows.md). + ## Update (June 2023) - Envision security cost savings with [Microsoft Defender for Cloud (MDC)](https://www.microsoft.com/security/business/cloud-security/microsoft-defender-cloud) using Azure Migrate business case. -- Resolve issues impacting the performance data collection and accuracy of Azure VM and Azure VMware Solution assessment recommendation and improve the confidence ratings of assessments.[Learn more](common-questions-discovery-assessment.md).+- Resolve issues impacting the performance data collection and accuracy of Azure VM and Azure VMware Solution assessment recommendation and improve the confidence ratings of assessments. [Learn more](common-questions-discovery-assessment.md). + ## Update (May 2023) - SQL Server discovery and assessment in Azure Migrate is now Generally Available (GA). [Learn more](concepts-azure-sql-assessment-calculation.md). ## Update (April 2023)-- Build a quick business case for servers imported via a .csv file. [Learn more](tutorial-discover-import.md)+- Build a quick business case for servers imported via a .csv file. [Learn more](tutorial-discover-import.md). - Build a business case using Azure Migrate for: - Servers and workloads running in your Microsoft Hyper-V and Physical/Bare-metal environments as well as IaaS services of other public clouds. - SQL Server Always On Failover Cluster Instances and Always On Availability Groups. [Learn more](how-to-discover-applications.md). |
nat-gateway | Tutorial Dual Stack Outbound Nat Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md | In this tutorial, learn how to configure NAT gateway and a public load balancer NAT gateway supports the use of IPv4 public IP addresses for outbound connectivity whereas load balancer supports both IPv4 and IPv6 public IP addresses. When NAT gateway with an IPv4 public IP is present with a load balancer using an IPv4 public IP address, NAT gateway takes precedence over load balancer for providing outbound connectivity. When a NAT gateway is deployed in a dual-stack network with an IPv6 load balancer, IPv4 outbound traffic uses the NAT gateway, and IPv6 outbound traffic uses the load balancer. ++ In this tutorial, you learn how to: > [!div class="checklist"] |
nat-gateway | Tutorial Hub Spoke Route Nat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-route-nat.md | In this tutorial, you learn how to: ## Create a NAT gateway -All outbound internet traffic will traverse the NAT gateway to the internet. Use the following example to create a NAT gateway for the hub and spoke network. +All outbound internet traffic traverses the NAT gateway to the internet. Use the following example to create a NAT gateway for the hub and spoke network. 1. Sign in to the [Azure portal](https://portal.azure.com). -2. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. +1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. -3. Select **+ Create**. +1. Select **+ Create**. -4. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information: +1. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **Create new**. </br> Enter **TutorialNATHubSpoke-rg** in **Name**. </br> Select **OK**. | + | Resource group | Select **Create new**. </br> Enter **test-rg** in **Name**. </br> Select **OK**. | | **Instance details** | |- | NAT gateway name | Enter **myNATgateway**. | - | Region | Select **South Central US**. | + | NAT gateway name | Enter **nat-gateway**. | + | Region | Select **East US 2**. | | Availability zone | Select a **Zone** or **No zone**. | | TCP idle timeout (minutes) | Leave the default of **4**. | -5. Select **Next: Outbound IP**. +1. Select **Next: Outbound IP**. -6. In **Outbound IP** in **Public IP addresses**, select **Create a new public IP address**. +1. In **Outbound IP** in **Public IP addresses**, select **Create a new public IP address**. -7. Enter **myPublicIP-NAT** in **Name**. +1. Enter **public-ip-nat** in **Name**. -8. Select **OK**. +1. Select **OK**. -9. Select **Review + create**. +1. Select **Review + create**. -10. Select **Create**. +1. Select **Create**. ## Create hub virtual network The hub virtual network is the central network of the solution. The hub network 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In the **Basics** tab of **Create virtual network**, enter or select the following information: +1. In the **Basics** tab of **Create virtual network**, enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter **myVNet-Hub**. | - | Region | Select **South Central US**. | + | Name | Enter **vnet-hub**. | + | Region | Select **East US 2**. | -4. Select **Next: IP Addresses**. +1. Select **Next: IP Addresses**. -5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. +1. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. -6. In **IPv4 address space** enter **10.1.0.0/16**. +1. 
In **IPv4 address space** enter **10.0.0.0/16**. -7. Select **+ Add subnet**. +1. Select **+ Add subnet**. -8. In **Add subnet** enter or select the following information: +1. In **Add subnet** enter or select the following information: | Setting | Value | | - | -- | | Subnet name | Enter **subnet-private**. |- | Subnet address range | Enter **10.1.0.0/24**. | + | Subnet address range | Enter **10.0.0.0/24**. | -9. Select **Add**. +1. Select **Add**. -10. Select **+ Add subnet**. +1. Select **+ Add subnet**. -11. In **Add subnet** enter or select the following information: +1. In **Add subnet** enter or select the following information: | Setting | Value | | - | -- | | Subnet name | Enter **subnet-public**. |- | Subnet address range | Enter **10.1.253.0/28**. | + | Subnet address range | Enter **10.0.253.0/28**. | | **NAT GATEWAY** | |- | NAT gateway | Select **myNATgateway**. | + | NAT gateway | Select **nat-gateway**. | -12. Select **Add**. +1. Select **Add**. -13. Select **Next: Security**. +1. Select **Next: Security**. -14. In the **Security** tab in **BastionHost**, select **Enable**. +1. In the **Security** tab in **BastionHost**, select **Enable**. -15. Enter or select the following information: +1. Enter or select the following information: | Setting | Value | | - | -- |- | Bastion name | Enter **myBastion**. | - | AzureBastionSubnet address space | Enter **10.1.1.0/26**. | - | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Bastion**. </br> Select **OK**. | + | Bastion name | Enter **bastion**. | + | AzureBastionSubnet address space | Enter **10.0.1.0/26**. | + | Public IP address | Select **Create new**. </br> In **Name** enter **public-ip**. </br> Select **OK**. | -16. Select **Review + create**. +1. Select **Review + create**. -17. Select **Create**. +1. Select **Create**. -It will take a few minutes for the bastion host to deploy. When the virtual network is created as part of the deployment, you can proceed to the next steps. +It takes a few minutes for the bastion host to deploy. When the virtual network is created as part of the deployment, you can proceed to the next steps. ## Create simulated NVA virtual machine -The simulated NVA will act as a virtual appliance to route all traffic between the spokes and hub and traffic outbound to the internet. An Ubuntu virtual machine is used for the simulated NVA. Use the following example to create the simulated NVA and configure the network interfaces. +The simulated NVA acts as a virtual appliance to route all traffic between the spokes and hub and traffic outbound to the internet. An Ubuntu virtual machine is used for the simulated NVA. Use the following example to create the simulated NVA and configure the network interfaces. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **+ Create** then **Azure virtual machine**. +1. Select **+ Create** then **Azure virtual machine**. -3. In **Create a virtual machine** enter or select the following information in the **Basics** tab: +1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Virtual machine name | Enter **myVM-NVA**. | - | Region | Select **(US) South Central US**. 
| + | Virtual machine name | Enter **vm-nva**. | + | Region | Select **(US) East US 2**. | | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |- | Image | Select **Ubuntu Server 20.04 LTS - x64 Gen2**. | + | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. | | VM architecture | Leave the default of **x64**. | | Size | Select a size. | | **Administrator account** | | The simulated NVA will act as a virtual appliance to route all traffic between t | **Inbound port rules** | | | Public inbound ports | Select **None**. | -4. Select **Next: Disks** then **Next: Networking**. +1. Select **Next: Disks** then **Next: Networking**. -5. In the Networking tab, enter or select the following information: +1. In the Networking tab, enter or select the following information: | Setting | Value | | - | -- | | **Network interface** | |- | Virtual network | Select **myVNet-Hub**. | - | Subnet | Select **subnet-public**. | + | Virtual network | Select **vnet-hub**. | + | Subnet | Select **subnet-public (10.0.253.0/28)**. | | Public IP | Select **None**. |+ | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. </br> In **Name** enter **nsg-nva**. </br> Select **OK**. | -6. Leave the rest of the options at the defaults and select **Review + create**. +1. Leave the rest of the options at the defaults and select **Review + create**. -7. Select **Create**. +1. Select **Create**. ### Configure virtual machine network interfaces The IP configuration of the primary network interface of the virtual machine is 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM-NVA**. +1. Select **vm-nva**. -3. In the **Overview** select **Stop** if the virtual machine is running. +1. In the **Overview** select **Stop** if the virtual machine is running. -4. Select **Networking** in **Settings**. +1. Select **Networking** in **Settings**. -5. In **Networking** select the network interface name next to **Network Interface:**. The interface name is the virtual machine name and random numbers and letters. In this example, the interface name is **myvm-nva271**. +1. In **Networking** select the network interface name next to **Network Interface:**. The interface name is the virtual machine name and random numbers and letters. In this example, the interface name is **vm-nva271**. -6. In the network interface properties, select **IP configurations** in **Settings**. +1. In the network interface properties, select **IP configurations** in **Settings**. -7. In **IP forwarding** select **Enabled**. +1. In **IP forwarding** select **Enabled**. -8. Select **Save**. +1. Select **Apply**. -9. When the save action completes, select **ipconfig1**. +1. When the apply action completes, select **ipconfig1**. -10. In **Assignment** in **ipconfig1** select **Static**. +1. In **Assignment** in **ipconfig1** select **Static**. -11. In **IP address** enter **10.1.253.10**. +1. In **IP address** enter **10.0.253.10**. -12. Select **Save**. +1. Select **Save**. -13. When the save action completes, return to the networking configuration for **myVM-NVA**. +1. When the save action completes, return to the networking configuration for **vm-nva**. -14. In **Networking** of **myVM-NVA** select **Attach network interface**. +1. In **Networking** of **vm-nva** select **Attach network interface**. -15. Select **Create and attach network interface**. 
+1. Select **Create network interface**. -16. In **Create network interface** enter or select the following information: +1. In **Create network interface** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Network interface** | |- | Name | Enter **myVM-NVA-private-nic**. | - | Subnet | Select **subnet-private (10.1.0.0/24)**. | + | Name | Enter **nic-private**. | + | Subnet | Select **subnet-private (10.0.0.0/24)**. | | NIC network security group | Select **Advanced**. |- | Configure network security group | Select **myVM-VNA-nsg**. | + | Configure network security group | Select **nsg-nva**. | | Private IP address assignment | Select **Static**. |- | Private IP address | Enter **10.1.0.10**. | + | Private IP address | Enter **10.0.0.10**. | -17. Select **Create**. +1. Select **Create**. ### Configure virtual machine software The routing for the simulated NVA uses IP tables and internal NAT in the Ubuntu 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM-NVA**. +1. Select **vm-nva**. -3. Start **myVM-NVA**. +1. Start **vm-nva**. -4. When the virtual machine is completed booting, continue with the next steps. +1. When the virtual machine is completed booting, continue with the next steps. -5. Select **Connect** then **Bastion**. +1. Select **Connect** then **Bastion**. Select **Use Bastion**. -6. Enter the username and password you entered when the virtual machine was created. +1. Enter the username and password you entered when the virtual machine was created. -7. Select **Connect**. +1. Select **Connect**. -8. Enter the following information at the prompt of the virtual machine to enable IP forwarding: +1. Enter the following information at the prompt of the virtual machine to enable IP forwarding: ```bash sudo vim /etc/sysctl.conf ``` -9. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**: +1. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**: Press the **Insert** key. The routing for the simulated NVA uses IP tables and internal NAT in the Ubuntu Enter **`:wq`** and press **Enter**. -10. Enter the following information to enable internal NAT in the virtual machine: +1. Enter the following information to enable internal NAT in the virtual machine: ```bash sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE The routing for the simulated NVA uses IP tables and internal NAT in the Ubuntu exit ``` -11. Use Vim to edit the configuration with the following information: +1. Use Vim to edit the configuration with the following information: ```bash sudo vim /etc/rc.local The routing for the simulated NVA uses IP tables and internal NAT in the Ubuntu Enter **`:wq`** and press **Enter**. -12. Reboot the virtual machine: +1. Reboot the virtual machine: ```bash sudo reboot Route tables are used to overwrite Azure's default routing. Create a route table 1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In **Create Route table** enter or select the following information: +1. In **Create Route table** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. 
|- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Region | Select **South Central US**. | - | Name | Enter **myRouteTable-NAT-Hub**. | + | Region | Select **East US 2**. | + | Name | Enter **route-table-nat-hub**. | | Propagate gateway routes | Leave the default of **Yes**. | -4. Select **Review + create**. +1. Select **Review + create**. -5. Select **Create**. +1. Select **Create**. -6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. +1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -7. Select **myRouteTable-NAT-Hub**. +1. Select **route-table-nat-hub**. -8. In **Settings** select **Routes**. +1. In **Settings** select **Routes**. -9. Select **+ Add** in **Routes**. +1. Select **+ Add** in **Routes**. -10. Enter or select the following information in **Add route**: +1. Enter or select the following information in **Add route**: | Setting | Value | | - | -- |- | Route name | Enter **default-via-NAT-Hub**. | - | Address prefix destination | Select **IP Addresses**. | + | Route name | Enter **default-via-nat-hub**. | + | Destination type | Select **IP Addresses**. | | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. | | Next hop type | Select **Virtual appliance**. |- | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | + | Next hop address | Enter **10.0.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | -11. Select **Add**. +1. Select **Add**. -12. Select **Subnets** in **Settings**. +1. Select **Subnets** in **Settings**. -13. Select **+ Associate**. +1. Select **+ Associate**. -14. Enter or select the following information in **Associate subnet**: +1. Enter or select the following information in **Associate subnet**: | Setting | Value | | - | -- |- | Virtual network | Select **myVNet-Hub (TutorialNATHubSpoke-rg)**. | + | Virtual network | Select **vnet-hub (test-rg)**. | | Subnet | Select **subnet-private**. | -15. Select **OK**. +1. Select **OK**. ## Create spoke one virtual network Create another virtual network in a different region for the first spoke of the 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In the **Basics** tab of **Create virtual network**, enter or select the following information: +1. In the **Basics** tab of **Create virtual network**, enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter **myVNet-Spoke-1**. | - | Region | Select **East US 2**. | + | Name | Enter **vnet-spoke-1**. | + | Region | Select **South Central US**. | -4. Select **Next: IP Addresses**. +1. Select **Next: IP Addresses**. -5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. +1. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. -6. In **IPv4 address space** enter **10.2.0.0/16**. +1. 
In **IPv4 address space** enter **10.1.0.0/16**. -7. Select **+ Add subnet**. +1. Select **+ Add subnet**. -8. In **Add subnet** enter or select the following information: +1. In **Add subnet** enter or select the following information: | Setting | Value | | - | -- | | Subnet name | Enter **subnet-private**. |- | Subnet address range | Enter **10.2.0.0/24**. | + | Subnet address range | Enter **10.1.0.0/24**. | -9. Select **Add**. +1. Select **Add**. -10. Select **Review + create**. +1. Select **Review + create**. -11. Select **Create**. +1. Select **Create**. ## Create peering between hub and spoke one A virtual network peering is used to connect the hub to spoke one and spoke one 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **myVNet-Hub**. +1. Select **vnet-hub**. -3. Select **Peerings** in **Settings**. +1. Select **Peerings** in **Settings**. -4. Select **+ Add**. +1. Select **+ Add**. -5. Enter or select the following information in **Add peering**: +1. Enter or select the following information in **Add peering**: | Setting | Value | | - | -- | | **This virtual network** | |- | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke-1**. | + | Peering link name | Enter **vnet-hub-to-vnet-spoke-1**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. | | **Remote virtual network** | |- | Peering link name | Enter **myVNet-Spoke-1-To-myVNet-Hub**. | + | Peering link name | Enter **vnet-spoke-1-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. |- | Virtual network | Select **myVNet-Spoke-1**. | + | Virtual network | Select **vnet-spoke-1**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. | -6. Select **Add**. +1. Select **Add**. -7. Select **Refresh** and verify **Peering status** is **Connected**. +1. Select **Refresh** and verify **Peering status** is **Connected**. ## Create spoke one network route table -Create a route table to force all inter-interspoke and internet egress traffic through the simulated NVA in the hub virtual network. +Create a route table to force all inter-spoke and internet egress traffic through the simulated NVA in the hub virtual network. 1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In **Create Route table** enter or select the following information: +1. In **Create Route table** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Region | Select **East US 2**. | - | Name | Enter **myRouteTable-NAT-Spoke-1**. | + | Region | Select **South Central US**. | + | Name | Enter **route-table-nat-spoke-1**. | | Propagate gateway routes | Leave the default of **Yes**. | -4. Select **Review + create**. +1. Select **Review + create**. -5. 
Select **Create**. +1. Select **Create**. -6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. +1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -7. Select **myRouteTable-NAT-Spoke-1**. +1. Select **route-table-nat-spoke-1**. -8. In **Settings** select **Routes**. +1. In **Settings** select **Routes**. -9. Select **+ Add** in **Routes**. +1. Select **+ Add** in **Routes**. -10. Enter or select the following information in **Add route**: +1. Enter or select the following information in **Add route**: | Setting | Value | | - | -- |- | Route name | Enter **default-via-NAT-Spoke-1**. | - | Address prefix destination | Select **IP Addresses**. | + | Route name | Enter **default-via-nat-spoke-1**. | + | Destination type | Select **IP Addresses**. | | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. | | Next hop type | Select **Virtual appliance**. |- | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | + | Next hop address | Enter **10.0.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | -11. Select **Add**. +1. Select **Add**. -12. Select **Subnets** in **Settings**. +1. Select **Subnets** in **Settings**. -13. Select **+ Associate**. +1. Select **+ Associate**. -14. Enter or select the following information in **Associate subnet**: +1. Enter or select the following information in **Associate subnet**: | Setting | Value | | - | -- |- | Virtual network | Select **myVNet-Spoke-1 (TutorialNATHubSpoke-rg)**. | + | Virtual network | Select **vnet-spoke-1 (test-rg)**. | | Subnet | Select **subnet-private**. | -15. Select **OK**. +1. Select **OK**. ## Create spoke one test virtual machine A Windows Server 2022 virtual machine is used to test the outbound internet traf 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **+ Create** then **Azure virtual machine**. +1. Select **+ Create** then **Azure virtual machine**. -3. In **Create a virtual machine** enter or select the following information in the **Basics** tab: +1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Virtual machine name | Enter **myVM-Spoke-1**. | - | Region | Select **(US) East US 2**. | + | Virtual machine name | Enter **vm-spoke-1**. | + | Region | Select **(US) South Central US**. | | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. | | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | A Windows Server 2022 virtual machine is used to test the outbound internet traf | **Inbound port rules** | | | Public inbound ports | Select **None**. | -4. Select **Next: Disks** then **Next: Networking**. +1. Select **Next: Disks** then **Next: Networking**. -5. In the Networking tab, enter or select the following information: +1. In the Networking tab, enter or select the following information: | Setting | Value | | - | -- | | **Network interface** | |- | Virtual network | Select **myVNet-Spoke-1**. 
| - | Subnet | Select **subnet-private (10.2.0.0/24)**. | + | Virtual network | Select **vnet-spoke-1**. | + | Subnet | Select **subnet-private (10.1.0.0/24)**. | | Public IP | Select **None**. |- | NIC network security group | Select **Basic**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **HTTP (80)**. </br> Select **RDP (3389)**. | + | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. </br> Enter **nsg-spoke-1**. | + | Inbound rules | Select **+ Add an inbound rule**. </br> Select **HTTP** in **Service**. </br> Select **Add**. </br> Select **OK**. | ++1. Select **OK**. ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ++## Install IIS on spoke one test virtual machine ++IIS is installed on the Windows Server 2022 virtual machine to test outbound internet traffic through the NAT gateway and inter-spoke traffic in the hub and spoke network. ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-spoke-1**. ++1. In **Operations**, select **Run command**. ++1. Select **RunPowerShellScript**. ++1. Enter the following script in **Run Command Script**: ++ ```powershell + # Install IIS server role + Install-WindowsFeature -name Web-Server -IncludeManagementTools + + # Remove default htm file + Remove-Item C:\inetpub\wwwroot\iisstart.htm + + # Add a new htm file that displays server name + Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername) + ``` -6. Leave the rest of the options at the defaults and select **Review + create**. +1. Select **Run**. -7. Select **Create**. +1. Wait for the script to complete before continuing to the next step. It can take a few minutes for the script to complete. +1. When the script completes, the **Output** displays the following: ++ ```output + Success Restart Needed Exit Code Feature Result + - -- -- + True No Success {Common HTTP Features, Default Document, D... + ``` + ## Create the second spoke virtual network Create the second virtual network for the second spoke of the hub and spoke network. 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In the **Basics** tab of **Create virtual network**, enter or select the following information: +1. In the **Basics** tab of **Create virtual network**, enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter **myVNet-Spoke-2**. | + | Name | Enter **vnet-spoke-2**. | | Region | Select **West US 2**. | -4. Select **Next: IP Addresses**. +1. Select **Next: IP Addresses**. -5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. +1. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. -6. In **IPv4 address space** enter **10.3.0.0/16**. +1. In **IPv4 address space** enter **10.2.0.0/16**. -7. Select **+ Add subnet**. +1. Select **+ Add subnet**. -8. In **Add subnet** enter or select the following information: +1. 
In **Add subnet** enter or select the following information: | Setting | Value | | - | -- | | Subnet name | Enter **subnet-private**. |- | Subnet address range | Enter **10.3.0.0/24**. | + | Subnet address range | Enter **10.2.0.0/24**. | -9. Select **Add**. +1. Select **Add**. -10. Select **Review + create**. +1. Select **Review + create**. -11. Select **Create**. +1. Select **Create**. ## Create peering between hub and spoke two Create a two-way virtual network peer between the hub and spoke two. 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **myVNet-Hub**. +1. Select **vnet-hub**. -3. Select **Peerings** in **Settings**. +1. Select **Peerings** in **Settings**. -4. Select **+ Add**. +1. Select **+ Add**. -5. Enter or select the following information in **Add peering**: +1. Enter or select the following information in **Add peering**: | Setting | Value | | - | -- | | **This virtual network** | |- | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke-2**. | + | Peering link name | Enter **vnet-hub-to-vnet-spoke-2**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. | | **Remote virtual network** | |- | Peering link name | Enter **myVNet-Spoke-2-To-myVNet-Hub**. | + | Peering link name | Enter **vnet-spoke-2-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. |- | Virtual network | Select **myVNet-Spoke-2**. | + | Virtual network | Select **vnet-spoke-2**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. | -6. Select **Add**. +1. Select **Add**. -7. Select **Refresh** and verify **Peering status** is **Connected**. +1. Select **Refresh** and verify **Peering status** is **Connected**. ## Create spoke two network route table Create a route table to force all outbound internet and inter-spoke traffic thro 1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -2. Select **+ Create**. +1. Select **+ Create**. -3. In **Create Route table** enter or select the following information: +1. In **Create Route table** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | | | Region | Select **West US 2**. |- | Name | Enter **myRouteTable-NAT-Spoke-2**. | + | Name | Enter **route-table-nat-spoke-2**. | | Propagate gateway routes | Leave the default of **Yes**. | -4. Select **Review + create**. +1. Select **Review + create**. -5. Select **Create**. +1. Select **Create**. -6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. +1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -7. Select **myRouteTable-NAT-Spoke-2**. +1. Select **route-table-nat-spoke-2**. -8. In **Settings** select **Routes**. +1. 
In **Settings** select **Routes**. -9. Select **+ Add** in **Routes**. +1. Select **+ Add** in **Routes**. -10. Enter or select the following information in **Add route**: +1. Enter or select the following information in **Add route**: | Setting | Value | | - | -- |- | Route name | Enter **default-via-NAT-Spoke-2**. | - | Address prefix destination | Select **IP Addresses**. | + | Route name | Enter **default-via-nat-spoke-2**. | + | Destination type | Select **IP Addresses**. | | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. | | Next hop type | Select **Virtual appliance**. |- | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | + | Next hop address | Enter **10.0.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._**. | -11. Select **Add**. +1. Select **Add**. -12. Select **Subnets** in **Settings**. +1. Select **Subnets** in **Settings**. -13. Select **+ Associate**. +1. Select **+ Associate**. -14. Enter or select the following information in **Associate subnet**: +1. Enter or select the following information in **Associate subnet**: | Setting | Value | | - | -- |- | Virtual network | Select **myVNet-Spoke-2 (TutorialNATHubSpoke-rg)**. | + | Virtual network | Select **vnet-spoke-2 (test-rg)**. | | Subnet | Select **subnet-private**. | -15. Select **OK**. +1. Select **OK**. ## Create spoke two test virtual machine Create a Windows Server 2022 virtual machine for the test virtual machine in spo 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **+ Create** then **Azure virtual machine**. +1. Select **+ Create** then **Azure virtual machine**. -3. In **Create a virtual machine** enter or select the following information in the **Basics** tab: +1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: | Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpoke-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Virtual machine name | Enter **myVM-Spoke-2**. | + | Virtual machine name | Enter **vm-spoke-2**. | | Region | Select **(US) West US 2**. | | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. | Create a Windows Server 2022 virtual machine for the test virtual machine in spo | **Inbound port rules** | | | Public inbound ports | Select **None**. | -4. Select **Next: Disks** then **Next: Networking**. +1. Select **Next: Disks** then **Next: Networking**. -5. In the Networking tab, enter or select the following information: +1. In the Networking tab, enter or select the following information: | Setting | Value | | - | -- | | **Network interface** | |- | Virtual network | Select **myVNet-Spoke-2**. | - | Subnet | Select **subnet-private (10.3.0.0/24)**. | + | Virtual network | Select **vnet-spoke-2**. | + | Subnet | Select **subnet-private (10.2.0.0/24)**. | | Public IP | Select **None**. |- | NIC network security group | Select **Basic**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **HTTP (80)**. </br> Select **RDP (3389)**. | + | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. 
</br> Enter **nsg-spoke-2**. | + | Inbound rules | Select **+ Add an inbound rule**. </br> Select **HTTP** in **Service**. </br> Select **Add**. </br> Select **OK**. | ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ++## Install IIS on spoke two test virtual machine ++IIS is installed on the Windows Server 2022 virtual machine to test outbound internet traffic through the NAT gateway and inter-spoke traffic in the hub and spoke network. ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-spoke-2**. ++1. In **Operations**, select **Run command**. ++1. Select **RunPowerShellScript**. ++1. Enter the following script in **Run Command Script**: ++ ```powershell + # Install IIS server role + Install-WindowsFeature -name Web-Server -IncludeManagementTools + + # Remove default htm file + Remove-Item C:\inetpub\wwwroot\iisstart.htm + + # Add a new htm file that displays server name + Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername) + ``` ++1. Select **Run**. -6. Leave the rest of the options at the defaults and select **Review + create**. +1. Wait for the script to complete before continuing to the next step. It can take a few minutes for the script to complete. -7. Select **Create**. +1. When the script completes, the **Output** displays the following: ++ ```output + Success Restart Needed Exit Code Feature Result + - -- -- + True No Success {Common HTTP Features, Default Document, D... + ``` ## Test NAT gateway -You'll connect to the Windows Server 2022 virtual machines you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway. +Connect to the Windows Server 2022 virtual machines you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway. ### Obtain NAT gateway public IP address Obtain the NAT gateway public IP address for verification of the steps later in 1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results. -2. Select **myPublic-NAT**. +1. Select **public-ip-nat**. -3. Make note of value in **IP address**. The example used in this article is **52.153.224.79**. +1. Make note of the value in **IP address**. The example used in this article is **52.153.224.79**. ### Test NAT gateway from spoke one Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to http 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM-Spoke-1**. +1. Select **vm-spoke-1**. -3. Select **Connect** then **Bastion**. +1. Select **Connect** then **Bastion**. Select **Use Bastion**. -4. Enter the username and password you entered when the virtual machine was created. +1. Enter the username and password you entered when the virtual machine was created. -5. Select **Connect**. +1. Select **Connect**. -6. Open **Microsoft Edge** when the desktop finishes loading. +1. Open **Microsoft Edge** when the desktop finishes loading. -7. In the address bar, enter **https://whatsmyip.com**. +1. In the address bar, enter **https://whatsmyip.com**. -8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously. +1. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously. 
:::image type="content" source="./media/tutorial-hub-spoke-route-nat/outbound-ip-address.png" alt-text="Screenshot of outbound IP address."::: -9. Open **Windows PowerShell**. --10. Use the following example to install IIS. IIS will be used later to test inter-spoke routing. -- ```powershell - Install-WindowsFeature Web-Server - ``` --11. Leave the bastion connection open to **myVM-Spoke-1**. +1. Leave the bastion connection open to **vm-spoke-1**. ### Test NAT gateway from spoke two Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to http 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM-Spoke-2**. +1. Select **vm-spoke-2**. -3. Select **Connect** then **Bastion**. +1. Select **Connect** then **Bastion**. Select **Use Bastion**. -4. Enter the username and password you entered when the virtual machine was created. +1. Enter the username and password you entered when the virtual machine was created. -5. Select **Connect**. +1. Select **Connect**. -6. Open **Microsoft Edge** when the desktop finishes loading. +1. Open **Microsoft Edge** when the desktop finishes loading. -7. In the address bar, enter **https://whatsmyip.com**. +1. In the address bar, enter **https://whatsmyip.com**. -8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously. +1. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously. :::image type="content" source="./media/tutorial-hub-spoke-route-nat/outbound-ip-address.png" alt-text="Screenshot of outbound IP address."::: -9. Open **Windows PowerShell**. --10. Use the following example to install IIS. IIS will be used later to test inter-spoke routing. -- ```powershell - Install-WindowsFeature Web-Server - ``` --11. Leave the bastion connection open to **myVM-Spoke-2**. +1. Leave the bastion connection open to **vm-spoke-2**. ## Test routing between the spokes -Traffic from spoke one to spoke two and spoke two to spoke one will route through the simulated NVA in the hub virtual network. Use the following examples to verify the routing between spokes of the hub and spoke network. +Traffic from spoke one to spoke two and spoke two to spoke one routes through the simulated NVA in the hub virtual network. Use the following examples to verify the routing between spokes of the hub and spoke network. ### Test routing from spoke one to spoke two -Use Microsoft Edge to connect to the web server on **myVM-Spoke-2** you installed in the previous steps. +Use Microsoft Edge to connect to the web server on **vm-spoke-2** you installed in the previous steps. -1. Return to the open bastion connection to **myVM-Spoke-1**. +1. Return to the open bastion connection to **vm-spoke-1**. -2. Open **Microsoft Edge** if it's not open. +1. Open **Microsoft Edge** if it's not open. -3. In the address bar, enter **10.3.0.4**. +1. In the address bar, enter **10.2.0.4**. -4. Verify the default IIS page is displayed from **myVM-Spoke-2**. +1. Verify the IIS page is displayed from **vm-spoke-2**. - :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-1.png" alt-text="Screenshot of default IIS page on myVM-Spoke-1."::: + :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-1.png" alt-text="Screenshot of default IIS page on vm-spoke-1."::: -5. Close the bastion connection to **myVM-Spoke-1**. +1. 
Close the bastion connection to **vm-spoke-1**. ### Test routing from spoke two to spoke one -Use Microsoft Edge to connect to the web server on **myVM-Spoke-1** you installed in the previous steps. --1. Return to the open bastion connection to **myVM-Spoke-2**. --2. Open **Microsoft Edge** if it's not open. --3. In the address bar, enter **10.2.0.4**. --4. Verify the default IIS page is displayed from **myVM-Spoke-1**. -- :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-2.png" alt-text="Screenshot of default IIS page on myVM-Spoke-2."::: --5. Close the bastion connection to **myVM-Spoke-1**. -## Clean up resources +Use Microsoft Edge to connect to the web server on **vm-spoke-1** you installed in the previous steps. -If you're not going to continue to use this application, delete the created resources with the following steps: +1. Return to the open bastion connection to **vm-spoke-2**. -1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results. +1. Open **Microsoft Edge** if it's not open. -2. Select **myResourceGroup**. +1. In the address bar, enter **10.1.0.4**. -3. In the **Overview** of **myResourceGroup**, select **Delete resource group**. +1. Verify the IIS page is displayed from **vm-spoke-1**. -4. In **TYPE THE RESOURCE GROUP NAME:**, enter **TutorialNATHubSpoke-rg**. + :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-2.png" alt-text="Screenshot of default IIS page on vm-spoke-2."::: -5. Select **Delete**. +1. Close the bastion connection to **vm-spoke-2**. ## Next steps |
network-watcher | Network Watcher Connectivity Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md | In this section, you test connectivity between a virtual machine and `www.bing. | Virtual machine | Select **VM1**. | | **Destination** | | | Destination type | Select **Specify manually**. |- | Resource group | Enter *www\.bing.com*. | + | URI, FQDN or IP address | Enter *www\.bing.com*. | | **Probe Settings** | | | Preferred IP version | Select **IPv4**. | | Protocol | Select **TCP**. | |
network-watcher | Network Watcher Nsg Flow Logging Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md | For steps to disable and enable NSG flow logs, see [Configure NSG flow logs](./n When you delete an NSG flow log, you not only stop the flow logging for the associated network security group but also delete the flow log resource (with all its settings and associations). To begin flow logging again, you must create a new flow log resource for that network security group. -You can delete a flow log using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), the [Azure CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete), or the [REST API](/rest/api/network-watcher/flowlogs/delete). At this time, you can't delete flow logs from the Azure portal. +You can delete a flow log using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), the [Azure CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete), or the [REST API](/rest/api/network-watcher/flowlogs/delete). When you delete a network security group, the associated flow log resource is deleted by default. |
postgresql | Quickstart Create Server Database Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md | Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database >[!div class="mx-imgBorder"] > :::image type="content" source="./media/quickstart-create-database-portal/search-postgres.png" alt-text="Find Azure Database for PostgreSQL."::: -1. Select **Add**. mark is showing me how to make a change +1. Select **+ Create**. 2. On the Create an Azure Database for PostgreSQL page, select **Single server**. |
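If you prefer to script the server creation instead of clicking through the portal, a sketch using the Az.PostgreSql module; every name, the location, and the SKU below are placeholders:

```powershell
# Sketch: create an Azure Database for PostgreSQL single server.
$password = Read-Host -Prompt 'Admin password' -AsSecureString
New-AzPostgreSqlServer -Name 'mydemoserver' `
    -ResourceGroupName 'myresourcegroup' `
    -Location 'westus' `
    -AdministratorUsername 'myadmin' `
    -AdministratorLoginPassword $password `
    -Sku 'GP_Gen5_2'   # general purpose, Gen 5, 2 vCores
```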
search | Vector Search How To Create Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md | In Azure Cognitive Search, vector data is represented in fields in a [search ind Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created. -+ Pre-existing embeddings in your source documents. Cognitive Search doesn't generate embeddings. We recommend Azure OpenAI but you can use any model for vectorization. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md). ++ Pre-existing vector embeddings in your source documents. Cognitive Search doesn't generate vectors. We recommend Azure OpenAI but you can use any model for vectorization. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md). - Be sure to use the same embedding model for both indexing and queries. At query time, you must include a step that converts the user's query into a vector. + Be sure to use the same embedding model for both indexing and queries. At query time, you must include a step that converts the user's query string into a vector. ## Prepare documents for indexing -Prior to indexing, assemble a document payload that includes vector data. The document structure must conform to the index schema. Make sure your documents include the following elements: +Prior to indexing, assemble a document payload that includes vector data. The document structure must conform to the index schema. Make sure your documents: -1. Provide a unique value or a metadata property that uniquely identifies each source document. All search indexes require a document key as a unique identifier, which means all documents must have one field that can be mapped to type `Edm.String` and `key=true` in the search index. +1. Provide a field or a metadata property that uniquely identifies each document. All search indexes require a document key. Your documents must have one field or property that can be mapped to type `Edm.String` and `key=true` in the search index. 1. Provide vector data (an array of single-precision floating point numbers) in source fields. - Vector fields contain vector data generated by embedding models. We recommend the embedding models in [Azure OpenAI](https://aka.ms/oai/access), such as **text-embedding-ada-002** for text documents or the [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for images. + Vector fields contain vector data generated by embedding models, one embedding per field. We recommend the embedding models in [Azure OpenAI](https://aka.ms/oai/access), such as **text-embedding-ada-002** for text documents or the [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for images. -1. Provide any other fields with alphanumeric content for any nonvector queries you want to support, as well as for hybrid query scenarios that include full text search or semantic ranking in the same request. +1. Provide other fields with alphanumeric content for the query response and for hybrid query scenarios that include full text search or semantic ranking in the same request. 
-Your search index should include fields and content for all of the query scenarios you want to support. Suppose you want to search or filter over product names, versions, metadata, or addresses. In this case, similarity search isn't especially helpful and keyword search, geo-search, or filters would be a better choice. A search index that includes a comprehensive field collection of vector and non-vector data provides maximum flexibility for query construction. +Your search index should include fields and content for all of the query scenarios you want to support. Suppose you want to search or filter over product names, versions, metadata, or addresses. In this case, similarity search isn't especially helpful. Keyword search, geo-search, or filters would be a better choice. A search index that includes a comprehensive field collection of vector and non-vector data provides maximum flexibility for query construction and response composition. ## Add a vector field to the fields collection The schema must include fields for the document key, vector fields, and any other fields that you require for hybrid search scenarios. In the following example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data. > [!NOTE]-> + You don't need a special "vector index" to use vector search. You'll only need to add one or more "vector fields" to a new or existing index. -> + Both new and existing indexes support vector search. However, there is a small subset of older services that don't support vector search. In this case, a new search service must be created to use it. +> + Vectors are added to fields in a search index. Internally, a *vector index* is created for each vector field, but indexing and queries target fields in a search index, and not the vector indexes directly. +> + Both new and existing search indexes support vector search. However, there is a small subset of older services that don't support vector search. In this case, a new search service must be created to use it. > + Updating an existing index to add vector fields requires `allowIndexDowntime` query parameter to be `true`. -1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/preview-api/create-or-update-index) to add vector fields. +1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/preview-api/create-or-update-index) to create the index. -1. Create a `vectorSearch` section in the index that specifies the algorithm used to create the embedding space. Currently, only `"hnsw"` is supported. +1. Add a `vectorSearch` section in the index that specifies the algorithm used to create the embedding space. Currently, only `"hnsw"` is supported. For "metric", valid values are `cosine`, `euclidean`, and `dotProduct`. The `cosine` metric is specified because it's the similarity metric that the Azure OpenAI models use to create embeddings. ```json "vectorSearch": { The schema must include fields for the document key, vector fields, and any othe } ``` -1. Add vector fields to the fields collection. You can store one generated embedding per document field. For each field: +1. Add vector fields to the fields collection. You can store one generated embedding per document field. For each vector field: - + Assign the `Collection(Edm.Single)` data type + + Assign the `Collection(Edm.Single)` data type. 
+ + For `Collection(Edm.Single)`, the "filterable", "facetable", "sortable" attributes are "false" by default. Don't set them to "true" because those behaviors don't apply within the context of vector fields and the request will fail. + Provide the name of the vector search algorithm configuration. + Provide the number of dimensions generated by the embedding model. + "searchable" must be "true".- + "retrievable" set to "true" allows you to display the raw vectors (for example, as a verification step), but doing so will increase storage usage. Set to "false" if you don't need to return raw vectors. - + For `Collection(Edm.Single)`, the "filterable", "facetable", "sortable" attributes are "false" by default. Don't set them to "true" because those behaviors don't apply within the context of vector fields and the request will fail. + + "retrievable" set to "true" allows you to display the raw vectors (for example, as a verification step), but doing so will increase storage. Set to "false" if you don't need to return raw vectors. You don't need to return vectors for a query, but if you're passing a vector result to a downstream app then set "retrievable" to "true". ++1. Add other fields that define the substance and structure of the textual content you're indexing. At a minimum, you need a document key. ++ You should also add fields that are useful in the query or in its response. The example below shows vector fields for title and content ("titleVector", "contentVector") that are equivalent to vectors. It also provides fields for equivalent textual content ("title", "content") useful for sorting, filtering, and reading in a search result. ++ An index definition with the described elements looks like this: ```http PUT https://my-search-service.search.windows.net/indexes/my-index?api-version=2023-07-01-Preview&allowIndexDowntime=true The schema must include fields for the document key, vector fields, and any othe "name": "title", "type": "Edm.String", "searchable": true,+ "filterable": true, + "sortable": true, "retrievable": true }, { The schema must include fields for the document key, vector fields, and any othe "algorithmConfigurations": [ { "name": "vectorConfig",- "kind": "hnsw" + "kind": "hnsw", + "hnswParameters": { + "m": 4, + "efConstruction": 400, + "efSearch": 500, + "metric": "cosine" + } } ] } |
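A definition like the one above can be sent with any HTTP client. For consistency with the other sketches in this digest, a PowerShell version; the service name, admin key, and the `$indexJson` payload file are assumptions:

```powershell
# Sketch: create or update the index via the preview REST API.
$serviceName = 'my-search-service'                       # placeholder
$apiKey      = '<admin-api-key>'                         # placeholder
$indexJson   = Get-Content -Raw -Path './my-index.json'  # the definition above

$uri = "https://$serviceName.search.windows.net/indexes/my-index" +
       '?api-version=2023-07-01-Preview&allowIndexDowntime=true'
Invoke-RestMethod -Method Put -Uri $uri `
    -Headers @{ 'api-key' = $apiKey } `
    -ContentType 'application/json' `
    -Body $indexJson
```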
search | Vector Search How To Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md | Last updated 07/10/2023 In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to query those fields. It also explains how to combine vector queries with full text search and semantic search for hybrid query combination scenarios. +Query execution in Cognitive Search doesn't include vector conversion. Encoding (text-to-vector) of the query string requires that you pass the text to an embedding model for vectorization. You would then pass the output of the call to the embedding model to the search engine for similarity search over vector fields. + ## Prerequisites + Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created. In Azure Cognitive Search, if you added vector fields to a search index, this ar ## Check your index for vector fields -In the index schema, check for: +If you aren't sure whether your search index already has vector fields, look for: -+ A `vectorSearch` algorithm configuration. ++ A `vectorSearch` algorithm configuration embedded in the index schema. + In the fields collection, look for fields of type `Collection(Edm.Single)`, with a `dimensions` attribute and a `vectorSearchConfiguration` set to the name of the `vectorSearch` algorithm configuration used by the field. -Search documents containing vector data have fields containing many hundreds of floating point values. +You can also send an empty query (`search=*`) against the index. If the vector field is "retrievable", the response includes a vector field consisting of an array of floating point values. ## Convert query input into a vector api-key: {{admin-api-key}} } ``` -The expected response is 202 for a successful call to the deployed model. The body of the response provides the vector representation of the "input". The vector for the query is in the "embedding" field. For testing purposes, you would copy the embedding value into "vector.value" in a query request, using syntax from the next sections. Note that the actual response for this query included 1536 embeddings, trimmed here for brevity. +The expected response is 202 for a successful call to the deployed model. The body of the response provides the vector representation of the "input". The vector for the query is in the "embedding" field. For testing purposes, you would copy the value of the "embedding" array into "vector.value" in a query request, using syntax shown in the next several sections. The actual response for this call to the deployment model includes 1536 embeddings, trimmed here for brevity. ```json { The expected response is 202 for a successful call to the deployed model. The bo In this vector query, which is shortened for brevity, the "value" contains the vectorized text of the query input. The "fields" property specifies which vector fields are searched. The "k" property specifies the number of nearest neighbors to return as top hits. -Recall that the vector query was generated from this string: `"what Azure services support full text search"`. The search targets the "contentVector" field. +The sample vector query for this article is: `"what Azure services support full text search"`. The query targets the "contentVector" field. 
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} api-key: {{admin-api-key}} The response includes 5 matches, and each result provides a search score, title, content, and category. In a similarity search, the response always includes "k" matches, even if the similarity is weak. For indexes that have fewer than "k" documents, only that number of documents is returned. +Notice that "select" returns textual fields from the index. Although the vector field is "retrievable" in this example, its content isn't usable as a search result. + ## Query syntax for hybrid search -A hybrid query combines full text search and vector search. The search engine runs full text and vector queries in parallel. All matches are evaluated for relevance using Reciprocal Rank Fusion (RRF) and a single result set is returned in the response. +A hybrid query combines full text search and vector search, where the `"search"` parameter takes a query string and `"vectors.value"` takes the vector query. The search engine runs full text and vector queries in parallel. All matches are evaluated for relevance using Reciprocal Rank Fusion (RRF) and a single result set is returned in the response. -You can also write queries that target just the vector fields, or just the text fields, within your search index. For example, besides vector queries, you might also want to write queries that filter by location or search over product names or titles, scenarios for which similarity search isn't a good fit. +Hybrid queries are useful because they add support for filters, orderby, and [semantic search](semantic-how-to-query-request.md). For example, in addition to the vector query, you could filter by location or search over product names or titles, scenarios for which similarity search isn't a good fit. The following example is from the [Postman collection of REST APIs](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) that demonstrate query configurations. It shows a complete request that includes vector search, full text search with filters, and semantic search with captions and answers. Semantic search is an optional premium feature. It's not required for vector search or hybrid search. For content that includes rich descriptive text *and* vectors, it's possible to benefit from all of the search modalities in one request. api-key: {{admin-api-key}} ## Query syntax for vector query over multiple fields -You can set "vector.fields" property to multiple vector fields. For example, the Postman collection has vector fields named titleVector and contentVector. Your vector query executes over both the titleVector and contentVector fields, which must have the same embedding space since they share the same query vector. +You can set the "vectors.fields" property to multiple vector fields. For example, the Postman collection has vector fields named "titleVector" and "contentVector". Your vector query executes over both the "titleVector" and "contentVector" fields, which must have the same embedding space since they share the same query vector. ```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} api-key: {{admin-api-key}} ## Query syntax for multiple vector queries -You can issue a search request containing multiple query vectors using the `vectors` query parameter. 
The queries execute concurrently in the search index, each one looking for similarities in the target vector fields. The result set is a union of the documents that matched both vector queries. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search where the same model can vectorize image and non-image content. +You can issue a search request containing multiple query vectors using the "vectors" query parameter. The queries execute concurrently in the search index, each one looking for similarities in the target vector fields. The result set is a union of the documents that matched both vector queries. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search where the same model can vectorize image and non-image content. You must use REST for this scenario. Currently, there isn't support for multiple vector queries in the alpha SDKs. You must use REST for this scenario. Currently, there isn't support for multiple Search results would include a combination of text and images, assuming your search index includes a field for the image file (a search index doesn't store images). +## Configure a query response ++When you're setting up the vector query, think about the response structure. The response is a flattened rowset. Parameters on the query determine which fields are in each row and how many rows are in the response. The search engine ranks the matching documents and returns the most relevant results. ++### Fields in a response ++Search results are composed of "retrievable" fields from your search index. A result is either: +++ All "retrievable" fields (a REST API default).++ Fields explicitly listed in a "select" parameter on the query. ++The examples in this article used a "select" statement to specify text (non-vector) fields in the response. ++> [!NOTE] +> Vectors aren't designed for readability, so avoid returning them in the response. Instead, choose non-vector fields that are representative of the search document. For example, if the query targets a "descriptionVector" field, return an equivalent text field if you have one ("description") in the response. ++### Number of results ++A query might match any number of documents, as many as all of them if the search criteria are weak (for example "search=*" for a null query). Because it's seldom practical to return unbounded results, you should specify a maximum for the response: +++ `"k": n` results for vector-only queries++ `"top": n` results for hybrid queries that include a "search" parameter++Both "k" and "top" are optional. Unspecified, the default number of results in a response is 50. You can set "top" and "skip" to [page through more results](search-pagination-page-layout.md#paging-results) or change the default. ++### Ranking ++Ranking of results is computed by either: +++ The similarity metric specified in the index `vectorConfiguration` for a vector-only query. Valid values are `cosine`, `euclidean`, and `dotProduct`.++ Reciprocal Rank Fusion (RRF) if there are multiple sets of search results.++Azure OpenAI embedding models use cosine similarity, so if you're using Azure OpenAI embedding models, `cosine` is the recommended metric. Other supported ranking metrics include `euclidean` and `dotProduct`. 
++Multiple sets are created if the query targets multiple vector fields, or if the query is a hybrid of vector and full text search, with or without the optional semantic reranking capabilities of [semantic search](semantic-search-overview.md). Within vector search, a vector query can only target one internal vector index. So for [multiple vector fields](#query-syntax-for-vector-query-over-multiple-fields) and [multiple vector queries](#query-syntax-for-multiple-vector-queries), the search engine generates multiple queries that target the respective vector indexes of each field. Output is a set of ranked results for each query, which are fused using RRF. For more information, see [Vector query execution and scoring](vector-search-ranking.md). + ## Next steps As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), or [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet).- |
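To tie the query pieces of this article's changes together, a hedged PowerShell sketch of a hybrid request; `$queryVector` stands in for the embedding array returned by your model, and `$serviceName` and `$apiKey` are placeholders:

```powershell
# Sketch: hybrid query combining full text search with a vector query.
$body = @{
    search  = 'what Azure services support full text search'
    vectors = @(@{ value = $queryVector; fields = 'contentVector'; k = 5 })
    select  = 'title, content, category'
    top     = 5
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri ("https://$serviceName.search.windows.net/indexes/my-index/docs/search" +
          '?api-version=2023-07-01-Preview') `
    -Headers @{ 'api-key' = $apiKey } `
    -ContentType 'application/json' `
    -Body $body
```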
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | We recommend this article for background, but if you'd rather get started, follo ## What's vector search in Cognitive Search? -Vector search is a new capability for indexing, storing, and retrieving vector embeddings. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401). +Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendation engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401). -Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview). To use vector search, define a *vector field* in the index definition and index documents with vector data. Then you can issue search request with a query vector, returning documents with the requested `k` nearest neighbors (kNN) according to the selected vector similarity metric. +Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview). To use vector search, define a *vector field* in the index definition and index documents with vector data. Then you can issue a search request with a query vector, returning documents with the requested `k` nearest neighbors (kNN) according to the selected vector similarity metric. You can index vector data as fields in documents alongside textual and other types of content. Vector queries can be issued independently or in combination with other query types, including term queries (hybrid search) and filters in the same search request. Scenarios for vector search include: + **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in. -<!-- @Farzad, filterable is false on a vector field, so we need to explain what we mean here. I wonder if it goes with hybrid query? --> -+ **Filtered vector search**. Use [filters](search-filters.md) with vector queries to select a specific category of indexed documents, or to implement document-level security, geospatial search, and more. ++ **Hybrid search**. Vector search is implemented at the field level, which means you can build queries that include vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. -+ **Hybrid search**. For text data, combine the best of vector retrieval and keyword retrieval to obtain the best results. Use with [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. ++ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. 
Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter first, reducing the surface area of the search corpus before running the vector query. + **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. For example, documents that talk about different species of dogs would be cluste Popular vector similarity metrics include the following, which are all supported by Azure Cognitive Search. + `euclidean` (also known as `L2 norm`): This measures the length of the vector difference between two vectors.-+ `cosine`: This measures the angle between two vectors, and is not affected by differing vector lengths. ++ `cosine`: This measures the angle between two vectors, and isn't affected by differing vector lengths. + `dotProduct`: This measures both the length of each of the two vectors, and the angle between them. For normalized vectors, this is identical to `cosine` similarity, but slightly more performant. ### Approximate Nearest Neighbors |
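As an illustration of the filtered vector search scenario above, the request body pairs a vector query with a filter expression. A sketch only; the field names and filter value are placeholders, and `$queryVector` is assumed to come from your embedding model:

```powershell
# Sketch: vector query narrowed by a filter on a filterable text field.
# Send to the same docs/search endpoint shown in the earlier query sketch.
$body = @{
    vectors = @(@{ value = $queryVector; fields = 'contentVector'; k = 5 })
    filter  = "category eq 'Databases'"   # evaluated before the vector query runs
    select  = 'title, category'
} | ConvertTo-Json -Depth 5
```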
search | Vector Search Ranking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md | Last updated 07/07/2023 > [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme). -This article is for developers who need a deeper understanding of ranking of vector queries in Azure Cognitive Search. +This article is for developers who need a deeper understanding of vector query execution and ranking in Azure Cognitive Search. ## Vector similarity -In a vector query, the search query is a vector as opposed to text in full-text queries. Documents which matched the vector query are ranked using vector similarity configured on the vector field defined in the index. A vector query specifies the `k` parameter which determines how many nearest neighbors of the query vector should be returned from the index. +In a vector query, the search query is a vector as opposed to text in full-text queries. Documents that match the vector query are ranked using vector similarity configured on the vector field defined in the index. A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be returned from the index. > [!NOTE] > Full-text search queries could return fewer than the requested number of results if there are fewer or no matches, but vector search will return up to `k` matches as long as there are enough documents in the index. This is because with vector search, similarity is relative to the input query vector, not absolute. This means less relevant results have a worse similarity score, but they can still be the "nearest" vectors if there aren't any closer vectors. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low. In a typical application, the input data within a query request would be fed into the same machine learning model that generated the embedding space for the vector index. This model would output a vector in the same embedding space. Since similar data are clustered close together, finding matches is equivalent to finding the nearest vectors and returning the associated documents as the search result. -If a query request is about dogs, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about dogs. Finding the nearest vectors, or the most "similar" vector based on a similarity metric, would return those relevant documents. +If a query request is about dogs, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about dogs. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant. Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are summarized here: -+ Cosine calculates the angle between two vectors. ++ `cosine` calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/cognitive-services/openai/concepts/understand-embeddings#cosine-similarity). -+ Euclidean calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. 
++ `euclidean` calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. -+ Dot product is affected by both vectors' magnitudes and the angle between them. ++ `dotProduct` is affected by both vectors' magnitudes and the angle between them. -For normalized embedding spaces, dot product is equivalent to the cosine similarity, but is more efficient. +For normalized embedding spaces, dotProduct is equivalent to the cosine similarity, but is more efficient. ## Hybrid search |
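To make the metric definitions concrete, an illustrative computation follows; the service computes similarity internally, so this is only a demonstration of the formula:

```powershell
# Illustrative only: cosine similarity between two equal-length vectors.
function Get-CosineSimilarity {
    param([double[]]$A, [double[]]$B)
    $dot = 0.0; $normA = 0.0; $normB = 0.0
    for ($i = 0; $i -lt $A.Length; $i++) {
        $dot   += $A[$i] * $B[$i]
        $normA += $A[$i] * $A[$i]
        $normB += $B[$i] * $B[$i]
    }
    $dot / ([math]::Sqrt($normA) * [math]::Sqrt($normB))
}

Get-CosineSimilarity -A 1,0,1 -B 1,1,0   # returns 0.5
```

For unit-length (normalized) vectors the denominator is 1, which is why dot product and cosine similarity coincide in normalized embedding spaces.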
sentinel | Best Practices Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-data.md | This section reviews best practices for collecting data using Microsoft Sentinel ## Prioritize your data connectors -If it's unclear to you which data connectors will best serve your environment, start by enabling all [free data connectors](billing.md#free-data-sources). --The free data connectors will start showing value from Microsoft Sentinel as soon as possible, while you continue to plan other data connectors and budgets. --For your [partner](data-connectors-reference.md) and [custom](create-custom-connector.md) data connectors, start by setting up [Syslog](connect-syslog.md) and [CEF](connect-common-event-format.md) connectors, with the highest priority first, as well as any Linux-based devices. --If your data ingestion becomes too expensive, too quickly, stop or filter the logs forwarded using the [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md). --> [!TIP] -> Custom data connectors enable you to ingest data into Microsoft Sentinel from data sources not currently supported by built-in functionality, such as via agent, Logstash, or API. For more information, see [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). -> +Learn how to [prioritize your data connectors](prioritize-data-connectors.md) as part of the Microsoft Sentinel deployment process. ## Filter your logs before ingestion Filter your logs using one of the following methods: Standard configuration for data collection may not work well for your organization, due to various challenges. The following tables describe common challenges or requirements, and possible solutions and considerations. > [!NOTE]-> Many solutions listed below require a custom data connector. For more information, see [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). +> Many solutions listed in the following sections require a custom data connector. For more information, see [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). > ### On-premises Windows log collection Standard configuration for data collection may not work well for your organizati |Challenge / Requirement |Possible solutions |Considerations | ||||-|**Requires log filtering** | Use Logstash <br><br>Use Azure Functions <br><br> Use LogicApps <br><br> Use custom code (.NET, Python) | While filtering can lead to cost savings, and ingests only the required data, some Microsoft Sentinel features are not supported, such as [UEBA](identify-threats-with-entity-behavior-analytics.md), [entity pages](entity-pages.md), [machine learning](bring-your-own-ml.md), and [fusion](fusion.md). <br><br>When configuring log filtering, you'll need to make updates in resources such as threat hunting queries and analytics rules | +|**Requires log filtering** | Use Logstash <br><br>Use Azure Functions <br><br> Use LogicApps <br><br> Use custom code (.NET, Python) | While filtering can lead to cost savings, and ingests only the required data, some Microsoft Sentinel features aren't supported, such as [UEBA](identify-threats-with-entity-behavior-analytics.md), [entity pages](entity-pages.md), [machine learning](bring-your-own-ml.md), and [fusion](fusion.md). 
<br><br>When configuring log filtering, make updates in resources such as threat hunting queries and analytics rules | |**Agent cannot be installed** |Use Windows Event Forwarding, supported with the [Azure Monitor Agent](connect-windows-security-events.md#connector-options) | Using Windows Event forwarding lowers load-balancing events per second from the Windows Event Collector, from 10,000 events to 500-1000 events.| |**Servers do not connect to the internet** | Use the [Log Analytics gateway](../azure-monitor/agents/gateway.md) | Configuring a proxy to your agent requires extra firewall rules to allow the Gateway to work. | |**Requires tagging and enrichment at ingestion** |Use Logstash to inject a ResourceID <br><br>Use an ARM template to inject the ResourceID into on-premises machines <br><br>Ingest the resource ID into separate workspaces | Log Analytics doesn't support RBAC for custom tables <br><br>Microsoft Sentinel doesn't support row-level RBAC <br><br>**Tip**: You may want to adopt cross workspace design and functionality for Microsoft Sentinel. | |**Requires splitting operation and security logs** | Use the [Microsoft Monitor Agent or Azure Monitor Agent](connect-windows-security-events.md) multi-home functionality | Multi-home functionality requires more deployment overhead for the agent. |-|**Requires custom logs** | Collect files from specific folder paths <br><br>Use API ingestion <br><br>Use PowerShell <br><br>Use Logstash | You may have issues filtering your logs. <br><br>Custom methods are not supported. <br><br>Custom connectors may require developer skills. | +|**Requires custom logs** | Collect files from specific folder paths <br><br>Use API ingestion <br><br>Use PowerShell <br><br>Use Logstash | You may have issues filtering your logs. <br><br>Custom methods aren't supported. <br><br>Custom connectors may require developer skills. | ### On-premises Linux log collection |Challenge / Requirement |Possible solutions |Considerations | ||||-|**Requires log filtering** | Use Syslog-NG <br><br>Use Rsyslog <br><br>Use FluentD configuration for the agent <br><br> Use the Azure Monitor Agent/Microsoft Monitoring Agent <br><br> Use Logstash | Some Linux distributions may not be supported by the agent. <br> <br>Using Syslog or FluentD requires developer knowledge. <br><br>For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md) and [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). | +|**Requires log filtering** | Use Syslog-NG <br><br>Use Rsyslog <br><br>Use FluentD configuration for the agent <br><br> Use the Azure Monitor Agent/Microsoft Monitoring Agent <br><br> Use Logstash | Some Linux distributions might not be supported by the agent. <br> <br>Using Syslog or FluentD requires developer knowledge. <br><br>For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md) and [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). | |**Agent cannot be installed** | Use a Syslog forwarder, such as syslog-ng or rsyslog. | | |**Servers do not connect to the internet** | Use the [Log Analytics gateway](../azure-monitor/agents/gateway.md) | Configuring a proxy to your agent requires extra firewall rules to allow the Gateway to work. |-|**Requires tagging and enrichment at ingestion** | Use Logstash for enrichment, or custom methods, such as API or EventHubs. 
| You may have extra effort required for filtering. | +|**Requires tagging and enrichment at ingestion** | Use Logstash for enrichment, or custom methods, such as API or Event Hubs. | You may have extra effort required for filtering. | |**Requires splitting operation and security logs** | Use the [Azure Monitor Agent](connect-windows-security-events.md) with the multi-homing configuration. | | |**Requires custom logs** | Create a custom collector using the Microsoft Monitoring (Log Analytics) agent. | | If you need to collect Microsoft Office data, outside of the standard connector |**Filter logs from other platforms** | Use Logstash <br><br>Use the Azure Monitor Agent / Microsoft Monitoring (Log Analytics) agent | Custom collection has extra ingestion costs. <br><br>You may face the challenge of collecting all Windows events versus only security events. | |**Agent cannot be used** | Use Windows Event Forwarding | You may need to load balance efforts across your resources. | |**Servers are in an air-gapped network** | Use the [Log Analytics gateway](../azure-monitor/agents/gateway.md) | Configuring a proxy to your agent requires firewall rules to allow the Gateway to work. |-|**RBAC, tagging, and enrichment at ingestion** | Create custom collection via Logstash or the Log Analytics API. | RBAC is not supported for custom tables <br><br>Row-level RBAC is not supported for any tables. | +|**RBAC, tagging, and enrichment at ingestion** | Create custom collection via Logstash or the Log Analytics API. | RBAC isn't supported for custom tables <br><br>Row-level RBAC isn't supported for any tables. | ## Next steps |
sentinel | Best Practices Workspace Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md | description: Learn about best practices for designing your Microsoft Sentinel wo Previously updated : 01/09/2023 Last updated : 06/28/2023 # Microsoft Sentinel workspace architecture best practices Don't apply a resource lock to a Log Analytics workspace you'll use for Microsof If you do need to work with multiple workspaces, simplify your incident management and investigation by [condensing and listing all incidents from each Microsoft Sentinel instance in a single location](multiple-workspace-view.md). -To reference data that's held in other Microsoft Sentinel workspaces, such as in [cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#cross-workspace-workbooks), use [cross-workspace queries](extend-sentinel-across-workspaces-tenants.md). +To reference data that's held in other Microsoft Sentinel workspaces, such as in [cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#use-cross-workspace-workbooks), use [cross-workspace queries](extend-sentinel-across-workspaces-tenants.md#query-multiple-workspaces). The best time to use cross-workspace queries is when valuable information is stored in a different workspace, subscription or tenant, and can provide value to your current action. For example, the following code shows a sample cross-workspace query: union Update, workspace("contosoretail-it").Update, workspace("WORKSPACE ID").Up For more information, see [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md). ## Next steps-> [!div class="nextstepaction"] -> >[Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md) -> [!div class="nextstepaction"] -> >[Microsoft Sentinel sample workspace designs](sample-workspace-designs.md) -> [!div class="nextstepaction"] -> >[On-board Microsoft Sentinel](quickstart-onboard.md) -> [!div class="nextstepaction"] -> >[Get visibility into alerts](get-visibility.md) +In this article, you learned about key decision factors to help you determine the right workspace architecture for your organizations. ++> [!div class="nextstepaction"] +> >[Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md) |
sentinel | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md | For data connectors that include both free and paid data types, select which dat Learn more about how to [connect data sources](connect-data-sources.md), including free and paid data sources. --## Next steps +## Learn more - [Monitor costs for Microsoft Sentinel](billing-monitor-costs.md) - [Reduce costs for Microsoft Sentinel](billing-reduce-costs.md) Learn more about how to [connect data sources](connect-data-sources.md), includi - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. - For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).++## Next steps ++In this article, you learned how to plan costs and understand the billing for Microsoft Sentinel. ++> [!div class="nextstepaction"] +> >[Deploy Microsoft Sentinel](deploy-overview.md) |
sentinel | Configure Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-content.md | + + Title: Configure Microsoft Sentinel content +description: In this step of your deployment, you configure the Microsoft Sentinel security content, like your data connectors, analytics rules, automation rules, and more. ++ Last updated : 07/05/2023++#Customer intent: As a SOC analyst, I want to configure the Microsoft Sentinel security content, so I can protect my organization against threats. +++# Configure Microsoft Sentinel content ++In the previous deployment step, you enabled Microsoft Sentinel, health monitoring, and the required solutions. In this article, you learn how to configure the different types of Microsoft Sentinel security content, which allow you to detect, monitor, and respond to security threats across your systems. ++## Configure your security content ++|Step |Description | +||| +|**Set up data connectors** |Based on the [data sources you selected when you planned your deployment](prioritize-data-connectors.md), and after [enabling the relevant solutions](enable-sentinel-features-content.md), you can now install or set up your data connectors.<br><br>- If you're using an existing connector, [find your connector](data-connectors-reference.md) from this full list of data connectors.<br>- If you're creating a custom connector, use [these resources](create-custom-connector.md).<br>- If you're setting up a connector to ingest CEF or Syslog logs, review these [options](connect-cef-syslog-options.md). | +|**Set up analytics rules** |After you've set up Microsoft Sentinel to collect data from all over your organization, you can begin using threat detection rules or [analytics rules](detect-threats-built-in.md). Select the steps you need to set up and configure your analytics rules:<br><br>- [Create a scheduled query rule](detect-threats-custom.md): Create custom analytics rules to help discover threats and anomalous behaviors in your environment.<br>- [Map data fields to entities](map-data-fields-to-entities.md): Add or change entity mappings in an existing analytics rule.<br>- [Surface custom details in alerts](surface-custom-details-in-alerts.md): Add or change custom details in an existing analytics rule.<br>- [Customize alert details](customize-alert-details.md): Override the default properties of alerts with content from the underlying query results.<br>- [Export and import analytics rules](import-export-analytics-rules.md): Export your analytics rules to Azure Resource Manager (ARM) template files, and import rules from these files. The export action creates a JSON file in your browser's downloads location, that you can then rename, move, and otherwise handle like any other file.<br>- [Create near-real-time (NRT) detection analytics rules](create-nrt-rules.md): Create near-time analytics rules for up-to-the-minute threat detection out-of-the-box. 
This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.<br>- [Work with anomaly detection analytics rules](work-with-anomaly-rules.md): Work with built-in anomaly templates that use thousands of data sources and millions of events, or change thresholds and parameters for the anomalies within the user interface.<br>- [Manage template versions for your scheduled analytics rules](manage-analytics-rule-templates.md): Track the versions of your analytics rule templates, and either revert active rules to existing template versions, or update them to new ones.<br>- [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md): Learn how ingestion delay might impact your scheduled analytics rules and how you can fix them to cover these gaps. | +|**Set up automation rules** |[Create automation rules](create-manage-use-automation-rules.md). Define the triggers and conditions that determine when your [automation rule](automate-incident-handling-with-automation-rules.md) runs, the various actions that you can have the rule perform, and the remaining features and functionalities. | +|**Set up playbooks** |A [playbook](automate-responses-with-playbooks.md) is a collection of remediation actions that you run from Microsoft Sentinel as a routine, to help automate and orchestrate your threat response. To set up playbooks:<br><br>- Review these [steps for creating a playbook](automate-responses-with-playbooks.md#steps-for-creating-a-playbook)<br>- [Create playbooks from templates](use-playbook-templates.md): A playbook template is a prebuilt, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios. | +|**Set up workbooks** |[Workbooks](monitor-your-data.md) provide a flexible canvas for data analysis and the creation of rich visual reports within Microsoft Sentinel. Workbook templates allow you to quickly gain insights across your data as soon as you connect a data source. To set up workbooks:<br><br>- [Create custom workbooks across your data](monitor-your-data.md#create-new-workbook)<br>- [Use existing workbook templates available with packaged solutions](monitor-your-data.md#use-a-workbook-template) | +|**Set up watchlists** |[Watchlists](watchlists.md) allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. To set up watchlists:<br><br>- [Create watchlists](watchlists-create.md)<br>- [Build queries or detection rules with watchlists](watchlists-queries.md): Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the SearchKey. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches. | ++## Next steps ++In this article, you learned how to configure the different types of Microsoft Sentinel security content. ++> [!div class="nextstepaction"] +>>[Set up multiple workspaces](use-multiple-workspaces.md) |
sentinel | Configure Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-retention-archive.md | + + Title: Configure data retention and archive in Microsoft Sentinel +description: Towards the end of your deployment procedure, you set up data retention to suit your organization's needs. ++ Last updated : 07/05/2023++#Customer intent: As a SOC analyst, I want to set up data retention and archive settings so I can retain the data that's important to my organization in the long term. +++# Configure data retention and archive in Microsoft Sentinel ++In the previous deployment step, you enabled the User and Entity Behavior Analytics (UEBA) feature to streamline your analysis process. In this article, you learn how to set up data retention and archive, to make sure your organization retains the data that's important in the long term. ++## Configure data retention and archive ++Retention policies define when to remove or archive data in a Log Analytics workspace. Archiving lets you keep older, less used data in your workspace at a reduced cost. To set up data retention, use one or both of these methods, depending on your use case: ++- [Configure data retention and archive for one or more tables](../azure-monitor/logs/data-retention-archive.md) (one table at a time) +- [Configure data retention and archive for multiple tables](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/Archive-Log-Tool) at once ++## Next steps ++In this article, you learned how to set up data retention and archive. ++> [!div class="nextstepaction"] +>>[Perform post-deployment steps](review-fine-tune-overview.md) |
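As a sketch of what the per-table method does under the hood, the retention settings can also be patched through the Tables REST API; the subscription, resource group, and workspace names are placeholders, and you should verify the current API version:

```powershell
# Sketch: set interactive retention to 90 days and total (archive-inclusive)
# retention to 365 days for one table.
$path = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' +
        '/providers/Microsoft.OperationalInsights/workspaces/<workspace>' +
        '/tables/SecurityEvent?api-version=2022-10-01'
$body = @{
    properties = @{ retentionInDays = 90; totalRetentionInDays = 365 }
} | ConvertTo-Json

Invoke-AzRestMethod -Method PATCH -Path $path -Payload $body
```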
sentinel | Deploy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/deploy-overview.md | + + Title: Deploy Microsoft Sentinel +description: Learn about the steps for deploying Microsoft Sentinel. +++ Last updated : 07/05/2023++# Deploy Microsoft Sentinel ++This article introduces the activities that help you deploy Microsoft Sentinel. To plan for your deployment, review the [plan and prepare overview](prerequisites.md). ++The deployment phase is typically performed by a SOC analyst or related roles. ++## Deployment overview ++| Step | Details | +| | - | +| **1. Deployment overview** | **YOU ARE HERE** | +| [**2. Enable Microsoft Sentinel, health and audit, and content**](enable-sentinel-features-content.md) | Enable Microsoft Sentinel, enable the health and audit feature, and enable the solutions and content you've identified according to your organization's needs. | +| [**3. Configure content**](configure-content.md) | Configure the different types of Microsoft Sentinel security content, which allow you to detect, monitor, and respond to security threats across your systems: Data connectors, analytics rules, automation rules, playbooks, workbooks, and watchlists. | +| [**4. Set up a cross-workspace architecture**](use-multiple-workspaces.md) |If your environment requires multiple workspaces, you can now set them up as part of your deployment. In this article, you learn how to set up Microsoft Sentinel to extend across multiple workspaces and tenants. | +| [**5. Enable User and Entity Behavior Analytics (UEBA)**](enable-entity-behavior-analytics.md) | Enable and use the [UEBA](identify-threats-with-entity-behavior-analytics.md) feature to streamline the analysis process. | +| [**6. Set up data retention and archive**](configure-data-retention-archive.md) |Set up data retention and archive, to make sure your organization retains the data that's important in the long term. | ++## Next steps ++In this article, you reviewed the activities that help you deploy Microsoft Sentinel. ++> [!div class="nextstepaction"] +> >[Enable Microsoft Sentinel, health and audit, and content](enable-sentinel-features-content.md) |
sentinel | Design Your Workspace Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md | Title: Design your Microsoft Sentinel workspace architecture | Microsoft Docs + Title: Design your Microsoft Sentinel workspace architecture description: Use a decision tree to understand how you might want to design your Microsoft Sentinel workspace architecture. Previously updated : 01/09/2023 Last updated : 06/28/2023 # Design your Microsoft Sentinel workspace architecture When planning to use resource-context or table level RBAC, consider the followin ## Next steps -For examples of this decision tree in practice, see [Microsoft Sentinel sample workspace designs](sample-workspace-designs.md). +In this article, you reviewed a decision tree to help you make key decisions about how to design your Microsoft Sentinel workspace architecture. -For more information, see: --- [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)-- [Best practices for Microsoft Sentinel](best-practices.md)+> [!div class="nextstepaction"] +> >[Microsoft Sentinel sample workspace designs](sample-workspace-designs.md) |
sentinel | Enable Entity Behavior Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md | Title: Use entity behavior analytics to detect advanced threats | Microsoft Docs + Title: Enable entity behavior analytics to detect advanced threats description: Enable User and Entity Behavior Analytics in Microsoft Sentinel, and configure data sources Previously updated : 11/09/2021 Last updated : 07/05/2023 - # Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel +In the previous deployment step, you enabled the Microsoft Sentinel security content you need to protect your systems. In this article, you learn how to enable and use the UEBA feature to streamline the analysis process. ++As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as users, hosts, IP addresses, and applications) across time and peer group horizon. Using a variety of techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Learn more about [UEBA](identify-threats-with-entity-behavior-analytics.md). + [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## Prerequisites To enable or disable this feature (these prerequisites are not required to use t ## Next steps -In this document, you learned how to enable and configure User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel. For more information about UEBA: -- See the [list of anomalies](anomalies-reference.md#ueba-anomalies) detected using UEBA.-- Learn more about [how UEBA works](identify-threats-with-entity-behavior-analytics.md) and how to use it.+In this article, you learned how to enable and configure User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel. For more information about UEBA: -To learn more about Microsoft Sentinel, see the following articles: -- Learn how to [get visibility into your data, and potential threats](get-visibility.md).-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).+> [!div class="nextstepaction"] +>>[Configure data retention and archive](configure-data-retention-archive.md) |
sentinel | Enable Sentinel Features Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-sentinel-features-content.md | + + Title: Enable Microsoft Sentinel and initial features and content +description: As the first step of your deployment, you enable Microsoft Sentinel, and then enable the health and audit feature, solutions, and content. ++ Last updated : 07/05/2023++#Customer intent: As a SOC analyst, I want to enable the Microsoft Sentinel service and the key features and content, so I can get started with my deployment. +++# Enable Microsoft Sentinel and initial features and content ++To begin your deployment, you need to enable Microsoft Sentinel and set up key features and content. In this article, you learn how to enable Microsoft Sentinel, enable the health and audit feature, and enable the solutions and content you've identified according to your organization's needs. ++## Enable features and content ++|Step |Description | +||| +|1. [Enable the Microsoft Sentinel service](quickstart-onboard.md#enable) | In the Azure portal, enable Microsoft Sentinel to run on the Log Analytics workspace your organization planned as part of your workspace design. | +|2. [Enable health and audit](enable-monitoring.md) |Enable health and audit at this stage of your deployment to make sure that the service's many moving parts are always functioning as intended and that the service isn't being manipulated by unauthorized actions. Learn more about the [health and audit](health-audit.md) feature. | +|3. [Enable solutions and content](sentinel-solutions-deploy.md) |When you planned your deployment, you identified which data sources you need to ingest into Microsoft Sentinel. Now, you want to enable the relevant solutions and content so that the data you need can start flowing into Microsoft Sentinel. | ++## Next steps ++In this article, you learned how to enable Microsoft Sentinel, its health and audit feature, and required content. ++> [!div class="nextstepaction"] +>>[Configure content](configure-content.md) |
sentinel | Extend Sentinel Across Workspaces Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md | Title: Extend Microsoft Sentinel across workspaces and tenants description: How to use Microsoft Sentinel to query and analyze data across workspaces and tenants. - Previously updated : 07/14/2022+ Last updated : 06/28/2023 #Customer intent: As a security operator, I want to extend my workspace so I can query and analyze data across workspaces and tenants. # Extend Microsoft Sentinel across workspaces and tenants -## The need to use multiple Microsoft Sentinel workspaces +When you onboard Microsoft Sentinel, your first step is to select your Log Analytics workspace. While you can get the full benefit of the Microsoft Sentinel experience with a single workspace, in some cases, you might want to extend your workspace to query and analyze your data across workspaces and tenants. Learn more about [how Microsoft Sentinel can extend across multiple workspaces](prepare-multiple-workspaces.md). -When you onboard Microsoft Sentinel, your first step is to select your Log Analytics workspace. While you can get the full benefit of the Microsoft Sentinel experience with a single workspace, in some cases, you might want to extend your workspace to query and analyze your data across workspaces and tenants. --This table lists some of these scenarios and, when possible, suggests how you may use a single workspace for the scenario. --| Requirement | Description | Ways to reduce workspace count | -|-|-|--| -| Sovereignty and regulatory compliance | A workspace is tied to a specific region. To keep data in different [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/) to satisfy regulatory requirements, split up the data into separate workspaces. | | -| Data ownership | The boundaries of data ownership, for example by subsidiaries or affiliated companies, are better delineated using separate workspaces. | | -| Multiple Azure tenants | Microsoft Sentinel supports data collection from Microsoft and Azure SaaS resources only within its own Azure Active Directory (Azure AD) tenant boundary. Therefore, each Azure AD tenant requires a separate workspace. | | -| Granular data access control | An organization may need to allow different groups, within or outside the organization, to access some of the data collected by Microsoft Sentinel. For example:<br><ul><li>Resource owners' access to data pertaining to their resources</li><li>Regional or subsidiary SOCs' access to data relevant to their parts of the organization</li></ul> | Use [resource Azure RBAC](resource-context-rbac.md) or [table level Azure RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043) | -| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings. | Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion]([Managing personal data in Log Analytics and Application Insights](../azure-monitor/logs/personal-data-mgmt.md#exporting-and-deleting-personal-data) | -| Split billing | By placing workspaces in separate subscriptions, they can be billed to different parties. 
| Usage reporting and cross-charging | -| Legacy architecture | The use of multiple workspaces may stem from a historical design that took into consideration limitations or best practices which don't hold true anymore. It might also be an arbitrary design choice that can be modified to better accommodate Microsoft Sentinel.<br><br>Examples include:<br><ul><li>Using a per-subscription default workspace when deploying Microsoft Defender for Cloud</li><li>The need for granular access control or retention settings, the solutions for which are relatively new</li></ul> | Re-architect workspaces | --### Managed Security Service Provider (MSSP) --In case of an MSSP, many if not all of the above requirements apply, making multiple workspaces, across tenants, the best practice. The MSSP can use [Azure Lighthouse](../lighthouse/overview.md) to extend Microsoft Sentinel cross-workspace capabilities across tenants. --## Microsoft Sentinel multiple workspace architecture --As implied by the requirements above, there are cases where a single SOC needs to centrally manage and monitor multiple Microsoft Sentinel workspaces, potentially across Azure Active Directory (Azure AD) tenants. --- An MSSP Microsoft Sentinel Service.--- A global SOC serving multiple subsidiaries, each having its own local SOC.--- A SOC monitoring multiple Azure AD tenants within an organization.--To address these cases, Microsoft Sentinel offers multiple-workspace capabilities that enable central monitoring, configuration, and management, providing a single pane of glass across everything covered by the SOC. This diagram shows an example architecture for such use cases. ---This model offers significant advantages over a fully centralized model in which all data is copied to a single workspace: --- Flexible role assignment to the global and local SOCs, or to the MSSP its customers.--- Fewer challenges regarding data ownerships, data privacy and regulatory compliance.--- Minimal network latency and charges.--- Easy onboarding and offboarding of new subsidiaries or customers.--In the following sections, we'll explain how to operate this model, and particularly how to: --- Centrally monitor multiple workspaces, potentially across tenants, providing the SOC with a single pane of glass.--- Centrally configure and manage multiple workspaces, potentially across tenants, using automation.--## Cross-workspace monitoring --### Manage incidents on multiple workspaces +## Manage incidents on multiple workspaces Microsoft Sentinel supports a [multiple workspace incident view](./multiple-workspace-view.md) where you can centrally manage and monitor incidents across multiple workspaces. The centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace. -### Cross-workspace querying +## Query multiple workspaces You can query [multiple workspaces](../azure-monitor/logs/cross-workspace-query.md), allowing you to search and correlate data from multiple workspaces in a single query. You can query [multiple workspaces](../azure-monitor/logs/cross-workspace-query. You can then write a query across both workspaces by beginning with `unionSecurityEvent | where ...` . 
-#### Cross-workspace analytics rules<a name="scheduled-alerts"></a> +### Include cross-workspace queries in scheduled analytics rules<a name="scheduled-alerts"></a> + <!-- Bookmark added for backward compatibility with old heading -->-You can now include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. This use is subject to the following limitations: +You can include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. This use is subject to the following limitations: - You can include **up to 20 workspaces** in a single query. However, for good performance, we recommend including no more than 5. - You must deploy Microsoft Sentinel **on every workspace** referenced in the query. Alerts and incidents created by cross-workspace analytics rules contain all the > [!NOTE] > Querying multiple workspaces in the same query might affect performance, and therefore is recommended only when the logic requires this functionality. -#### Cross-workspace workbooks<a name="using-cross-workspace-workbooks"></a> +### Use cross-workspace workbooks<a name="using-cross-workspace-workbooks"></a> <!-- Bookmark added for backward compatibility with old heading --> Workbooks provide dashboards and apps to Microsoft Sentinel. When working with multiple workspaces, workbooks provide monitoring and actions across workspaces. Workbooks can provide cross-workspace queries in one of three methods, suitable | Add a workspace selector to the workbook | The workbook creator can [implement a workspace selector as part of the workbook](https://techcommunity.microsoft.com/t5/azure-sentinel/making-your-azure-sentinel-workbooks-multi-tenant-or-multi/ba-p/1402357). | I want to allow the user to control the workspaces shown by the workbook, with an easy-to-use dropdown box. | | Edit the workbook interactively | An advanced user modifying an existing workbook can edit the queries in it, selecting the target workspaces using the workspace selector in the editor. | I want to allow a power user to easily modify existing workbooks to work with multiple workspaces. | -#### Cross-workspace hunting +### Hunt across multiple workspaces Microsoft Sentinel provides preloaded query samples designed to get you started and get you familiar with the tables and the query language. Microsoft security researchers constantly add new built-in queries and fine-tune existing queries. You can use these queries to look for new detections and identify signs of intrusion that your security tools may have missed. -Cross-workspace hunting capabilities enable your threat hunters to create new hunting queries, or adapt existing ones, to cover multiple workspaces, by using the union operator and the workspace() expression as shown [above](#cross-workspace-querying). +Cross-workspace hunting capabilities enable your threat hunters to create new hunting queries, or adapt existing ones, to cover multiple workspaces, by using the union operator and the workspace() expression as shown [above](#query-multiple-workspaces). -## Cross-workspace management using automation +## Manage multiple workspaces using automation To configure and manage multiple Microsoft Sentinel workspaces, you need to automate the use of the Microsoft Sentinel management API. 
When using Azure Lighthouse, it's recommended to create a group for each Microso In this article, you learned how Microsoft Sentinel's capabilities can be extended across multiple workspaces and tenants. For practical guidance on implementing Microsoft Sentinel's cross-workspace architecture, see the following articles: - Learn how to [work with multiple tenants](./multiple-tenants-service-providers.md) in Microsoft Sentinel, using Azure Lighthouse.-- Learn how to [view and manage incidents in multiple workspaces](./multiple-workspace-view.md) seamlessly.--+- Learn how to [view and manage incidents in multiple workspaces](./multiple-workspace-view.md) seamlessly. |
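The row above describes querying across workspaces with the `union` operator and the `workspace()` expression, including saving the combined expression as a function such as `unionSecurityEvent`. A minimal sketch of the pattern, run here through PowerShell; the workspace name `contoso-soc-eu` and the event filter are hypothetical:

```powershell
# Sketch: correlate failed logons across the local workspace and one remote
# workspace referenced by name.
$kql = @"
union SecurityEvent, workspace('contoso-soc-eu').SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count() by Computer
| top 10 by FailedLogons desc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId '<local-workspace-guid>' -Query $kql
```

Saving the `union` expression as a workspace function, as the article does with `unionSecurityEvent`, keeps hunting queries short and reusable across workspaces.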
sentinel | Mssp Protect Intellectual Property | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mssp-protect-intellectual-property.md | -The method you choose will depend on how each of your customers buy Azure; whether you act as a [Cloud Solutions Provider (CSP)](#cloud-solutions-providers-csp), or the customer has an [Enterprise Agreement (EA)/Pay-as-you-go (PAYG)](#enterprise-agreements-ea--pay-as-you-go-payg) account. The sections below describe each of these methods separately. +The method you choose depends on how each of your customers buys Azure; whether you act as a [Cloud Solutions Provider (CSP)](#cloud-solutions-providers-csp), or the customer has an [Enterprise Agreement (EA)/Pay-as-you-go (PAYG)](#enterprise-agreements-ea--pay-as-you-go-payg) account. The following sections describe each of these methods separately. ## Cloud Solutions Providers (CSP) For example: - Use this method to enable customers to view selected workbooks and playbooks, which are separate resources that can reside in their own resource group. -Even with granting access at the resource group level, customers will still have access to log data for the resources they can access, such as logs from a VM, even without access to Microsoft Sentinel. For more information, see [Manage access to Microsoft Sentinel data by resource](resource-context-rbac.md). +Even with granting access at the resource group level, customers have access to log data for the resources they can access, such as logs from a VM, even without access to Microsoft Sentinel. For more information, see [Manage access to Microsoft Sentinel data by resource](resource-context-rbac.md). > [!TIP] > If you need to provide your customers with access to the entire subscription, you may want to see the guidance in [Enterprise Agreements (EA) / Pay-as-you-go (PAYG)](#enterprise-agreements-ea--pay-as-you-go-payg). For more information, also see the [Azure Lighthouse documentation](../lighthous ## Enterprise Agreements (EA) / Pay-as-you-go (PAYG) -If your customer is buying directly from Microsoft, the customer already has full access to the Azure environment, and you cannot hide anything that's in the customer's Azure subscription. +If your customer is buying directly from Microsoft, the customer already has full access to the Azure environment, and you can't hide anything that's in the customer's Azure subscription. Instead, protect your intellectual property that you've developed in Microsoft Sentinel as follows, depending on the type of resource you need to protect: ### Analytics rules and hunting queries -Analytics rules and hunting queries are both contained within Microsoft Sentinel, and therefore cannot be separated from the Microsoft Sentinel workspace. +Analytics rules and hunting queries are both contained within Microsoft Sentinel, and therefore can't be separated from the Microsoft Sentinel workspace. -Even if a user only has Microsoft Sentinel Reader permissions, they'll still be able to view the query. In this case, we recommend hosting your Analytics rules and hunting queries in your own MSSP tenant, instead of the customer tenant. +Even if a user only has Microsoft Sentinel Reader permissions, they can view the query. In this case, we recommend hosting your Analytics rules and hunting queries in your own MSSP tenant, instead of the customer tenant. 
-To do this, you'll need a workspace in your own tenant with Microsoft Sentinel enabled, and you'll also need to see the customer workspace via [Azure Lighthouse](multiple-tenants-service-providers.md). +To do this, you need a workspace in your own tenant with Microsoft Sentinel enabled, and you also need to see the customer workspace via [Azure Lighthouse](multiple-tenants-service-providers.md). To create an analytic rule or hunting query in the MSSP tenant that references data in the customer tenant, you must use the `workspace` statement as follows: workspace('<customer-workspace>').SecurityEvent When adding a `workspace` statement to your analytics rules, consider the following: -- **No alerts in the customer workspace**. Rules created in this manner, won't create alerts or incidents in the customer workspace. Both alerts and incidents will exist in your MSSP workspace only.+- **No alerts in the customer workspace**. Rules created in this manner, don't create alerts or incidents in the customer workspace. Both alerts and incidents exist in your MSSP workspace only. -- **Create separate alerts for each customer**. When you use this method, we also recommend that you use separate alert rules for each customer and detection, as the workspace statement will be different in each case.+- **Create separate alerts for each customer**. When you use this method, we also recommend that you use separate alert rules for each customer and detection, as the workspace statement is different in each case. You can add the customer name to the alert rule name to easily identify the customer where the alert is triggered. Separate alerts may result in a large number of rules, which you might want to manage using scripting, or [Microsoft Sentinel as Code](https://techcommunity.microsoft.com/t5/azure-sentinel/deploying-and-managing-azure-sentinel-as-code/ba-p/1131928). For example: :::image type="content" source="media/mssp-protect-intellectual-property/cross-workspace-workbook.png" alt-text="Cross-workspace workbooks"::: -For more information, see [Cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#cross-workspace-workbooks). +For more information, see [Cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#use-cross-workspace-workbooks). If you want the customer to be able to view the workbook visualizations, while still keeping the code secret, we recommend that you export the workbook to Power BI. You can protect your playbooks as follows, depending on where the analytic rule In both cases, if the playbook needs to access the customer's Azure environment, use a user or service principal that has that access via Lighthouse. -However, if the playbook needs to access non-Azure resources in the customer's tenant, such as Azure AD, Office 365, or Microsoft 365 Defender, you'll need to create a service principal with appropriate permissions in the customer tenant, and then add that identity in the playbook. +However, if the playbook needs to access non-Azure resources in the customer's tenant, such as Azure AD, Office 365, or Microsoft 365 Defender, create a service principal with appropriate permissions in the customer tenant, and then add that identity in the playbook. > [!NOTE] > If you use automation rules together with your playbooks, you must set the automation rule permissions on the resource group where the playbooks live. |
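The row above recommends one alert rule per customer, since each rule differs only in its `workspace()` statement. A minimal sketch of generating those per-customer query bodies; the workspace names and the failure threshold are hypothetical:

```powershell
# Sketch: stamp out one detection query per customer workspace.
$customers = 'customer-a-ws', 'customer-b-ws'
foreach ($customer in $customers) {
    $kql = @"
workspace('$customer').SecurityEvent
| where EventID == 4625
| summarize Failures = count() by Account, Computer
| where Failures > 10
"@
    # Feed $kql into your scripted rule deployment, for example the
    # "Microsoft Sentinel as Code" approach linked in the row above.
    Write-Output "Detection query for $customer :"
    Write-Output $kql
}
```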
sentinel | Prepare Multiple Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prepare-multiple-workspaces.md | + + Title: Prepare for multiple workspaces and tenants in Microsoft Sentinel +description: To prepare for your deployment, learn how Microsoft Sentinel can extend across multiple workspaces and tenants. ++ Last updated : 06/28/2023+++#Customer intent: As a SOC architect, I want to learn about how Microsoft Sentinel can extend across workspaces so I can determine whether I need this capability and prepare accordingly. +++# Prepare for multiple workspaces and tenants in Microsoft Sentinel ++To prepare for your deployment, you need to determine whether a multiple workspace architecture is relevant for your environment. In this article, you learn how Microsoft Sentinel can extend across multiple workspaces and tenants so you can determine whether this capability suits your organization's needs. ++If you've determined and set up your environment to extend across workspaces, you can [manage and monitor cross-workspace architecture](extend-sentinel-across-workspaces-tenants.md) or [manage multiple workspaces with workspace manager](workspace-manager.md). ++## The need to use multiple Microsoft Sentinel workspaces ++When you onboard Microsoft Sentinel, your first step is to select your Log Analytics workspace. While you can get the full benefit of the Microsoft Sentinel experience with a single workspace, in some cases, you might want to extend your workspace to query and analyze your data across workspaces and tenants. ++This table lists some of these scenarios and, when possible, suggests how you may use a single workspace for the scenario. ++| Requirement | Description | Ways to reduce workspace count | +|-|-|--| +| Sovereignty and regulatory compliance | A workspace is tied to a specific region. To keep data in different [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/) to satisfy regulatory requirements, split up the data into separate workspaces. | | +| Data ownership | The boundaries of data ownership, for example by subsidiaries or affiliated companies, are better delineated using separate workspaces. | | +| Multiple Azure tenants | Microsoft Sentinel supports data collection from Microsoft and Azure SaaS resources only within its own Azure Active Directory (Azure AD) tenant boundary. Therefore, each Azure AD tenant requires a separate workspace. | | +| Granular data access control | An organization may need to allow different groups, within or outside the organization, to access some of the data collected by Microsoft Sentinel. For example:<br><ul><li>Resource owners' access to data pertaining to their resources</li><li>Regional or subsidiary SOCs' access to data relevant to their parts of the organization</li></ul> | Use [resource Azure RBAC](resource-context-rbac.md) or [table level Azure RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043) | +| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings.
| Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion](../azure-monitor/logs/personal-data-mgmt.md#exporting-and-deleting-personal-data) | +| Split billing | By placing workspaces in separate subscriptions, they can be billed to different parties. | Usage reporting and cross-charging | +| Legacy architecture | The use of multiple workspaces may stem from a historical design that took into consideration limitations or best practices which don't hold true anymore. It might also be an arbitrary design choice that can be modified to better accommodate Microsoft Sentinel.<br><br>Examples include:<br><ul><li>Using a per-subscription default workspace when deploying Microsoft Defender for Cloud</li><li>The need for granular access control or retention settings, the solutions for which are relatively new</li></ul> | Re-architect workspaces | ++### Managed Security Service Provider (MSSP) ++In the case of an MSSP, many if not all of the above requirements apply, making multiple workspaces, across tenants, the best practice. The MSSP can use [Azure Lighthouse](../lighthouse/overview.md) to extend Microsoft Sentinel cross-workspace capabilities across tenants. ++## Microsoft Sentinel multiple workspace architecture ++As implied by the requirements above, there are cases where a single SOC needs to centrally manage and monitor multiple Microsoft Sentinel workspaces, potentially across Azure Active Directory (Azure AD) tenants. ++- An MSSP Microsoft Sentinel Service. ++- A global SOC serving multiple subsidiaries, each having its own local SOC. ++- A SOC monitoring multiple Azure AD tenants within an organization. ++To address these cases, Microsoft Sentinel offers multiple-workspace capabilities that enable central monitoring, configuration, and management, providing a single pane of glass across everything covered by the SOC. This diagram shows an example architecture for such use cases. +++This model offers significant advantages over a fully centralized model in which all data is copied to a single workspace: ++- Flexible role assignment to the global and local SOCs, or to the MSSP and its customers. ++- Fewer challenges regarding data ownership, data privacy, and regulatory compliance. ++- Minimal network latency and charges. ++- Easy onboarding and offboarding of new subsidiaries or customers. ++In the following sections, we'll explain how to operate this model, and particularly how to: ++- Centrally monitor multiple workspaces, potentially across tenants, providing the SOC with a single pane of glass. ++- Centrally configure and manage multiple workspaces, potentially across tenants, using automation. ++## Next steps ++In this article, you learned how Microsoft Sentinel can extend across multiple workspaces and tenants. ++> [!div class="nextstepaction"] +>>[Prioritize data connectors](prioritize-data-connectors.md) |
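The table above points to table-level retention as one way to avoid splitting into multiple workspaces. A minimal sketch of setting per-table retention, assuming the `Update-AzOperationalInsightsTable` cmdlet available in recent `Az.OperationalInsights` releases; the resource names and retention value are placeholders:

```powershell
# Sketch: keep SecurityEvent for 180 days regardless of the workspace default.
Update-AzOperationalInsightsTable `
    -ResourceGroupName 'rg-sentinel' `
    -WorkspaceName 'law-sentinel' `
    -TableName 'SecurityEvent' `
    -RetentionInDays 180
```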
sentinel | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prerequisites.md | Title: Prerequisites for deploying Microsoft Sentinel + Title: Plan and prepare for your Microsoft Sentinel deployment description: Learn about pre-deployment activities and prerequisites for deploying Microsoft Sentinel. Previously updated : 01/09/2023 Last updated : 06/29/2023 +# Plan and prepare for your Microsoft Sentinel deployment -# Pre-deployment activities and prerequisites for deploying Microsoft Sentinel +This article introduces the activities and prerequisites that help you plan and prepare before deploying Microsoft Sentinel. -This article introduces the pre-deployment activities and prerequisites for deploying Microsoft Sentinel. --## Pre-deployment activities +The plan and prepare phase is typically performed by a SOC architect or related roles. Before deploying Microsoft Sentinel, we recommend taking the following steps to help focus your deployment on providing maximum value, as soon as possible. -1. Determine which [data sources](connect-data-sources.md) you need and the data size requirements to help you accurately project your deployment's budget and timeline. -- You might determine this information during your business use case review, or by evaluating a current SIEM that you already have in place. If you already have a SIEM in place, analyze your data to understand which data sources provide the most value and should be ingested into Microsoft Sentinel. --1. Design your Microsoft Sentinel workspace. Consider parameters such as: -- - Whether you'll use a single tenant or multiple tenants - - Any compliance requirements you have for data collection and storage - - How to control access to Microsoft Sentinel data -- For more information, see [Workspace architecture best practices](best-practices-workspace-architecture.md) and [Sample workspace designs](sample-workspace-designs.md). --1. After the business use cases, data sources, and data size requirements have been identified, [start planning your budget](billing.md), considering cost implications for each planned scenario. -- Make sure that your budget covers the cost of data ingestion for both Microsoft Sentinel and Azure Log Analytics, any playbooks that will be deployed, and so on. +## Plan and prepare overview - For more information, see: +| Step | Details | +| | - | +| **1. Plan and prepare overview and prerequisites** | **YOU ARE HERE**<br><br>Review the [Azure tenant prerequisites](#azure-tenant-prerequisites). | +| **2. Plan workspace architecture** | Design your Microsoft Sentinel workspace. Consider parameters such as:<br><br>- Whether you'll use a single tenant or multiple tenants<br>- Any compliance requirements you have for data collection and storage<br>- How to control access to Microsoft Sentinel data<br><br>Review these articles:<br><br>1. [Review best practices](best-practices-workspace-architecture.md)<br>2. [Design workspace architecture](design-your-workspace-architecture.md)<br>3. [Review sample workspace designs](sample-workspace-designs.md)<br>4. [Prepare for multiple workspaces](prepare-multiple-workspaces.md) | +| **3. [Prioritize data connectors](prioritize-data-connectors.md)** | Determine which data sources you need and the data size requirements to help you accurately project your deployment's budget and timeline.<br><br>You might determine this information during your business use case review, or by evaluating a current SIEM that you already have in place. 
If you already have a SIEM in place, analyze your data to understand which data sources provide the most value and should be ingested into Microsoft Sentinel. | +| **4. [Plan roles and permissions](roles.md)** |Use Azure role based access control (RBAC) to create and assign roles within your security operations team to grant appropriate access to Microsoft Sentinel. The different roles give you fine-grained control over what Microsoft Sentinel users can see and do. Azure roles can be assigned in the Microsoft Sentinel workspace directly, or in a subscription or resource group that the workspace belongs to, which Microsoft Sentinel inherits. | +| **5. [Plan costs](billing.md)** |Start planning your budget, considering cost implications for each planned scenario.<br><br> Make sure that your budget covers the cost of data ingestion for both Microsoft Sentinel and Azure Log Analytics, any playbooks that will be deployed, and so on. | - - [Microsoft Sentinel costs and billing](billing.md) - - [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel/) - - [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/) - - [Logic apps (playbooks) pricing](https://azure.microsoft.com/pricing/details/logic-apps/) - - [Integrating Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md) --1. Nominate an engineer or architect lead the deployment, based on requirements and timelines. This individual should lead the deployment and be the main point of contact on your team. --## Azure tenant requirements +## Azure tenant prerequisites Before deploying Microsoft Sentinel, make sure that your Azure tenant has the following requirements: Before deploying Microsoft Sentinel, make sure that your Azure tenant has the fo - A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md). Microsoft Sentinel doesn't support Log Analytics workspaces with a resource lock applied. -We recommend that when you set up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on. +- We recommend that when you set up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on. -A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Microsoft Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Microsoft Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions. 
-To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers. + A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Microsoft Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Microsoft Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions. ++ To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers. ## Next steps-> [!div class="nextstepaction"] -> >[On-board Microsoft Sentinel](quickstart-onboard.md) -> [!div class="nextstepaction"] -> >[Get visibility into alerts](get-visibility.md) +In this article, you reviewed the activities and prerequisites that help you plan and prepare before deploying Microsoft Sentinel. ++> [!div class="nextstepaction"] +> >[Review workspace architecture best practices](best-practices-workspace-architecture.md) |
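The prerequisites row above recommends a resource group dedicated to Microsoft Sentinel and the Log Analytics workspace it runs on. A minimal sketch of that setup in PowerShell; the names and region are placeholders, not from the article:

```powershell
# Sketch: a resource group dedicated to Microsoft Sentinel, plus the
# Log Analytics workspace it will use.
New-AzResourceGroup -Name 'rg-sentinel' -Location 'eastus'

New-AzOperationalInsightsWorkspace `
    -ResourceGroupName 'rg-sentinel' `
    -Name 'law-sentinel' `
    -Location 'eastus' `
    -Sku 'PerGB2018'
```

Assigning permissions once at this resource group then flows to every resource Microsoft Sentinel uses, which is the access pattern the row describes.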
sentinel | Prioritize Data Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prioritize-data-connectors.md | + + Title: Prioritize data connectors for Microsoft Sentinel +description: Learn how to plan and prioritize which data sources to use for your Microsoft Sentinel deployment. ++ Last updated : 06/29/2023+++++# Prioritize your data connectors for Microsoft Sentinel ++In this article, you learn how to plan and prioritize which data sources to use for your Microsoft Sentinel deployment. ++## Determine which connectors you need ++Check which data connectors are relevant to your environment, in the following order: ++1. Review this list of [free data connectors](billing.md#free-data-sources). The free data connectors will start showing value from Microsoft Sentinel as soon as possible, while you continue to plan other data connectors and budgets. +1. Review the [custom](create-custom-connector.md) data connectors. +1. Review the [partner](data-connectors-reference.md) data connectors. ++For the custom and partner connectors, we recommend that you start by setting up [CEF/Syslog](connect-cef-syslog-options.md) connectors, with the highest priority first, as well as any Linux-based devices. ++If your data ingestion becomes too expensive too quickly, stop or filter the logs forwarded using the [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md). ++> [!TIP] +> Custom data connectors enable you to ingest data into Microsoft Sentinel from data sources not currently supported by built-in functionality, such as via agent, Logstash, or API. For more information, see [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md). +> ++## Alternative data ingestion requirements ++If the standard configuration for data collection doesn't work well for your organization, review these possible [alternative solutions and considerations](best-practices-data.md#alternative-data-ingestion-requirements). ++## Filter your logs ++If you choose to filter your collected logs or log content before the data is ingested into Microsoft Sentinel, [review these best practices](best-practices-data.md#filter-your-logs-before-ingestion). ++## Next steps ++In this article, you learned how to prioritize data connectors to prepare for your Microsoft Sentinel deployment. ++> [!div class="nextstepaction"] +>>[Plan roles and permissions](roles.md) |
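The article above advises stopping or filtering logs when ingestion gets too expensive. To see which connectors are actually driving volume, you can rank billable ingestion by table. A minimal sketch, assuming the `Az.OperationalInsights` module and a placeholder workspace GUID:

```powershell
# Sketch: rank billable ingestion by table over 30 days; the Usage table
# reports Quantity in MB, so divide to get GB.
$kql = @"
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024 by DataType
| sort by IngestedGB desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $kql).Results
```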
sentinel | Review Fine Tune Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/review-fine-tune-overview.md | + + Title: Fine tune and review your Microsoft Sentinel deployment process and content +description: This article includes a checklist to help you fine tune and review your deployed content and deployment process. +++ Last updated : 07/05/2023++# Fine tune and review your Microsoft Sentinel deployment process and content ++In previous steps, you planned and prepared for your deployment, and then you enabled the Microsoft Sentinel solution and deployed key security content. In this article, you review a post-deployment checklist that helps you make sure that your deployment process is working as expected, and that the security content you deployed is working and protecting your organization according to your needs and use cases. ++The fine tune and review phase is typically performed by a SOC engineer or related roles. ++## Fine tune and review: Checklist for post-deployment ++|Step |Actions | +| | - | +|✅ **Review incidents and incident process** |- Check whether the incidents and the number of incidents you're seeing reflect what's actually happening in your environment.<br>- Check whether your SOC's incident process is working to efficiently handle incidents: Have you assigned different types of incidents to different layers/tiers of the SOC?<br><br>Learn more about how to [navigate and investigate](investigate-incidents.md) incidents and how to [work with incident tasks](work-with-tasks.md). | +|✅ **Review and fine-tune analytics rules** | - Based on your incident review, check whether your analytics rules are triggered as expected, and whether the rules reflect the types of incidents you're interested in.<br>- [Handle false positives](false-positives.md), either by using automation or by modifying scheduled analytics rules.<br>- Microsoft Sentinel provides built-in fine-tuning capabilities to help you analyze your analytics rules. [Review these built-in insights and implement relevant recommendations](detection-tuning.md). | +|✅ **Review automation rules and playbooks** |- Similar to analytics rules, check that your automation rules are working as expected, and reflect the incidents you're concerned about and are interested in.<br>- Check whether your playbooks are responding to alerts and incidents as expected. | +|✅ **Add data to watchlists** |Check that your watchlists are up to date. If any changes have occurred in your environment, such as new users or use cases, [update your watchlists accordingly](watchlists-manage.md). | +|✅ **Review commitment tiers** | [Review the commitment tiers](billing.md#analytics-logs) you initially set up, and verify that these tiers reflect your current configuration. | +|✅ **Keep track of ingestion costs** |To keep track of ingestion costs, use one of these workbooks:<br>- The [**Workspace Usage Report** workbook](billing-monitor-costs.md#deploy-a-workbook-to-visualize-data-ingestion) provides your workspace's data consumption, cost, and usage statistics. The workbook gives the workspace's data ingestion status and amount of free and billable data. You can use the workbook logic to monitor data ingestion and costs, and to build custom views and rule-based alerts.<br>- The **Microsoft Sentinel Cost** workbook gives a more focused view of Microsoft Sentinel costs, including ingestion and retention data, ingestion data for eligible data sources, Logic Apps billing information, and more.
| +|✅ **Fine-tune Data Collection Rules (DCRs)** |- Check that your [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md) reflect your data ingestion needs and use cases.<br>- If needed, [implement ingestion-time transformation](data-transformation.md#filtering) to filter out irrelevant data even before it's first stored in your workspace. | +|✅ **Check analytics rules against MITRE framework** |[Check your MITRE coverage in the Microsoft Sentinel MITRE page](mitre-coverage.md): View the detections already active in your workspace, and those available for you to configure, to understand your organization's security coverage, based on the tactics and techniques from the MITRE ATT&CK® framework. | +|✅ **Hunt for suspicious activity** |Make sure that your SOC has a process in place for [proactive threat hunting](hunts.md). Hunting is a process where security analysts seek out undetected threats and malicious behaviors. By creating a hypothesis, searching through data, and validating that hypothesis, they determine what to act on. Actions can include creating new detections, new threat intelligence, or spinning up a new incident. | ++## Next steps ++In this article, you reviewed a checklist of post-deployment steps. You've now finished your deployment of Microsoft Sentinel. ++To continue exploring Microsoft Sentinel capabilities, review these tutorials with common Microsoft Sentinel tasks: ++- [Forward Syslog data to a Log Analytics workspace with Microsoft Sentinel by using Azure Monitor Agent](forward-syslog-monitor-agent.md) +- [Configure data retention policy](configure-data-retention.md) +- [Detect threats using analytics rules](tutorial-log4j-detection.md) +- [Automatically check and record IP address reputation information in incidents](tutorial-enrich-ip-information.md) +- [Respond to threats using automation](tutorial-respond-threats-playbook.md) +- [Extract incident entities with non-native action](tutorial-extract-incident-entities.md) +- [Investigate with UEBA](investigate-with-ueba.md) +- [Build and monitor Zero Trust](sentinel-solution.md) |
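The checklist's DCR step mentions filtering irrelevant data with an ingestion-time transformation before it's stored. A minimal sketch of what such a transformation could look like; the event IDs are illustrative, not a recommendation from the checklist:

```powershell
# Sketch: a KQL transformation that keeps only selected Windows events. The
# string would be set as the dataFlow's transformKql property in the data
# collection rule definition you deploy (for example via an ARM/Bicep template).
$transformKql = 'source | where EventID in (4624, 4625, 4672, 4688)'
```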
sentinel | Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md | After understanding how roles and permissions work in Microsoft Sentinel, you ca > More roles may be required depending on the data you ingest or monitor. For example, Azure AD roles may be required, such as the global admin or security admin roles, to set up data connectors for services in other Microsoft portals. > +## Resource-based access control ++You may have some users who need to access only specific data in your Microsoft Sentinel workspace, but shouldn't have access to the entire Microsoft Sentinel environment. For example, you may want to provide a non-security operations (non-SOC) team with access to the Windows event data for the servers they own. ++In such cases, we recommend that you configure your role-based access control (RBAC) based on the resources your users are allowed to access, instead of providing them with access to the Microsoft Sentinel workspace or specific Microsoft Sentinel features. This method is also known as setting up resource-context RBAC. [Learn more about RBAC](resource-context-rbac.md). + ## Next steps In this article, you learned how to work with roles for Microsoft Sentinel users and what each role enables users to do. -Find blog posts about Azure security and compliance at the [Microsoft Sentinel Blog](https://aka.ms/azuresentinelblog). +> [!div class="nextstepaction"] +> >[Plan costs](billing.md) |
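One plausible way to express the resource-context RBAC pattern described above is a role assignment scoped to the resources themselves rather than to the workspace. A minimal sketch, with the group object ID and scope as placeholders:

```powershell
# Sketch: grant the non-SOC team read access at a resource-group scope; with
# resource-context RBAC, read access on the resources is what lets the team
# query those resources' logs without workspace-wide permissions.
New-AzRoleAssignment `
    -ObjectId '<team-group-object-id>' `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/rg-app-servers'
```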
sentinel | Sample Workspace Designs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sample-workspace-designs.md | Contoso expects to ingest around 300 GB/day from all of their data sources. ### Contoso access requirements -Contoso's Azure environment already has a single existing Log Analytics workspace used by the Operations team to monitor the infrastructure. This workspace is located in Contoso AAD tenant, within EU North region, and is being used to collect logs from Azure VMs in all regions. They currently ingest around 50 GB/day. +Contoso's Azure environment already has a single existing Log Analytics workspace used by the Operations team to monitor the infrastructure. This workspace is located in the Contoso Azure AD tenant, within EU North region, and is being used to collect logs from Azure VMs in all regions. They currently ingest around 50 GB/day. -The Contoso Operations team needs to have access to all the logs that they currently have in the workspace, which include several data types not needed by the SOC, such as **Perf**, **InsightsMetrics**, **ContainerLog**, and more. The Operations team must *not* have access to the new logs that will be collected in Microsoft Sentinel. +The Contoso Operations team needs to have access to all the logs that they currently have in the workspace, which include several data types not needed by the SOC, such as **Perf**, **InsightsMetrics**, **ContainerLog**, and more. The Operations team must *not* have access to the new logs that are collected in Microsoft Sentinel. ### Contoso's solution The following steps apply the [Microsoft Sentinel workspace design decision tree 1. Contoso has two different Azure AD tenants, and collects from tenant-level data sources, like Office 365 and Azure AD Sign-in and Audit logs, so we need at least one workspace per tenant. -1. Contoso does not need [charge-back](design-your-workspace-architecture.md#step-4-splitting-billing--charge-back), so we can continue with [step 5](design-your-workspace-architecture.md#step-5-collecting-any-non-soc-data). +1. Contoso doesn't need [charge-back](design-your-workspace-architecture.md#step-4-splitting-billing--charge-back), so we can continue with [step 5](design-your-workspace-architecture.md#step-5-collecting-any-non-soc-data). 1. Contoso does need to collect non-SOC data, although there isn't any overlap between SOC and non-SOC data. Also, SOC data accounts for approximately 250 GB/day, so they should use separate workspaces for the sake of cost efficiency. -1. The majority of Contoso's VMs are the EU North region, where they already have a workspace. Therefore, in this case, bandwidth costs are not a concern. +1. Most of Contoso's VMs are in the EU North region, where they already have a workspace. Therefore, in this case, bandwidth costs aren't a concern. 1. Contoso has a single SOC team that will be using Microsoft Sentinel, so no extra separation is needed. Fabrikam has a single Azure AD tenant. ### Fabrikam compliance and regional deployment -Fabrikam has no compliance requirements. Fabrikam has resources in several Azure regions located in the US, but bandwidth costs across regions is not a major concern. +Fabrikam has no compliance requirements. Fabrikam has resources in several Azure regions located in the US, but bandwidth costs across regions aren't a major concern.
### Fabrikam resource types and collection requirements The Fabrikam Operations team needs to access: - All Azure Activity data The Fabrikam SOC team needs to access:-- Azure AD Signin and Audit logs+- Azure AD Sign-in and Audit logs - All Azure Activity data - Security events, from both on-premises and Azure VM sources - AWS CloudTrail logs The following steps apply the [Microsoft Sentinel workspace design decision tree 1. Fabrikam will need separate workspaces for their SOC and Operations teams: - The Fabrikam Operations team needs to collect performance data, from both VMs and AKS. Since AKS is based on diagnostic settings, they can select specific logs to send to specific workspaces. Fabrikam can choose to send AKS audit logs to the Microsoft Sentinel workspace, and all AKS logs to a separate workspace, where Microsoft Sentinel is not enabled. In the workspace where Microsoft Sentinel is not enabled, Fabrikam will enable the Container Insights solution. + The Fabrikam Operations team needs to collect performance data, from both VMs and AKS. Since AKS is based on diagnostic settings, they can select specific logs to send to specific workspaces. Fabrikam can choose to send AKS audit logs to the Microsoft Sentinel workspace, and all AKS logs to a separate workspace, where Microsoft Sentinel isn't enabled. In the workspace where Microsoft Sentinel isn't enabled, Fabrikam will enable the Container Insights solution. For Windows VMs, Fabrikam can use the [Azure Monitoring Agent (AMA)](connect-windows-security-events.md#connector-options) to split the logs, sending security events to the Microsoft Sentinel workspace, and performance and Windows events to the workspace without Microsoft Sentinel. Fabrikam chooses to consider their overlapping data, such as security events and Azure activity events, as SOC data only, and sends this data to the workspace with Microsoft Sentinel. -1. Bandwidth costs are not a major concern for Fabrikam, so continue with [step 7](design-your-workspace-architecture.md#step-7-segregating-data-or-defining-boundaries-by-ownership). +1. Bandwidth costs aren't a major concern for Fabrikam, so continue with [step 7](design-your-workspace-architecture.md#step-7-segregating-data-or-defining-boundaries-by-ownership). 1. Fabrikam has already decided to use separate workspaces for the SOC and Operations teams. No further separation is needed. -1. Fabrikam does need to control access for overlapping data, including security events and Azure activity events, but there is no row-level requirement. +1. Fabrikam does need to control access for overlapping data, including security events and Azure activity events, but there's no row-level requirement. - Neither security events nor Azure activity events are custom logs, so Fabrikam can use table-level RBAC to grant access to these two tables for the Operations team. + Security events and Azure activity events aren't custom logs, so Fabrikam can use table-level RBAC to grant access to these two tables for the Operations team. The resulting Microsoft Sentinel workspace design for Fabrikam is illustrated in the following image, including only key log sources for the sake of design simplicity: The suggested solution includes: ## Sample 3: Multiple tenants and regions and centralized security -Adventure Works is a multinational company with headquarters in Tokyo. Adventure Works has 10 different sub-entities ,based in different countries/regions around the world. +Adventure Works is a multinational company with headquarters in Tokyo. 
Adventure Works has 10 different sub-entities, based in different countries/regions around the world. Adventure Works is a Microsoft 365 E5 customer, and already has workloads in Azure. Adventure Works needs to collect the following data sources for each sub-entity: - Security and Windows Events from Azure VMs - CEF logs from on-premises network devices -Azure VMs are scattered across the three continents, but bandwidth costs are not a concern. +Azure VMs are scattered across the three continents, but bandwidth costs aren't a concern. ### Adventure Works access requirements Adventure Works has a single, centralized SOC team that oversees security operations for all the different sub-entities. -Adventure Works also has three independent SOC teams, one for each of the continents. Each continent's SOC team should be able to access only the data generated within its region, without seeing data from other continents. For example, the Asia SOC team should only access data from Azure resources deployed in Asia, AAD Sign-ins from the Asia tenant, and Defender for Endpoint logs from it's the Asia tenant. +Adventure Works also has three independent SOC teams, one for each of the continents. Each continent's SOC team should be able to access only the data generated within its region, without seeing data from other continents. For example, the Asia SOC team should only access data from Azure resources deployed in Asia, Azure AD Sign-ins from the Asia tenant, and Defender for Endpoint logs from its Asia tenant. Each continent's SOC team needs to access the full Microsoft Sentinel portal experience. Adventure Works' Operations team runs independently, and has its own workspaces The following steps apply the [Microsoft Sentinel workspace design decision tree](design-your-workspace-architecture.md) to determine the best workspace design for Adventure Works: -1. Adventure Works' Operations team has it's own workspaces, so continue to [step 2](design-your-workspace-architecture.md#step-2-keeping-data-in-different-azure-geographies). +1. Adventure Works' Operations team has its own workspaces, so continue to [step 2](design-your-workspace-architecture.md#step-2-keeping-data-in-different-azure-geographies). 1. Adventure Works has no regulatory requirements, so continue to [step 3](design-your-workspace-architecture.md#step-3-do-you-have-multiple-azure-tenants). The following steps apply the [Microsoft Sentinel workspace design decision tree 1. Since Adventure Works' Operations team has its own workspaces, all data considered in this decision will be used by the Adventure Works SOC team. -1. Bandwidth costs are not a major concern for Adventure Works, so continue with [step 7](design-your-workspace-architecture.md#step-7-segregating-data-or-defining-boundaries-by-ownership). +1. Bandwidth costs aren't a major concern for Adventure Works, so continue with [step 7](design-your-workspace-architecture.md#step-7-segregating-data-or-defining-boundaries-by-ownership). 1. Adventure Works does need to segregate data by ownership, as each continent's SOC team needs to access only data that is relevant to that continent. However, each continent's SOC team also needs access to the full Microsoft Sentinel portal. -1. Adventure Works does not need to control data access by table. +1. Adventure Works doesn't need to control data access by table.
The resulting Microsoft Sentinel workspace design for Adventure Works is illustrated in the following image, including only key log sources for the sake of design simplicity: The suggested solution includes: The suggested solution includes: - Each continent's SOC team has access only to the workspace in its own tenant, ensuring that only logs generated within the tenant boundary are accessible by each SOC team. -- The central SOC team can still operate from a separate Azure AD tenant, using Azure Lighthouse to access each of the different Microsoft Sentinel environments. If there is no additional tenant, the central SOC team can still use Azure Lighthouse to access the remote workspaces.--- The central SOC team can also create an additional workspace if it needs to store artifacts that remain hidden from the continent SOC teams, or if it wants to ingest other data that is not relevant to the continent SOC teams.-+- The central SOC team can still operate from a separate Azure AD tenant, using Azure Lighthouse to access each of the different Microsoft Sentinel environments. If there's no other tenant, the central SOC team can still use Azure Lighthouse to access the remote workspaces. +- The central SOC team can also create another workspace if it needs to store artifacts that remain hidden from the continent SOC teams, or if it wants to ingest other data that isn't relevant to the continent SOC teams. ## Next steps -> [!div class="nextstepaction"] ->[On-board Microsoft Sentinel](quickstart-onboard.md) +In this article, you reviewed a set of suggested workspace designs for organizations. > [!div class="nextstepaction"]->[Get visibility into alerts](get-visibility.md) +>>[Prepare for multiple workspaces](prepare-multiple-workspaces.md) |
sentinel | Use Multiple Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-multiple-workspaces.md | + + Title: Set up multiple workspaces and tenants in Microsoft Sentinel +description: If you've defined that your environment needs multiple workspaces, you now set up your multiple workspace architecture in Microsoft Sentinel. ++ Last updated : 07/05/2023++#Customer intent: As a SOC architect, I want to learn about how Microsoft Sentinel can extend across workspaces so I can determine whether I need this capability and prepare accordingly. +++# Set up multiple workspaces and tenants in Microsoft Sentinel ++When you planned your deployment, you [determined whether a multiple workspace architecture is relevant for your environment](prepare-multiple-workspaces.md). If your environment requires multiple workspaces, you can now set them up as part of your deployment. In this article, you learn how to set up Microsoft Sentinel to extend across multiple workspaces and tenants. ++## Options for using multiple workspaces ++If you've determined and set up your environment to extend across workspaces, you can: ++- [Manage and monitor cross-workspace architecture](extend-sentinel-across-workspaces-tenants.md): Query and analyze your data across workspaces and tenants. +- [Manage multiple workspaces with workspace manager](workspace-manager.md): Centrally manage multiple workspaces within one or more Azure tenants. ++## Next steps ++In this article, you learned how to set up Microsoft Sentinel to extend across multiple workspaces and tenants. ++> [!div class="nextstepaction"] +>>[Enable User and Entity Behavior Analytics (UEBA)](enable-entity-behavior-analytics.md) |
storage | Storage Account Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md | Every Resource Manager resource, including an Azure storage account, must belong To create an Azure storage account with the Azure portal, follow these steps: -1. From the left portal menu, select **Storage accounts** to display a list of your storage accounts. If the portal menu isn't visible, click the menu button to toggle it on. +1. From the left portal menu, select **Storage accounts** to display a list of your storage accounts. If the portal menu isn't visible, select the menu button to toggle it on. :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure portal homepage showing the location of the Menu button near the top left corner of the browser." lightbox="media/storage-account-create/menu-expand-lrg.png"::: If you try to delete a storage account associated with an Azure virtual machine, # [Portal](#tab/azure-portal) 1. Navigate to the storage account in the [Azure portal](https://portal.azure.com).-1. Click **Delete**. +1. Select **Delete**. # [PowerShell](#tab/azure-powershell) |
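The row above covers creating and deleting a storage account in the portal; the same article offers PowerShell tabs. A minimal sketch of those operations, with placeholder resource names and region (storage account names must be globally unique):

```powershell
# Sketch: create, and later delete, a general-purpose v2 storage account.
New-AzStorageAccount `
    -ResourceGroupName 'storage-rg' `
    -Name 'mystorageacct01' `
    -Location 'eastus' `
    -SkuName 'Standard_LRS' `
    -Kind 'StorageV2'

# Deleting the account permanently removes the account and all of its data.
Remove-AzStorageAccount -ResourceGroupName 'storage-rg' -Name 'mystorageacct01'
```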
storage | Storage Files Identity Ad Ds Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md | Set-AzStorageAccount ` To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4 encryption, skip this section. > [!IMPORTANT]-> In order to enable AES-256 encryption, the domain object that represents your storage account must be a computer account (default) or service logon account in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does. +> In order to enable AES-256 encryption, the domain object that represents your storage account must be a computer account (default) or service logon account in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does. Also, you must have write access to the `msDS-SupportedEncryptionTypes` attribute of the object. The cmdlet you'll run to configure AES-256 support depends on whether the domain object that represents your storage account is a computer account or service logon account (user account). Either way, you must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges. |
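The row above notes that the AES-256 configuration cmdlet depends on whether the domain object is a computer account or a service logon account, and that you need write access to `msDS-SupportedEncryptionTypes`. A minimal sketch of the pattern with the AD PowerShell module; the identity value is a placeholder:

```powershell
# Sketch: enable AES-256 Kerberos tickets on the domain object that represents
# the storage account. -KerberosEncryptionType updates the object's
# msDS-SupportedEncryptionTypes attribute, so write access to it is required.
Import-Module ActiveDirectory

# If the domain object is a computer account (the default):
Set-ADComputer -Identity '<storage-account-name>' -KerberosEncryptionType 'AES256'

# If it's a service logon (user) account, use Set-ADUser instead:
# Set-ADUser -Identity '<storage-account-name>' -KerberosEncryptionType 'AES256'
```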
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | There are several limitations with the virtual hub router upgrade If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup. ->[!NOTE] -> The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version. -> If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label). +Additional things to note: +* The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version. ++* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label). ### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway? |