Updates from: 01/20/2022 02:05:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
Now that you've obtained the SPA sample, update the code with your Azure AD B2C
|File|Key|Value|
|---|---|---|
|authConfig.js|clientId|The SPA ID from [step 2.3](#step-23-register-the-spa).|
|policies.js|names|The user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
-|policies.js|authorities|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`). Then, replace with the user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow) (for example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`).|
-|policies.js|authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`).|
+|policies.js|authorities|Your Azure AD B2C user flow or custom policy authorities, such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. Replace `your-sign-in-sign-up-policy` with the user flow or custom policy you created in [step 1](#sign-in-flow).|
+|policies.js|authorityDomain|Your Azure AD B2C authority domain, such as `<your-tenant-name>.b2clogin.com`.|
|apiConfig.js|b2cScopes|The web API scopes you created in [step 2.2](#step-22-configure-scopes) (for example, `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`).|
-|apiConfig.js|webApi|The URL of the web API, `http://localhost:5000/tasks`.|
+|apiConfig.js|webApi|The URL of the web API, `http://localhost:5000/hello`.|
Your resulting code should look similar to the following sample:

```javascript
const b2cPolicies = {
  // ...
};

const apiConfig = {
  b2cScopes: ["https://your-tenant-name.onmicrosoft.com/tasks-api/tasks.read"],
  webApi: "http://localhost:5000/hello"
};
```
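The `b2cPolicies` object is truncated in the excerpt above. Purely as an illustration — the user-flow name, tenant name, client ID, and redirect URI below are placeholders, not values taken from this article — a filled-in *policies.js*/*authConfig.js* pair typically ties the table values together like this:

```javascript
// Illustrative sketch only; every value here is a placeholder.
const b2cPolicies = {
  names: {
    signUpSignIn: "B2C_1_signupsignin", // hypothetical user-flow name from step 1
  },
  authorities: {
    signUpSignIn: {
      authority:
        "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/B2C_1_signupsignin",
    },
  },
  authorityDomain: "your-tenant-name.b2clogin.com",
};

// The SPA's MSAL configuration (authConfig.js) then consumes these values:
const msalConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // SPA app ID from step 2.3
    authority: b2cPolicies.authorities.signUpSignIn.authority,
    knownAuthorities: [b2cPolicies.authorityDomain],
    redirectUri: "http://localhost:6420", // whichever redirect URI you registered
  },
};
```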
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
- Title: Azure Services that support managed identities - Azure AD
-description: List of services that support managed identities for Azure resources and Azure AD authentication
-Previously updated : 11/09/2021
-# Services that support managed identities for Azure resources
-
-Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. Using a managed identity, you can authenticate to any service that supports Azure AD authentication without having credentials in your code. We are in the process of integrating managed identities for Azure resources and Azure AD authentication across Azure. Check back often for updates.
-
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
-
-> [!NOTE]
-> Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI).
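As a brief, illustrative aside (not part of the article diff): the "no credentials in your code" pattern usually reduces to asking the resource's identity for a token and passing it to the target service. A minimal Node.js sketch using the `@azure/identity` package, assuming the code runs on an Azure resource that has a managed identity enabled:

```javascript
// Minimal sketch: acquire a token with the resource's managed identity.
// Assumes @azure/identity is installed and the process runs on an Azure
// resource (VM, App Service, Functions, ...) with a managed identity.
const { DefaultAzureCredential } = require("@azure/identity");

async function main() {
  const credential = new DefaultAzureCredential();
  // Scope for Azure Resource Manager; other services use the resource IDs
  // listed later in this article, suffixed with "/.default".
  const token = await credential.getToken("https://management.azure.com/.default");
  console.log(`Token expires at ${new Date(token.expiresOnTimestamp).toISOString()}`);
}

main().catch(console.error);
```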
--
-## Azure services that support managed identities for Azure resources
-
-The following Azure services support managed identities for Azure resources:
-
-### Azure API Management
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure API Management (in regions where available):
-- [Azure Resource Manager template](../../api-management/api-management-howto-use-managed-service-identity.md)
-### Azure App Configuration
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not Available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not Available | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure App Configuration (in regions where available):
-- [Azure CLI](../../azure-app-configuration/overview-managed-identity.md)
-### Azure App Service
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure App Service (in regions where available):
-- [Azure portal](../../app-service/overview-managed-identity.md#using-the-azure-portal)
-- [Azure CLI](../../app-service/overview-managed-identity.md#using-the-azure-cli)
-- [Azure PowerShell](../../app-service/overview-managed-identity.md#using-azure-powershell)
-- [Azure Resource Manager template](../../app-service/overview-managed-identity.md#using-an-azure-resource-manager-template)
-### Azure Arc-enabled Kubernetes
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Azure Arc-enabled Kubernetes currently [supports system assigned identity](../../azure-arc/kubernetes/quickstart-connect-cluster.md). The managed service identity certificate is used by all Azure Arc-enabled Kubernetes agents for communication with Azure.
-
-### Azure Arc-enabled servers
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-All Azure Arc-enabled servers have a system assigned identity. You cannot disable or change the system assigned identity on an Azure Arc-enabled server. Refer to the following resources to learn more about how to consume managed identities on Azure Arc-enabled servers:
-- [Authenticate against Azure resources with Azure Arc-enabled servers](../../azure-arc/servers/managed-identity-authentication.md)
-- [Using a managed identity with Azure Arc-enabled servers](../../azure-arc/servers/security-overview.md#using-a-managed-identity-with-azure-arc-enabled-servers)
-### Azure Arc resource bridge
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Azure Arc resource bridge currently [supports system assigned identity](../../azure-arc/resource-bridge/security-overview.md). The managed service identity is used by agents in the resource bridge for communication with Azure.
-
-### Azure Automanage
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Refer to the following document to reconfigure a managed identity if you have moved your subscription to a new tenant:
-
-* [Repair a broken Automanage Account](../../automanage/repair-automanage-account.md)
-
-### Azure Automation
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][Check]| ![Available][Check] | Not available | ![Available][Check] |
-| User assigned | ![Available][Check] | ![Available][Check] | Not available | ![Available][Check] |
-
-Refer to the following documents to use managed identity with [Azure Automation](../../automation/automation-intro.md):
-
-* [Automation account authentication overview - Managed identities](../../automation/automation-security-overview.md#managed-identities)
-* [Enable and use managed identity for Automation](../../automation/enable-managed-identity-for-automation.md)
-
-### Azure Blueprints
-
-|Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
-| User assigned | ![Available][check] | ![Available][check] | Not available | Not available |
-
-Refer to the following list to use a managed identity with [Azure Blueprints](../../governance/blueprints/overview.md):
-- [Azure portal - blueprint assignment](../../governance/blueprints/create-blueprint-portal.md#assign-a-blueprint)
-- [REST API - blueprint assignment](../../governance/blueprints/create-blueprint-rest-api.md#assign-a-blueprint)
-### Azure Cognitive Search
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
-
-### Azure Cognitive Services
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
-
-### Azure Container Instances
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Linux: Preview<br>Windows: Not available | Not available | Not available | Not available |
-| User assigned | Linux: Preview<br>Windows: Not available | Not available | Not available | Not available |
-
-Refer to the following list to configure managed identity for Azure Container Instances (in regions where available):
-- [Azure CLI](~/articles/container-instances/container-instances-managed-identity.md)
-- [Azure Resource Manager template](~/articles/container-instances/container-instances-managed-identity.md#enable-managed-identity-using-resource-manager-template)
-- [YAML](~/articles/container-instances/container-instances-managed-identity.md#enable-managed-identity-using-yaml-file)
-### Azure Container Registry Tasks
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Preview | Not available | Preview |
-| User assigned | Preview | Preview | Not available | Preview |
-
-Refer to the following list to configure managed identity for Azure Container Registry Tasks (in regions where available):
-- [Azure CLI](~/articles/container-registry/container-registry-tasks-authentication-managed-identity.md)
-### Azure Data Explorer
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
-
-### Azure Data Factory V2
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Refer to the following list to configure managed identity for Azure Data Factory V2 (in regions where available):
-- [Azure portal](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity)
-### Azure Digital Twins
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Refer to the following list to configure managed identity for Azure Digital Twins (in regions where available):
-- [Azure portal](../../digital-twins/how-to-route-with-managed-identity.md)
-### Azure Event Grid
-
-Managed identity type |All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Preview | Not available | Preview |
-| User assigned | Preview | Preview | Not available | Preview |
-
-### Azure Firewall Policy
-
-Managed identity type |All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Not available | Not available | Not available | Not available |
-| User assigned | Preview | Not available | Not available | Not available |
-
-### Azure Functions
-
-Managed identity type |All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure Functions (in regions where available):
-- [Azure portal](../../app-service/overview-managed-identity.md#using-the-azure-portal)
-- [Azure CLI](../../app-service/overview-managed-identity.md#using-the-azure-cli)
-- [Azure PowerShell](../../app-service/overview-managed-identity.md#using-azure-powershell)
-- [Azure Resource Manager template](../../app-service/overview-managed-identity.md#using-an-azure-resource-manager-template)
-### Azure IoT Hub
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | Not available | Not available | Not available |
-
-Refer to the following list to configure managed identity for Azure IoT Hub (in regions where available):
-- For more information, please see [Azure IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md).
-### Azure Import/Export
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | | | | |
-| System assigned | Available in the region where Azure Import Export service is available | Preview | Available | Available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-### Azure Kubernetes Service (AKS)
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
-| User assigned | Preview | ![Available][check] | Not available | Not available |
-
-For more information, see [Use managed identities in Azure Kubernetes Service](../../aks/use-managed-identity.md).
-
-### Azure Log Analytics cluster
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-
-For more information, see [how identity works in Azure Monitor](../../azure-monitor/logs/customer-managed-keys.md)
-
-### Azure Logic Apps
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure Logic Apps (in regions where available):
-- [Azure portal](../../logic-apps/create-managed-service-identity.md#enable-system-assigned-identity-in-azure-portal)
-- [Azure Resource Manager template](../../logic-apps/logic-apps-azure-resource-manager-templates-overview.md)
-### Azure Machine Learning
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Not Available | Not available | Not available |
-| User assigned | Preview | Not available | Not available | Not available |
-
-For more information, see [Use managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md).
-
-### Azure Maps
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Preview | Not available | Not available |
-| User assigned | Preview | Preview | Not available | Not available |
-
-For more information, see [Authentication on Azure Maps](../../azure-maps/azure-maps-authentication.md).
--
-### Azure Media Services
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not Available | ![Available][check] |
-| User assigned | Not Available | Not Available | Not Available | Not Available |
-
-Refer to the following list to configure managed identity for Azure Media Services (in regions where available):
-- [Azure CLI](../../media-services/latest/security-access-storage-managed-identity-cli-tutorial.md)
-### Azure Policy
-
-|Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Refer to the following list to configure managed identity for Azure Policy (in regions where available):
-- [Azure portal](../../governance/policy/tutorials/create-and-manage.md#assign-a-policy)
-- [PowerShell](../../governance/policy/how-to/remediate-resources.md#create-managed-identity-with-powershell)
-- [Azure CLI](/cli/azure/policy/assignment#az_policy_assignment_create)
-- [Azure Resource Manager templates](/azure/templates/microsoft.authorization/policyassignments)
-- [REST](/rest/api/policy/policyassignments/create)
-### Azure Service Fabric
-
-[Managed Identity for Service Fabric Applications](../../service-fabric/concepts-managed-identity.md) is available in all regions.
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Not Available | Not Available | Not Available |
-| User assigned | ![Available][check] | Not Available | Not Available |Not Available |
-
-Refer to the following list to configure managed identity for Azure Service Fabric applications in all regions:
-- [Azure Resource Manager template](https://github.com/Azure-Samples/service-fabric-managed-identity/tree/anmenard-docs)
-### Azure Spring Cloud
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Not Available | Not Available | ![Available][check] |
-| User assigned | Not Available | Not Available | Not Available | Not Available |
--
-For more information, see [How to enable system-assigned managed identity for applications in Azure Spring Cloud](../../spring-cloud/how-to-enable-system-assigned-managed-identity.md).
-
-### Azure Stack Edge
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | | | | |
-| System assigned | Available in the region where Azure Stack Edge service is available | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-### Azure Virtual Machine Scale Sets
-
-|Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure Virtual Machine Scale Sets (in regions where available):
-- [Azure portal](qs-configure-portal-windows-vm.md)
-- [PowerShell](qs-configure-powershell-windows-vm.md)
-- [Azure CLI](qs-configure-cli-windows-vm.md)
-- [Azure Resource Manager templates](qs-configure-template-windows-vm.md)
-- [REST](qs-configure-rest-vm.md)
-### Azure Virtual Machines
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-| User assigned | ![Available][check] | ![Available][check] | ![Available][check] | ![Available][check] |
-
-Refer to the following list to configure managed identity for Azure Virtual Machines (in regions where available):
-- [Azure portal](qs-configure-portal-windows-vm.md)
-- [PowerShell](qs-configure-powershell-windows-vm.md)
-- [Azure CLI](qs-configure-cli-windows-vm.md)
-- [Azure Resource Manager templates](qs-configure-template-windows-vm.md)
-- [REST](qs-configure-rest-vm.md)
-- [Azure SDKs](qs-configure-sdk-windows-vm.md)
-### Azure VM Image Builder
-
-| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Not Available | Not Available | Not Available | Not Available |
-| User assigned | [Available in supported regions](../../virtual-machines/image-builder-overview.md#regions) | Not Available | Not Available | Not Available |
-
-To learn how to configure managed identity for Azure VM Image Builder (in regions where available), see the [Image Builder overview](../../virtual-machines/image-builder-overview.md#permissions).
-### Azure SignalR Service
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Preview | Preview | Not available | Preview |
-| User assigned | Preview | Preview | Not available | Preview |
-
-Refer to the following list to configure managed identity for Azure SignalR Service (in regions where available):
-- [Azure Resource Manager template](../../azure-signalr/howto-use-managed-identity.md)
-### Azure Resource Mover
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | Available in the regions where Azure Resource Mover service is available | Not available | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
-
-Refer to the following document to use Azure Resource Mover:
-- [Azure Resource Mover](../../resource-mover/overview.md)
-## Azure services that support Azure AD authentication
-
-The following services support Azure AD authentication, and have been tested with client services that use managed identities for Azure resources.
-
-### Azure Resource Manager
-
-Refer to the following list to configure access to Azure Resource Manager:
-- [Assign access via Azure portal](howto-assign-access-portal.md)
-- [Assign access via PowerShell](howto-assign-access-powershell.md)
-- [Assign access via Azure CLI](howto-assign-access-CLI.md)
-- [Assign access via Azure Resource Manager template](../../role-based-access-control/role-assignments-template.md)
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://management.azure.com/`| ![Available][check] |
-| Azure Government | `https://management.usgovcloudapi.net/` | ![Available][check] |
-| Azure Germany | `https://management.microsoftazure.de/` | ![Available][check] |
-| Azure China 21Vianet | `https://management.chinacloudapi.cn` | ![Available][check] |
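The resource IDs in this table (and the tables that follow) are the values a client passes when requesting a token. As an illustration only — this snippet is not from the article — a process on an Azure VM with a managed identity can request such a token from the Instance Metadata Service endpoint (Node.js 18+, which provides a global `fetch`):

```javascript
// Illustrative only: request a token from the Azure Instance Metadata Service
// (IMDS) on a VM that has a managed identity enabled.
const resource = "https://management.azure.com/"; // a resource ID from the table above

const url =
  "http://169.254.169.254/metadata/identity/oauth2/token" +
  `?api-version=2018-02-01&resource=${encodeURIComponent(resource)}`;

fetch(url, { headers: { Metadata: "true" } })
  .then((res) => res.json())
  .then(({ access_token, expires_on }) => {
    console.log(`Received a token that expires at ${expires_on}`);
    // Send access_token as a bearer token to Azure Resource Manager.
  })
  .catch(console.error);
```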
-
-### Azure Key Vault
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://vault.azure.net`| ![Available][check] |
-| Azure Government | `https://vault.usgovcloudapi.net` | ![Available][check] |
-| Azure Germany | `https://vault.microsoftazure.de` | ![Available][check] |
-| Azure China 21Vianet | `https://vault.azure.cn` | ![Available][check] |
-
-### Azure Data Lake
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://datalake.azure.net/` | ![Available][check] |
-| Azure Government | | Not Available |
-| Azure Germany | | Not Available |
-| Azure China 21Vianet | | Not Available |
-
-### Azure Cosmos DB
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://<account>.documents.azure.com/`<br/><br/>`https://cosmos.azure.com` | ![Available][check] |
-| Azure Government | `https://<account>.documents.azure.us/`<br/><br/>`https://cosmos.azure.us` | ![Available][check] |
-| Azure Germany | `https://<account>.documents.microsoftazure.de/`<br/><br/>`https://cosmos.microsoftazure.de` | ![Available][check] |
-| Azure China 21Vianet | `https://<account>.documents.azure.cn/`<br/><br/>`https://cosmos.azure.cn` | ![Available][check] |
-
-### Azure SQL
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://database.windows.net/` | ![Available][check] |
-| Azure Government | `https://database.usgovcloudapi.net/` | ![Available][check] |
-| Azure Germany | `https://database.cloudapi.de/` | ![Available][check] |
-| Azure China 21Vianet | `https://database.chinacloudapi.cn/` | ![Available][check] |
-
-### Azure Data Explorer
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://<account>.<region>.kusto.windows.net` | ![Available][check] |
-| Azure Government | `https://<account>.<region>.kusto.usgovcloudapi.net` | ![Available][check] |
-| Azure Germany | `https://<account>.<region>.kusto.cloudapi.de` | ![Available][check] |
-| Azure China 21Vianet | `https://<account>.<region>.kusto.chinacloudapi.cn` | ![Available][check] |
-
-### Azure Event Hubs
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://eventhubs.azure.net` | ![Available][check] |
-| Azure Government | `https://eventhubs.azure.net` | ![Available][check] |
-| Azure Germany | `https://eventhubs.azure.net` | ![Available][check] |
-| Azure China 21Vianet | `https://eventhubs.azure.net` | ![Available][check] |
-
-### Azure Service Bus
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://servicebus.azure.net` | ![Available][check] |
-| Azure Government | `https://servicebus.azure.net` | ![Available][check] |
-| Azure Germany | `https://servicebus.azure.net` | ![Available][check] |
-| Azure China 21Vianet | `https://servicebus.azure.net` | ![Available][check] |
-
-### Azure Storage blobs, queues, and tables
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://storage.azure.com/` <br /><br />`https://<account>.blob.core.windows.net` <br /><br />`https://<account>.queue.core.windows.net` <br /><br />`https://<account>.table.core.windows.net`| ![Available][check] |
-| Azure Government | `https://storage.azure.com/`<br /><br />`https://<account>.blob.core.usgovcloudapi.net` <br /><br />`https://<account>.queue.core.usgovcloudapi.net` <br /><br />`https://<account>.table.core.usgovcloudapi.net`| ![Available][check] |
-| Azure Germany | `https://storage.azure.com/`<br /><br />`https://<account>.blob.core.cloudapi.de` <br /><br />`https://<account>.queue.core.cloudapi.de` <br /><br />`https://<account>.table.core.cloudapi.de`| ![Available][check] |
-| Azure China 21Vianet | `https://storage.azure.com/`<br /><br />`https://<account>.blob.core.chinacloudapi.cn` <br /><br />`https://<account>.queue.core.chinacloudapi.cn` <br /><br />`https://<account>.table.core.chinacloudapi.cn`| ![Available][check] |
-
-### Azure Analysis Services
-
-| Cloud | Resource ID | Status |
-|--||:-:|
-| Azure Global | `https://*.asazure.windows.net` | ![Available][check] |
-| Azure Government | `https://*.asazure.usgovcloudapi.net` | ![Available][check] |
-| Azure Germany | `https://*.asazure.cloudapi.de` | ![Available][check] |
-| Azure China 21Vianet | `https://*.asazure.chinacloudapi.cn` | ![Available][check] |
-
-### Azure Communication Services
-
-Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
-| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | Not available | Not available | Not available |
-| User assigned | ![Available][check] | Not available | Not available | Not available |
-
-> [!NOTE]
-> You can use Managed Identities to authenticate an [Azure Stream analytics job to Power BI](../../stream-analytics/powerbi-output-managed-identity.md).
-
-[check]: media/services-support-managed-identities/check.png "Available"
active-directory Autodesk Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/autodesk-sso-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Autodesk SSO for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Autodesk SSO.
++
+writer: twimmers
++
+ms.assetid: 07782ca6-955c-441e-b28c-5e7f3c3775ac
+Last updated : 01/10/2022
+# Tutorial: Configure Autodesk SSO for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Autodesk SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Autodesk SSO](https://autodesk.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Autodesk SSO.
+> * Remove users in Autodesk SSO when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Autodesk SSO.
+> * Provision groups and group memberships in Autodesk SSO.
+> * [Single sign-on](autodesk-sso-tutorial.md) to Autodesk SSO (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account with either Primary admin or SSO admin role to access [Autodesk management portal](https://manage.autodesk.com/).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Autodesk SSO](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Autodesk SSO to support provisioning with Azure AD
+1. Log in to the [Autodesk management portal](https://manage.autodesk.com/).
+1. From the left navigation menu, navigate to **User Management > By Group**. Select the required team from the drop-down list and click the team settings gear icon.
+
+ [![Navigation](media/autodesk-sso-provisioning-tutorial/step2-1-navigation.png)](media/autodesk-sso-provisioning-tutorial/step2-1-navigation.png#lightbox)
+
+2. Click the **Set up directory sync** button and select **Azure AD SCIM** as the directory environment. Click **Next** to access the Azure admin credentials. If you have set up directory sync before, click **Access Credential** instead.
+
+ ![Set Up Directory Sync](media/autodesk-sso-provisioning-tutorial/step2-2-set-up-directory-sync.png)
+
+3. Copy and save the Base URL and API token. These values will be entered in the Tenant URL * field and Secret Token * field respectively in the Provisioning tab of your Autodesk application in the Azure portal.
+
+ ![Get Credentials](media/autodesk-sso-provisioning-tutorial/step2-3-get-credentials.png)
+
+## Step 3. Add Autodesk SSO from the Azure AD application gallery
+
+Add Autodesk SSO from the Azure AD application gallery to start managing provisioning to Autodesk SSO. If you have previously set up Autodesk SSO for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Autodesk SSO, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Autodesk SSO
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Autodesk SSO based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Autodesk SSO in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Autodesk SSO**.
+
+ ![The Autodesk SSO link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Autodesk SSO Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Autodesk SSO. If the connection fails, ensure your Autodesk SSO account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Autodesk SSO**.
+
+1. Review the user attributes that are synchronized from Azure AD to Autodesk SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Autodesk SSO for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Autodesk SSO API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Autodesk SSO|
+ |---|---|---|---|
+ |userName|String|&check;|&check;|
+ |active|Boolean||&check;|
+ |name.givenName|String||&check;|
+ |name.familyName|String||&check;|
+ |urn:ietf:params:scim:schemas:extension:AdskUserExt:2.0:User:objectGUID|String||&check;|
++
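To make the mapping table concrete, here is a hypothetical SCIM user-create request of the kind the provisioning service sends to the Base URL and API token captured in step 2. The payload shape follows the attributes above, but the endpoint path and field values are illustrative placeholders, not taken from Autodesk documentation — the actual requests are issued by the Azure AD provisioning service, not by your own code:

```javascript
// Hypothetical sketch of a SCIM POST /Users built from the mappings above.
const baseUrl = "https://example-autodesk-scim.invalid/scim/v2"; // placeholder Base URL
const apiToken = "<secret-token-from-step-2>";

const user = {
  schemas: [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:AdskUserExt:2.0:User",
  ],
  userName: "b.simon@contoso.com",
  active: true,
  name: { givenName: "B", familyName: "Simon" },
  "urn:ietf:params:scim:schemas:extension:AdskUserExt:2.0:User": {
    objectGUID: "3c6f1a2b-0000-0000-0000-000000000000", // placeholder object ID
  },
};

fetch(`${baseUrl}/Users`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiToken}`,
    "Content-Type": "application/scim+json",
  },
  body: JSON.stringify(user),
}).then((res) => console.log(`SCIM create returned ${res.status}`));
```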
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Autodesk SSO**.
+
+1. Review the group attributes that are synchronized from Azure AD to Autodesk SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Autodesk SSO for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Autodesk SSO|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;|
+ |members|Reference|||
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Autodesk SSO, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Autodesk SSO by choosing the appropriate values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to execute than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Moveittransfer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/moveittransfer-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with MOVEit Transfer - Azure AD integration | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with MOVEit Transfer - Azure AD integration'
description: Learn how to configure single sign-on between Azure Active Directory and MOVEit Transfer - Azure AD integration.
Previously updated : 01/27/2021 Last updated : 01/19/2022
-# Tutorial: Azure Active Directory integration with MOVEit Transfer - Azure AD integration
+# Tutorial: Azure AD SSO integration with MOVEit Transfer - Azure AD integration
In this tutorial, you'll learn how to integrate MOVEit Transfer - Azure AD integration with Azure Active Directory (Azure AD). When you integrate MOVEit Transfer - Azure AD integration with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* MOVEit Transfer - Azure AD integration supports **SP** initiated SSO
+* MOVEit Transfer - Azure AD integration supports **SP** initiated SSO.
## Add MOVEit Transfer - Azure AD integration from the gallery
To configure and test Azure AD SSO with MOVEit Transfer - Azure AD integration,
1. **[Create MOVEit Transfer - Azure AD integration test user](#create-moveit-transferazure-ad-integration-test-user)** - to have a counterpart of B.Simon in MOVEit Transfer - Azure AD integration that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
![choose metadata file](common/browse-upload-metadata.png)
- c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** value gets auto populated in **Basic SAML Configuration** section:
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values are automatically populated in the **Basic SAML Configuration** section.
- ![MOVEit Transfer - Azure AD integration Domain and URLs single sign-on information](common/sp-identifier-reply.png)
-
- In the **Sign-on URL** text box, type the URL:
+ d. In the **Sign-on URL** text box, type the URL:
`https://contoso.com` > [!NOTE]
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure MOVEit Transfer - Azure AD integration SSO
+## Configure MOVEit Transfer - Azure AD integration SSO
1. Sign on to your MOVEit Transfer tenant as an administrator. 2. On the left navigation pane, click **Settings**.
- ![Settings Section On App side](./media/moveittransfer-tutorial/settings.png)
+ ![Settings Section On App side.](./media/moveittransfer-tutorial/settings.png)
3. Click **Single Signon** link, which is under **Security Policies -> User Auth**.
- ![Security Policies On App side](./media/moveittransfer-tutorial/sso.png)
+ ![Security Policies On App side.](./media/moveittransfer-tutorial/security.png)
4. Click the Metadata URL link to download the metadata document.
- ![Service Provider Metadata URL](./media/moveittransfer-tutorial/metadata.png)
+ ![Service Provider Metadata URL.](./media/moveittransfer-tutorial/metadata.png)
- * Verify **entityID** matches **Identifier** in the **Basic SAML Configuration** section .
- * Verify **AssertionConsumerService** Location URL matches **REPLY URL** in the **Basic SAML Configuration** section.
+ a. Verify **entityID** matches **Identifier** in the **Basic SAML Configuration** section.
+
+ b. Verify **AssertionConsumerService** Location URL matches **REPLY URL** in the **Basic SAML Configuration** section.
- ![Configure Single Sign-On On App side](./media/moveittransfer-tutorial/xml.png)
+ :::image type="content" source="./media/moveittransfer-tutorial/file.png" alt-text="Screenshot of Configure Single Sign-On On App side." lightbox="./media/moveittransfer-tutorial/file.png":::
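If you prefer to verify these two values outside the browser, a small script (illustrative only, not part of this tutorial) can pull the service provider metadata from the Metadata URL and print the `entityID` and `AssertionConsumerService` location for comparison with the **Identifier** and **Reply URL** in the Azure portal. The metadata URL below is a placeholder:

```javascript
// Illustrative helper: fetch the MOVEit Transfer SP metadata and print the
// values to compare against the Basic SAML Configuration in the Azure portal.
// Run as an ES module on Node.js 18+ (for example, save as check-metadata.mjs).
const metadataUrl = "https://<your-moveit-host>/<metadata-path>"; // placeholder

const xml = await (await fetch(metadataUrl)).text();

const entityId = xml.match(/entityID="([^"]+)"/)?.[1];
const acsLocation = xml.match(/AssertionConsumerService[^>]*Location="([^"]+)"/)?.[1];

console.log("entityID (should match Identifier):", entityId);
console.log("ACS Location (should match Reply URL):", acsLocation);
```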
5. Click **Add Identity Provider** button to add a new Federated Identity Provider.
- ![Add Identity Provider](./media/moveittransfer-tutorial/idp.png)
+ ![Add Identity Provider.](./media/moveittransfer-tutorial/provider.png)
6. Click **Browse...** to select the metadata file which you downloaded from Azure portal, then click **Add Identity Provider** to upload the downloaded file.
- ![SAML Identity Provider](./media/moveittransfer-tutorial/saml.png)
+ ![SAML Identity Provider.](./media/moveittransfer-tutorial/azure.png)
7. Select "**Yes**" as **Enabled** in the **Edit Federated Identity Provider Settings...** page and click **Save**.
- ![Federated Identity Provider Settings](./media/moveittransfer-tutorial/save.png)
+ ![Federated Identity Provider Settings.](./media/moveittransfer-tutorial/save.png)
8. In the **Edit Federated Identity Provider User Settings** page, perform the following actions:
- ![Edit Federated Identity Provider Settings](./media/moveittransfer-tutorial/attributes.png)
+ ![Edit Federated Identity Provider Settings.](./media/moveittransfer-tutorial/attributes.png)
a. Select **SAML NameID** as **Login name**.
The objective of this section is to create a user called Britta Simon in MOVEit
>[!NOTE] >If you need to create a user manually, you need to contact the [MOVEit Transfer - Azure AD integration Client support team](https://community.ipswitch.com/s/support).
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with following options.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure MOVEit Transfer - Azure AD integration you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure MOVEit Transfer - Azure AD integration you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Navex One Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/navex-one-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with NAVEX One | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with NAVEX One'
description: Learn how to configure single sign-on between Azure Active Directory and NAVEX One.
Previously updated : 01/28/2021 Last updated : 01/19/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with NAVEX One
+# Tutorial: Azure AD SSO integration with NAVEX One
In this tutorial, you'll learn how to integrate NAVEX One with Azure Active Directory (Azure AD). When you integrate NAVEX One with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* NAVEX One supports **SP** initiated SSO
+* NAVEX One supports **SP** initiated SSO.
-## Adding NAVEX One from the gallery
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add NAVEX One from the gallery
To configure the integration of NAVEX One into Azure AD, you need to add NAVEX One from the gallery to your list of managed SaaS apps.
To configure the integration of NAVEX One into Azure AD, you need to add NAVEX O
1. In the **Add from the gallery** section, type **NAVEX One** in the search box.
1. Select **NAVEX One** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for NAVEX One
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign-on URL** text box, type a URL using one of the following patterns:
-
- | Sign-on URL |
- |--|
- | `https://<CLIENT_KEY>.navexglobal.com` |
- | `https://<CLIENT_KEY>.navexglobal.eu` |
- |
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier** text box, type one of the following URLs:
+ a. In the **Identifier** text box, type one of the following URLs:
| Identifier | |--|
Follow these steps to enable Azure AD SSO in the Azure portal.
| `https://doorman.navexglobal.eu/Shibboleth` | |
- c. In the **Reply URL** text box, type one of the following URLs:
+ b. In the **Reply URL** text box, type one of the following URLs:
| Reply URL | |--|
Follow these steps to enable Azure AD SSO in the Azure portal.
| `https://doorman.navexglobal.eu/Shibboleth.sso/SAML2/POST` | |
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ |--|
+ | `https://<CLIENT_KEY>.navexglobal.com` |
+ | `https://<CLIENT_KEY>.navexglobal.eu` |
+ |
+ > [!NOTE]
+ > The Sign-on URL value is not real. Update the value with the actual Sign-on URL. Contact [NAVEX One Client support team](mailto:ethicspoint@navexglobal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.

1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.

   ![The Certificate download link](common/copy-metadataurl.png)

### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure NAVEX One you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure NAVEX One you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Showpad Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/showpad-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Showpad | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Showpad'
description: Learn how to configure single sign-on between Azure Active Directory and Showpad.
Previously updated : 03/25/2019 Last updated : 01/19/2022
-# Tutorial: Azure Active Directory integration with Showpad
+# Tutorial: Azure AD SSO integration with Showpad
-In this tutorial, you learn how to integrate Showpad with Azure Active Directory (Azure AD).
-Integrating Showpad with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Showpad with Azure Active Directory (Azure AD). When you integrate Showpad with Azure AD, you can:
-* You can control in Azure AD who has access to Showpad.
-* You can enable your users to be automatically signed-in to Showpad (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Showpad.
+* Enable your users to be automatically signed-in to Showpad with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Showpad, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Showpad single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Showpad single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Showpad supports **SP** initiated SSO
-* Showpad supports **Just In Time** user provisioning
+* Showpad supports **SP** initiated SSO.
+* Showpad supports **Just In Time** user provisioning.
-## Adding Showpad from the gallery
+## Add Showpad from the gallery
To configure the integration of Showpad into Azure AD, you need to add Showpad from the gallery to your list of managed SaaS apps.
-**To add Showpad from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Showpad**, select **Showpad** from result panel then click **Add** button to add the application.
-
- ![Showpad in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Showpad based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Showpad needs to be established.
-
-To configure and test Azure AD single sign-on with Showpad, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Showpad Single Sign-On](#configure-showpad-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Showpad test user](#create-showpad-test-user)** - to have a counterpart of Britta Simon in Showpad that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Showpad** in the search box.
+1. Select **Showpad** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for Showpad
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Configure and test Azure AD SSO with Showpad using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Showpad.
-To configure Azure AD single sign-on with Showpad, perform the following steps:
+To configure and test Azure AD SSO with Showpad, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **Showpad** application integration page, select **Single sign-on**.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Showpad SSO](#configure-showpad-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Showpad test user](#create-showpad-test-user)** - to have a counterpart of B.Simon in Showpad that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Configure single sign-on link](common/select-sso.png)
+## Configure Azure AD SSO
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Single sign-on select mode](common/select-saml-option.png)
+1. In the Azure portal, on the **Showpad** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Showpad Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<comapany-name>.showpad.biz/login`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<company-name>.showpad.biz`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<company-name>.showpad.biz/login`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Showpad Client support team](https://help.showpad.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Showpad Client support team](https://help.showpad.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with Showpad, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
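+If you'd rather script this step, a rough Azure CLI equivalent is sketched below. The user principal name domain and the password are placeholders; adjust them for your tenant, and make sure your account has permission to create directory users.
+
+```azurecli-interactive
+# Create the B.Simon test user (placeholder values)
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name B.Simon@contoso.com \
+    --password "<choose-a-strong-password>"
+```
+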
+### Assign the Azure AD test user
- b. Azure AD Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Showpad.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Showpad**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Showpad Single Sign-On
+## Configure Showpad SSO
1. Sign in to your Showpad tenant as an administrator. 1. In the menu on the top, click the **Settings**.
- ![Screenshot shows Settings selected from the Settings menu.](./media/showpad-tutorial/tutorial_showpad_001.png)
+ ![Screenshot shows Settings selected from the Settings menu.](./media/showpad-tutorial/settings.png)
1. Navigate to **Single Sign-On** and click **Enable**.
- ![Screenshot shows Single Sign-On selected with an Enable option.](./media/showpad-tutorial/tutorial_showpad_002.png)
+ ![Screenshot shows Single Sign-On selected with an Enable option.](./media/showpad-tutorial/profile.png)
1. On the **Add a SAML 2.0 Service** dialog, perform the following steps:
- ![Screenshot shows the Add a SAML 2.0 Service dialog box where you can enter the values described.](./media/showpad-tutorial/tutorial_showpad_003.png)
+ ![Screenshot shows the Add a SAML 2.0 Service dialog box where you can enter the values described.](./media/showpad-tutorial/user.png)
a. In the **Name** textbox, type the name of the Identity Provider (for example: your company name).
To configure Azure AD single sign-on with Showpad, perform the following steps:
e. Click **Submit**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Showpad.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Showpad**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Showpad**.
-
- ![The Showpad link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create Showpad test user In this section, a user called B.Simon is created in Showpad. Showpad supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Showpad, a new one is created after authentication.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Showpad tile in the Access Panel, you should be automatically signed in to the Showpad for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Showpad Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the Showpad Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Showpad tile in My Apps, you will be redirected to the Showpad Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Showpad you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Terratrue Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/terratrue-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure TerraTrue for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to TerraTrue.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 80547381-3f42-4e18-b737-20b43402e31e
+++
+ms.devlang: na
+ Last updated : 12/16/2021+++
+# Tutorial: Configure TerraTrue for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both TerraTrue and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [TerraTrue](https://terratruehq.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in TerraTrue.
+> * Remove users in TerraTrue when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and TerraTrue.
+> * [Single sign-on](terratrue-tutorial.md) to TerraTrue.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [TerraTrue](https://terratruehq.com/) tenant.
+* A user account in TerraTrue with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and TerraTrue](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure TerraTrue to support provisioning with Azure AD
+
+1. In TerraTrue, navigate to **Organization Settings > Authentication > SCIM** or visit `https://launch.terratrue.com/settings/auth/scim`.
+1. Next, enable the **SCIM Configuration** toggle and click **Copy API Key** to copy the SCIM API key.
++
+ ![Generate Token](media/terratrue-provisioning-tutorial/generate-token.png)
+
+## Step 3. Add TerraTrue from the Azure AD application gallery
+
+Add TerraTrue from the Azure AD application gallery to start managing provisioning to TerraTrue. If you have previously set up TerraTrue for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to TerraTrue, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to TerraTrue
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TerraTrue based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for TerraTrue in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **TerraTrue**.
+
+ ![The TerraTrue link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your TerraTrue Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to TerraTrue. If the connection fails, ensure your TerraTrue account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to TerraTrue**.
+
+1. Review the user attributes that are synchronized from Azure AD to TerraTrue in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in TerraTrue for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the TerraTrue API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by TerraTrue|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean|
+ |name.givenName|String
+ |name.familyName|String
+ |name.formatted|String
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for TerraTrue, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to TerraTrue by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Troubleshooting and Tips
+Reach out to `hello@terratrue.com` for help ensuring that your provisioning is working correctly.
+
+TerraTrue provides a revision history of all changes to a user's account, visible to any TerraTrue administrator at the link below. All user changes made as a result of SCIM provisioning are shown with the Actor column set to "Scim System User".
+
+`https://launch.terratrue.com/settings/history`
+
+Lastly, TerraTrue sets the user's Display Name based on the first name and last name received during the first user sync. Subsequent changes to the user's Display Name may be made by an administrator from within TerraTrue under the User Organization Settings at the link below:
+
+`https://launch.terratrue.com/settings/users`
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Timeclock 365 Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/timeclock-365-saml-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure TimeClock 365 SAML for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to TimeClock 365 SAML.
++
+writer: twimmers
+
+ms.assetid: 7f87db7f-ee99-4798-bca9-e281508e6b76
++++ Last updated : 01/17/2022+++
+# Tutorial: Configure TimeClock 365 SAML for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both TimeClock 365 SAML and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [TimeClock 365 SAML](https://timeclock365.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in TimeClock 365 SAML
+> * Remove users in TimeClock 365 SAML when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and TimeClock 365 SAML
+> * [Single sign-on](timeclock-365-saml-tutorial.md) to TimeClock 365 SAML (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [TimeClock 365 SAML](https://timeclock365.com/) tenant.
+* A user account in TimeClock 365 SAML with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and TimeClock 365 SAML](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure TimeClock 365 SAML to support provisioning with Azure AD
+
+1. Log in to the [Timeclock365 admin console](https://live.timeclock365.com).
+
+1. Navigate to **Settings > Company profile > General**.
+
+ [![Generate Token Page](media/timeclock-365-saml-provisioning-tutorial/generate-token-page.png)](media/timeclock-365-saml-provisioning-tutorial/generate-token-page.png#lightbox)
+
+1. Scroll down to **Azure user synchronization**. Copy and save the **Azure AD token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your TimeClock 365 SAML application in the Azure portal.
+
+ [![Generate Token](media/timeclock-365-saml-provisioning-tutorial/generate-token.png)](media/timeclock-365-saml-provisioning-tutorial/generate-token.png#lightbox)
+
+1. You will enter `https://live.timeclock365.com/scim` in the **Tenant URL** field in the Provisioning tab of your TimeClock 365 SAML application in the Azure portal.
+
+## Step 3. Add TimeClock 365 SAML from the Azure AD application gallery
+
+Add TimeClock 365 SAML from the Azure AD application gallery to start managing provisioning to TimeClock 365 SAML. If you have previously set up TimeClock 365 SAML for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to TimeClock 365 SAML, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to TimeClock 365 SAML
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TimeClock 365 SAML based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for TimeClock 365 SAML in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **TimeClock 365 SAML**.
+
+ ![The TimeClock 365 SAML link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your TimeClock 365 **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to TimeClock 365. If the connection fails, ensure your TimeClock 365 account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to TimeClock 365 SAML**.
+
+1. Review the user attributes that are synchronized from Azure AD to TimeClock 365 SAML in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in TimeClock 365 SAML for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the TimeClock 365 SAML API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for TimeClock 365 SAML, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to TimeClock 365 SAML by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-pool-snapshot.md
SNAPSHOT_ID=$(az aks snapshot show --name MySnapshot --resource-group myResource
Now, you can use this command to create a new cluster based on the snapshot configuration. ```azurecli-interactive
-az aks cluster create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-id $SNAPSHOT_ID
+az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-id $SNAPSHOT_ID
``` ## Next steps
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
The following parameters can be leveraged to configure Private DNS Zone.
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [system|none] ```
-### Create a private AKS cluster with a BYO Private DNS SubZone (Preview)
+### Create a private AKS cluster with a BYO Private DNS SubZone
Prerequisites:
-* Azure CLI >= 2.29.0 or Azure CLI with aks-preview extension 0.5.34 or later.
-
-### Register the `EnablePrivateClusterSubZone` preview feature
--
-To create an AKS private cluster with SubZone, you must enable the `EnablePrivateClusterSubZone` feature flag on your subscription.
-
-Register the `EnablePrivateClusterSubZone` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnablePrivateClusterSubZone"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnablePrivateClusterSubZone')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-### Install the aks-preview CLI extension
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
+* Azure CLI >= 2.32.0 or later.
### Create a private AKS cluster with Custom Private DNS Zone or Private DNS SubZone
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview.md
With App Service, you pay for the Azure compute resources you use. The compute r
## Why use App Service?
-Here are some key features of App Service:
+Azure App Service is a fully managed platform as a service (PaaS) offering for developers. Here are some key features of App Service:
* **Multiple languages and frameworks** - App Service has first-class support for ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can also run [PowerShell and other scripts or executables](webjobs-create.md) as background services. * **Managed production environment** - App Service automatically [patches and maintains the OS and language frameworks](overview-patch-os-runtime.md) for you. Spend time writing great apps and let Azure worry about the platform.
Create your first web app.
> [HTML (on Windows or Linux)](quickstart-html.md) > [!div class="nextstepaction"]
-> [Custom container (Windows or Linux)](tutorial-custom-container.md)
+> [Custom container (Windows or Linux)](tutorial-custom-container.md)
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-auth-aad.md
In the Cloud Shell, run the following commands on the front-end app to add the `
```azurecli-interactive authSettings=$(az webapp auth show -g myAuthResourceGroup -n <front-end-app-name>)
-authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access api://<back-end-client-id>/user_impersonation"]}')
+authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access api://<back-end-client-id>/user_impersonation"]}')
az webapp auth set --resource-group myAuthResourceGroup --name <front-end-app-name> --body "$authSettings" ```
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
request contains the string **unauthorized**, it will be marked as Healthy. Othe
1. Verify that the response body in the Application Gateway custom probe configuration matches what's configured.
-1. If they don't match, change the probe configuration so that is has the correct string value to accept.
+1. If they don't match, change the probe configuration so that it has the correct string value to accept.
Learn more about [Application Gateway probe matching](./application-gateway-probe-overview.md#probe-matching).
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-app-configuration-event.md
You've triggered the event, and Event Grid sent the message to the endpoint you
"subject": "https://{appconfig-name}.azconfig.io/kv/Foo", "data": { "key": "Foo",
- "etag": "a1LIDdNEIV6wCnfv3xaip7fMXD3"
+ "etag": "a1LIDdNEIV6wCnfv3xaip7fMXD3",
+ "syncToken":"zAJw6V16=Njo1IzMzMjE3MzA=;sn=3321730"
}, "eventType": "Microsoft.AppConfiguration.KeyValueModified", "eventTime": "2019-05-31T18:59:54Z",
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-aspnet-core-app.md
ms.devlang: csharp Previously updated : 09/25/2020 Last updated : 1/3/2022 #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place.
dotnet new mvc --no-https --output TestAppConfig
Access this secret using the .NET Core Configuration API. A colon (`:`) works in the configuration name with the Configuration API on all supported platforms. For more information, see [Configuration keys and values](/aspnet/core/fundamentals/configuration#configuration-keys-and-values).
-1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
+1. Select the correct syntax based on your environment.
- ```csharp
- using Microsoft.Extensions.Configuration;
- ```
-
-1. Update the `CreateWebHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
-
- > [!IMPORTANT]
- > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.x. Select the correct syntax based on your environment.
-
- #### [.NET 5.x](#tab/core5x)
+ #### [.NET 6.x](#tab/core6x)
+ In *Program.cs*, replace its content with the following code:
```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(connection);
- }).UseStartup<Startup>());
+ var builder = WebApplication.CreateBuilder(args);
+ //Retrieve the Connection String from the secrets manager
+ var connectionString = builder.Configuration["AppConfig"];
+
+ builder.Host.ConfigureAppConfiguration(builder =>
+ {
+ //Connect to your App Config Store using the connection string
+ builder.AddAzureAppConfiguration(connectionString);
+ })
+ .ConfigureServices(services =>
+ {
+ services.AddControllersWithViews();
+ });
+
+ var app = builder.Build();
+
+ // Configure the HTTP request pipeline.
+ if (!app.Environment.IsDevelopment())
+ {
+ app.UseExceptionHandler("/Home/Error");
+ }
+ app.UseStaticFiles();
+
+ app.UseRouting();
+
+ app.UseAuthorization();
+
+ app.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Home}/{action=Index}/{id?}");
+
+ app.Run();
```
+
+ #### [.NET 5.x](#tab/core5x)
+
+ 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
+
+ ```csharp
+ using Microsoft.Extensions.Configuration;
+ ```
+
+ 1. Update the `CreateHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
+
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ webBuilder.ConfigureAppConfiguration(config =>
+ {
+ var settings = config.Build();
+ var connection = settings.GetConnectionString("AppConfig");
+ config.AddAzureAppConfiguration(connection);
+ }).UseStartup<Startup>());
+ ```
#### [.NET Core 3.x](#tab/core3x)
- ```csharp
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration(config =>
+ > [!IMPORTANT]
+ > `CreateHostBuilder` in .NET 3.x replaces `CreateWebHostBuilder` in .NET Core 2.x.
+
+ 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
+
+ ```csharp
+ using Microsoft.Extensions.Configuration;
+ ```
+ 1. Update the `CreateHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
+
+ ```csharp
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ webBuilder.ConfigureAppConfiguration(config =>
+ {
+ var settings = config.Build();
+ var connection = settings.GetConnectionString("AppConfig");
+ config.AddAzureAppConfiguration(connection);
+ }).UseStartup<Startup>());
+ ```
+
+ #### [.NET Core 2.x](#tab/core2x)
+
+ 1. In *Program.cs*, add a reference to the .NET Core Configuration API namespace:
+
+ ```csharp
+ using Microsoft.Extensions.Configuration;
+ ```
+
+ 1. Update the `CreateWebHostBuilder` method to use App Configuration by calling the `AddAzureAppConfiguration` method.
+
+ ```csharp
+ public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
+ WebHost.CreateDefaultBuilder(args)
+ .ConfigureAppConfiguration(config =>
{ var settings = config.Build(); var connection = settings.GetConnectionString("AppConfig"); config.AddAzureAppConfiguration(connection);
- }).UseStartup<Startup>());
- ```
-
- #### [.NET Core 2.x](#tab/core2x)
-
- ```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration(config =>
- {
- var settings = config.Build();
- var connection = settings.GetConnectionString("AppConfig");
- config.AddAzureAppConfiguration(connection);
- })
- .UseStartup<Startup>();
- ```
-
-
+ })
+ .UseStartup<Startup>();
+ ```
+
- With the preceding change, the [configuration provider for App Configuration](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) has been registered with the .NET Core Configuration API.
+This code will connect to your App Configuration store using a connection string and load all keys that have the *TestApp* prefix from a previous step. For more information on the configuration provider APIs, reference the [configuration provider for App Configuration docs](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
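+
+If you want to double-check which key-values the provider will load, one option (assuming you have the Azure CLI installed; `MyAppConfigStore` below is a placeholder for your store name) is to list them with a key filter:
+
+```azurecli-interactive
+# List all key-values in the store whose keys start with "TestApp:"
+az appconfig kv list --name MyAppConfigStore --key "TestApp:*" --output table
+```
+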
## Read from the App Configuration store
In the preceding code, the App Configuration store's keys are used as follows:
dotnet run ```
-1. If you're working on your local machine, use a browser to navigate to `http://localhost:5000`. This address is the default URL for the locally hosted web app. If you're working in the Azure Cloud Shell, select the **Web Preview** button followed by **Configure**.
+1. If you're working on your local machine, use a browser to navigate to `http://localhost:5000`, or to the URL shown in the command output. This address is the default URL for the locally hosted web app. If you're working in the Azure Cloud Shell, select the **Web Preview** button followed by **Configure**.
![Locate the Web Preview button](./media/quickstarts/cloud-shell-web-preview.png)
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022 #
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 11/03/2021 Last updated : 01/19/2022
Azure Arc-enabled servers *does not* support installing the agent on virtual mac
The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
-* Windows Server 2008 R2 SP1, Windows Server 2012 R2, 2016, 2019, and 2022 (including Server Core)
+* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
+ * Both Desktop and Server Core experiences are supported
+ * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
+* Azure Stack HCI
* Ubuntu 16.04, 18.04, and 20.04 LTS (x64) * CentOS Linux 7 and 8 (x64) * SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) * Red Hat Enterprise Linux (RHEL) 7 and 8 (x64) * Amazon Linux 2 (x64)
-* Oracle Linux 7
+* Oracle Linux 7 (x64)
> [!WARNING] > The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. See [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md) for a list of the reserved words.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-deployment-technologies.md
You can deploy a Linux container image that contains your function app.
>* Create a Linux function app on an Azure App Service plan in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted. >* Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see [Create a function on Linux by using a custom image](functions-create-function-linux-custom-image.md#create-supporting-azure-resources-for-your-function). >
->To deploy to an existing app by using a custom container, in [Azure Functions Core Tools](functions-run-local.md), use the [`func deploy`](functions-run-local.md#publish) command.
+>To deploy to a Kubernetes cluster as a custom container, in [Azure Functions Core Tools](functions-run-local.md), use the [`func kubernetes deploy`](functions-core-tools-reference.md#func-kubernetes-deploy) command.
>__When to use it:__ Use the Docker container option when you need more control over the Linux environment where your function app runs. This deployment mechanism is available only for Functions running on Linux.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
The following considerations apply to this kind of deployment:
### Kubernetes cluster
-Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy.
+Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy. To learn how to publish a custom container to Azure without Kubernetes, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md).
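+
+For example, a Dockerfile for a new project could be generated as shown below; the project folder name and the `python` worker runtime are only illustrative, so substitute your own values:
+
+```command
+func init MyFunctionProject --worker-runtime python --docker
+```
+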
-Core Tools can be used to deploy your project as a custom container image to a Kubernetes cluster. The command you use depends on the type of scaler used in the cluster.
+Core Tools can be used to deploy your project as a custom container image to a Kubernetes cluster.
The following command uses the Dockerfile to generate a container and deploy it to a Kubernetes cluster.
-# [KEDA](#tab/keda)
- ```command func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME> ``` To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
-# [Default/KNative](#tab/default)
-
-```command
-func deploy --name <FUNCTION_APP> --platform kubernetes --registry <REGISTRY_USERNAME>
-```
-
-In the example above, replace `<FUNCTION_APP>` with the name of the function app in Azure and `<REGISTRY_USERNAME>` with your registry account name, such as you Docker username. The container is built locally and pushed to your Docker registry account with an image name based on `<FUNCTION_APP>`. You must have the Docker command line tools installed.
-
-To learn more, see the [`func deploy` command](functions-core-tools-reference.md#func-deploy).
---
-To learn how to publish a custom container to Azure without Kubernetes, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md).
- ## Monitoring functions The recommended way to monitor the execution of your functions is by integrating with Azure Application Insights. You can also stream execution logs to your local computer. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
Application Insights Java 3.x is already listening for telemetry that's sent to
</dependency> ```
-1. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter:
+2. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter:
```java
- static final Counter counter = Metrics.counter("test_counter");
+ static final Counter counter = Metrics.counter("test.counter");
```
-1. Use the counter to record metrics:
+3. Use the counter to record metrics:
```java counter.increment(); ```
+4. The metrics will be ingested into the
+ [customMetrics](/azure/azure-monitor/reference/tables/custommetrics) table, with tags captured in the
+ `customDimensions` column. You can also view the metrics in the
+ [Metrics explorer](../essentials/metrics-getting-started.md) under the "Log-based metrics" metric namespace.
+
+ > [!NOTE]
+ > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name
+ > with underscores, so the `test.counter` metric above will show up as `test_counter`.
+ ### Send custom traces and exceptions by using your favorite logging framework Log4j, Logback, and java.util.logging are auto-instrumented. Logging performed via these logging frameworks is autocollected as trace and exception telemetry.
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-transition-solution.md
+
+ Title: "Transition from the Container Monitoring Solution to using Container Insights"
Last updated : 1/18/2022+++
+description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"
++
+# Transition from the Container Monitoring Solution to using Container Insights
+
+Because both the underlying platform and the agent are being deprecated, the [Container Monitoring Solution](./containers.md) will be retired on March 1, 2025. If you use the Container Monitoring Solution to ingest data to your Log Analytics workspace, make sure to transition to using [Container Insights](./container-insights-overview.md) prior to that date.
+
+## Steps to complete the transition
+
+To transition to Container Insights, we recommend the following approach.
+
+1. Learn about the feature differences between the Container Monitoring Solution and Container Insights to determine which option suits your needs.
+
+2. To use Container Insights, you will need to migrate your workload to Kubernetes. You can find more information about the compatible Kubernetes platforms in [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) or [Azure Arc enabled Kubernetes](../../azure-arc/kubernetes/overview.md). If using AKS, you can choose to [deploy Container Insights](./container-insights-enable-new-cluster.md) as a part of the process.
+
+3. Disable the existing monitoring of the Container Monitoring Solution using one of the following options: [Azure portal](../insights/solutions.md?tabs=portal#remove-a-monitoring-solution), [PowerShell](/powershell/module/az.monitoringsolutions/remove-azmonitorloganalyticssolution), or [Azure CLI](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete)
+4. If you elected to not onboard to Container Insights earlier, you can then deploy Container Insights using Azure CLI, ARM, or Portal following the instructions for [AKS](./container-insights-enable-existing-clusters.md) or [Arc enabled Kubernetes](./container-insights-enable-arc-enabled-clusters.md)
+5. Validate that the installation was successful for either your [AKS](./container-insights-enable-existing-clusters.md#verify-agent-and-solution-deployment) or [Arc](./container-insights-enable-arc-enabled-clusters.md#verify-extension-installation-status) cluster.
++
+## Container Monitoring Solution vs Container Insights
+
+The following table highlights the key differences between monitoring with the Container Monitoring Solution and monitoring with Container Insights.
+
+| Feature Differences | Container Monitoring Solution | Container Insights |
+| - | -- | - |
+| Onboarding | Multi-step installation using Azure Marketplace & configuring Log Analytics Agent | Single step onboarding via Azure portal, CLI, or ARM |
+| Agent | Log Analytics Agent (deprecated in 2024) | [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) |
+| Alerting | Log based alerts tied to Log Analytics Workspace | Log based alerting and [recommended metric-based](./container-insights-metric-alerts.md) alerts |
+| Metrics | Does not support Azure Monitor metrics | Supports Azure Monitor metrics |
+| Consumption | Viewable only from Log Analytics Workspace | Accessible from both Azure Monitor and AKS/Arc resource blade |
+| Agent upgrades | Manual agent upgrades | Automatic updates for monitoring agent with version control through Azure Arc cluster extensions |
+
+## Next steps
+
+- [Disable Container Monitoring Solution](./containers.md#removing-solution-from-your-workspace)
+- [Deploy an Azure Kubernetes Service](./container-insights-enable-new-cluster.md)
+- [Connect your cluster](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to the Azure Arc enabled Kubernetes platform
+- Configure Container Insights for [Azure Kubernetes Service](./container-insights-enable-existing-clusters.md) or [Arc enabled Kubernetes](./container-insights-enable-arc-enabled-clusters.md)
azure-monitor Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/containers.md
The following table outlines the Docker orchestration and operating system monit
Use the following information to install and configure the solution.
-1. Add the Container Monitoring solution to your Log Analytics workspace from [Azure marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ContainersOMS?tab=Overview) or by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
+1. Add the Container Monitoring solution to your Log Analytics workspace from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ContainersOMS?tab=Overview) or by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
2. Install and use Docker with a Log Analytics agent. Based on your operating system and Docker orchestrator, you can use the following methods to configure your agent. - For standalone hosts:
Saving queries is a standard feature in Azure Monitor. By saving them, you'll ha
After you create a query that you find useful, save it by clicking **Favorites** at the top of the Log Search page. Then you can easily access it later from the **My Dashboard** page.
+## Removing solution from your workspace
+To remove the Container Monitoring Solution, follow the instructions for removing solutions using one of the following: [Azure portal](../insights/solutions.md?tabs=portal#remove-a-monitoring-solution), [PowerShell](/powershell/module/az.monitoringsolutions/remove-azmonitorloganalyticssolution), or [Azure CLI](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete)
+ ## Next steps [Query logs](../logs/log-query-overview.md) to view detailed container data records.
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
description: Enable SQL insights in Azure Monitor
Previously updated : 1/6/2022 Last updated : 1/18/2022 # Enable SQL insights (preview)
The connection string specifies the login name that SQL insights should use when
The connections string will vary for each type of SQL resource: #### Azure SQL Database
+TCP connections from the monitoring machine to the IP address and port used by the database must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path. For details on IP addresses and ports, see [Azure SQL Database connectivity architecture](../../azure-sql/database/connectivity-architecture.md).
+ Enter the connection string in the form: ```
Get the details from the **Connection strings** menu item for the database.
:::image type="content" source="media/sql-insights-enable/connection-string-sql-database.png" alt-text="SQL database connection string" lightbox="media/sql-insights-enable/connection-string-sql-database.png":::
-To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
+To monitor a readable secondary, append `;ApplicationIntent=ReadOnly` to the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
#### Azure SQL Managed Instance
+TCP connections from the monitoring machine to the IP address and port used by the managed instance must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path. For details on IP addresses and ports, see [Azure SQL Managed Instance connection types](../../azure-sql/managed-instance/connection-types-overview.md).
+ Enter the connection string in the form: ```
Enter the connection string in the form:
"Server= mysqlserver.database.windows.net;Port=1433;User Id=$username;Password=$password;" ] ```
-Get the details from the **Connection strings** menu item for the managed instance.
-
+Get the details from the **Connection strings** menu item for the managed instance. If you are using the managed instance [public endpoint](../../azure-sql/managed-instance/public-endpoint-configure.md), replace port 1433 with 3342.
:::image type="content" source="media/sql-insights-enable/connection-string-sql-managed-instance.png" alt-text="SQL Managed Instance connection string" lightbox="media/sql-insights-enable/connection-string-sql-managed-instance.png":::
-To monitor a readable secondary, include the key-value `ApplicationIntent=ReadOnly` in the connection string. SQL Insights supports monitoring of a single secondary. Collected data will be tagged to reflect Primary or Secondary.
+To monitor a readable secondary, append `;ApplicationIntent=ReadOnly` to the connection string. SQL Insights supports monitoring of a single secondary. Collected data will be tagged to reflect Primary or Secondary.
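As a quick illustration of the two adjustments described above (the public endpoint port and the read-only intent), here is a small Python sketch that assembles a managed instance connection string in the documented form. This helper is not part of the article; the server name and credential placeholders are illustrative only:

```python
def build_mi_connection_string(server: str, username: str, password: str,
                               public_endpoint: bool = False,
                               readable_secondary: bool = False) -> str:
    # The private endpoint uses port 1433; the public endpoint uses port 3342.
    port = 3342 if public_endpoint else 1433
    conn = f"Server={server};Port={port};User Id={username};Password={password};"
    if readable_secondary:
        # Append ApplicationIntent=ReadOnly to monitor a readable secondary.
        conn += "ApplicationIntent=ReadOnly"
    return conn

print(build_mi_connection_string("mysqlserver.database.windows.net",
                                 "$username", "$password",
                                 public_endpoint=True,
                                 readable_secondary=True))
```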
#### SQL Server
+The TCP/IP protocol must be enabled for the SQL Server instance you want to monitor. TCP connections from the monitoring machine to the IP address and port used by the SQL Server instance must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path.
+
+If you want to monitor SQL Server configured for high availability (using either availability groups or failover cluster instances), we recommend monitoring each SQL Server instance in the cluster individually rather than connecting via an availability group listener or a failover cluster name. This ensures that monitoring data is collected regardless of the current instance role (primary or secondary).
+ Enter the connection string in the form: ``` "sqlVmConnections": [
- "Server=MyServerIPAddress;Port=1433;User Id=$username;Password=$password;"
+ "Server=SQLServerInstanceIPAddress;Port=1433;User Id=$username;Password=$password;"
] ```
-If your monitoring virtual machine is in the same VNET, use the private IP address of the Server. Otherwise, use the public IP address. If you're using Azure SQL virtual machine, you can see which port to use here on the **Security** page for the resource.
+Use the IP address that the SQL Server instance listens on.
+
+If your SQL Server instance is configured to listen on a non-default port, replace 1433 with that port number in the connection string. If you're using Azure SQL virtual machine, you can see which port to use on the **Security** page for the resource.
:::image type="content" source="media/sql-insights-enable/sql-vm-security.png" alt-text="SQL virtual machine security" lightbox="media/sql-insights-enable/sql-vm-security.png":::
+For any SQL Server instance, you can determine all IP addresses and ports it is listening on by connecting to the instance and executing the following T-SQL query, as long as there is at least one TCP connection to the instance:
+
+```sql
+SELECT DISTINCT local_net_address, local_tcp_port
+FROM sys.dm_exec_connections
+WHERE net_transport = 'TCP'
+  AND protocol_type = 'TSQL';
+```
## Monitoring profile created Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL resources. Do not return to the **Overview** tab. In a few minutes, the Status column should change to read "Collecting", and you should see data for the SQL resources you have chosen to monitor.
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
adx('https://help.kusto.windows.net/Samples').StormEvents
>* Database names are case sensitive. >* Cross-resource query as an alert is not supported. >* Identifying the Timestamp column in the cluster is not supported; the Log Analytics query API will not pass along the time filter.
+> * The cross-service query ability is used for data retrieval only. For more information, see [Function supportability](#function-supportability).
+
+## Function supportability
+
+Azure Monitor cross-service queries support functions for Application Insights, Log Analytics, and Azure Data Explorer.
+This capability enables cross-cluster queries to reference an Azure Monitor/Azure Data Explorer tabular function directly.
+The following commands are supported with the cross-service query:
+
+* `.show functions`
+* `.show function {FunctionName}`
+* `.show database {DatabaseName} schema as json`
## Combine Azure Data Explorer cluster tables with a Log Analytics workspace
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data from a Log Analyti
- All tables will be supported in export, but support is currently limited to those specified in the [supported tables](#supported-tables) section below. - The current custom log tables won't be supported in export. A new version of custom logs, available in preview in February 2022, will be supported in export. - You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled. -- Storage account must be unique across all export rules in your workspace. - Destinations must be in the same region as the Log Analytics workspace. - Table names can be no longer than 60 characters when exporting to a storage account and 47 characters to an event hub. Tables with longer names will not be exported. - Data export isn't supported in these regions currently:
If you have configured your storage account to allow access from selected networ
### Destinations monitoring > [!IMPORTANT]
-> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
+> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
**Monitoring storage account**
If you have configured your storage account to allow access from selected networ
- Use 'Premium' or 'Dedicated' tiers for higher throughput ### Create or update data export rule
-Data export rule defines the destination and tables for which data is exported. You can create 10 rules in 'enable' state in your workspace, more rules are allowed in 'disable' state. Storage account destination must be unique across all export rules in workspace, but multiple rules can export to the same event hub namespace in separate event hubs.
+A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the 'enable' state in your workspace; more rules are allowed in the 'disable' state. You can use the same storage account and event hub namespace in multiple rules in the same workspace. When event hub names are provided in rules, they must be unique in the workspace.
> [!NOTE] > - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-dedicated-clusters.md
After you create your cluster resource and it is fully provisioned, you can edit
>Cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations. > [!NOTE]
-> The *billingType* property is not supported in PowerShell.
+> The *billingType* property is not supported in CLI.
## Get all clusters in resource group
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-netapp-files Faq Application Volume Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/faq-application-volume-group.md
Previously updated : 11/19/2021 Last updated : 01/19/2022 # Application volume group FAQs
Creating a volume group involves many different steps, and not all of them can b
In the current implementation, the application volume group has a focus on the initial creation and deletion of a volume group only.
+## What are the rules behind the proposed throughput for my HANA data and log volumes?
+
+SAP defines the Key Performance Indicators (KPIs) for the HANA data and log volumes as 400 MiB/s for the data volume and 250 MiB/s for the log volume. This definition is independent of the size or the workload of the HANA database. Application volume group scales the throughput values so that even the smallest database meets the SAP HANA KPIs, and larger databases benefit from a higher throughput level, scaling the proposal based on the entered HANA database size.
+
+The following table describes the memory range and proposed throughput ***for the HANA data volume***:
+
+<table>
+  <thead>
+    <tr><th colspan="2">Memory range (in TB)</th><th rowspan="2">Proposed throughput (MiB/s)</th></tr>
+    <tr><th>Minimum</th><th>Maximum</th></tr>
+  </thead>
+  <tbody>
+    <tr><td>0</td><td>1</td><td>400</td></tr>
+    <tr><td>1</td><td>2</td><td>600</td></tr>
+    <tr><td>2</td><td>4</td><td>800</td></tr>
+    <tr><td>4</td><td>6</td><td>1000</td></tr>
+    <tr><td>6</td><td>8</td><td>1200</td></tr>
+    <tr><td>8</td><td>10</td><td>1400</td></tr>
+    <tr><td>10</td><td>unlimited</td><td>1500</td></tr>
+  </tbody>
+</table>
+
+The following table describes the memory range and proposed throughput ***for the HANA log volume***:
+
+<table>
+  <thead>
+    <tr><th colspan="2">Memory range (in TB)</th><th rowspan="2">Proposed throughput (MiB/s)</th></tr>
+    <tr><th>Minimum</th><th>Maximum</th></tr>
+  </thead>
+  <tbody>
+    <tr><td>0</td><td>4</td><td>250</td></tr>
+    <tr><td>4</td><td>unlimited</td><td>500</td></tr>
+  </tbody>
+</table>
+
+Higher throughput for the data volume is most important for the database startup of larger databases, when data is read into memory. At runtime, most of the I/O is write I/O, where even the KPIs show lower values. User experience shows that, for smaller databases, HANA KPI values may be higher than what's required most of the time.
+
+The performance of each Azure NetApp Files volume can be adjusted at runtime. As such, at any time, you can adjust the performance of your database by adjusting the data and log volume throughput to your specific requirements. For instance, you can fine-tune performance and reduce costs by allowing higher throughput at startup and then reducing throughput to the KPI values for normal operation.
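To make the sizing rules in the tables above concrete, the following Python sketch encodes the same memory-to-throughput mapping for the data and log volumes. It is an illustration added here, not part of the official tooling, and it assumes a memory size exactly on a range boundary falls into the lower range:

```python
def proposed_data_throughput(memory_tb: float) -> int:
    """Proposed throughput (MiB/s) for the HANA data volume, per the table above."""
    ranges = [(1, 400), (2, 600), (4, 800), (6, 1000), (8, 1200), (10, 1400)]
    for upper_bound, throughput in ranges:
        if memory_tb <= upper_bound:
            return throughput
    return 1500  # above 10 TB the proposal is capped at 1500 MiB/s

def proposed_log_throughput(memory_tb: float) -> int:
    """Proposed throughput (MiB/s) for the HANA log volume, per the table above."""
    return 250 if memory_tb <= 4 else 500

print(proposed_data_throughput(3))  # 800 MiB/s for a 3 TB database
print(proposed_log_throughput(3))   # 250 MiB/s for a 3 TB database
```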
+ ## Next steps * [Understand Azure NetApp Files application volume group for SAP HANA](application-volume-group-introduction.md)
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-8020-integration.md
Title: Azure Percept DK 80/20 integration description: Learn more about how Azure Percept DK integrates with the 80/20 railing system. -+ Last updated 03/24/2021
The Azure Percept DK and Audio Accessory were designed to integrate with the [80
## 80/20 features
-The Azure Percept DK carrier board, Azure Percept Vision device, and Azure Percept Audio accessory are manufactured with built-in 80/20 1010 connectors, which allow for endless mounting configurations with 80/20 rails. This integration enables customers and solution builders to more easily extend their proof of concepts to production environments.
+The Azure Percept DK carrier board, Azure Percept Vision device, and Azure Percept Audio accessory are manufactured with integrated 80/20 1010 extrusion connections, which allow for endless mounting configurations with 80/20 rails. This integration enables customers and solution builders to more easily extend their proof of concepts to production environments.
Check out this video for more information on how to use Azure Percept DK with 80/20:
Check out this video for more information on how to use Azure Percept DK with 80
> [!VIDEO https://www.youtube.com/embed/Dg6mtD9psLU] +
+To accelerate your prototype creation, we have also designed a few examples of 80/20 mounting assemblies.
+We have included the technical drawings of these options below so they can be easily ordered and built by
+your local 80/20 distributor: https://8020.net/distributorlookup/
++
+| Design Name | Overall Design | CAD Design 1 | CAD Design 2 |
+|--|--|--|--|
+| Wall Mounts| ![Wall Mount Image](./media/overview-8020-integration-images/wall-mount.png) | [ ![Horizontal Wall Mount Image](./media/overview-8020-integration-images/azure-percept-8020-horizontal-wall-mount-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-horizontal-wall-mount.png#lightbox) | [ ![Vertical Wall Mount Image](./media/overview-8020-integration-images/azure-percept-8020-vertical-wall-mount-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-vertical-wall-mount.png#lightbox)|
+| Ceiling Mounts| ![Ceiling Mount Image](./media/overview-8020-integration-images/ceiling-mount.png) | [ ![Ceiling Mount Small Image](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-small-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-small.png#lightbox) | [ ![Ceiling Mount Large Image](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-large-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-large.png#lightbox) |
+| Arm Mounts | ![Arm Mount Image](./media/overview-8020-integration-images/arm-mount.png) | [ ![Clamp Bracket Image](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket.png#lightbox) | |
++ ## Next steps > [!div class="nextstepaction"]
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) | | Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2021-12-01/domains/list-shared-access-keys) |
-| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/version2021-12-01/topics/list-shared-access-keys) |
+| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/domains/list-shared-access-keys) |
+| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/topics/list-shared-access-keys) |
| Microsoft.EventHub/namespaces/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) |
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Different from the PowerShell deployment script, CLI/bash support doesn't expose
Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid. [jq](https://stedolan.github.io/jq/) is used in the previous sample. It comes with the container images. See [Configure development environment](#configure-development-environment).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 01/03/2022 Last updated : 01/19/2022 # What is Bicep?
Bicep provides the following advantages over other infrastructure-as-code option
- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resources types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services. - **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages. Bicep syntax is declarative and specifies which resources and resource properties you want to deploy.+
+ The following examples show the difference between a Bicep file and the equivalent JSON template. Both examples deploy a storage account.
+
+ # [Bicep](#tab/bicep)
+
+ ```bicep
+ param location string = resourceGroup().location
+ param storageAccountName string = 'toylaunch${uniqueString(resourceGroup().id)}'
+
+ resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ accessTier: 'Hot'
+ }
+ }
+ ```
+
+ # [JSON](#tab/json)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "storageAccountName": {
+ "type": "string",
+ "defaultValue": "[format('toylaunch{0}', uniqueString(resourceGroup().id))]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2",
+ "properties": {
+ "accessTier": "Hot"
+ }
+ }
+ ]
+ }
+ ```
+
+
+ - **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation. - **Modularity**: You can break your Bicep code into manageable parts by using [modules](./modules.md). The module deploys a set of related resources. Modules enable you to reuse code and simplify development. Add the module to a Bicep file anytime you need to deploy those resources. - **Integration with Azure services**: Bicep is integrated with Azure services such as Azure Policy, template specs, and Blueprints.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 01/12/2022 Last updated : 01/19/2022 # Azure Resource Manager template specs in Bicep
-A template spec is a resource type for storing an Azure Resource Manager template (ARM template) or a Bicep file in Azure for later deployment. Bicep files are transpiled into ARM JSON templates before they are stored. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec.
+A template spec is a resource type for storing an Azure Resource Manager template (ARM template) for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec. You can use Azure CLI or Azure PowerShell to create template specs by providing Bicep files. The Bicep files are transpiled into ARM JSON templates before they are stored. Currently, you can't import a Bicep file from the Azure portal to create a template spec resource.
[**Microsoft.Resources/templateSpecs**](/azure/templates/microsoft.resources/templatespecs) is the resource type for template specs. It consists of a main template and any number of linked templates. Azure securely stores template specs in resource groups. Both the main template and the linked templates must be in JSON. Template Specs support [versioning](#versioning).
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) | | Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2021-12-01/domains/list-shared-access-keys) |
-| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/version2021-12-01/topics/list-shared-access-keys) |
+| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/domains/list-shared-access-keys) |
+| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/topics/list-shared-access-keys) |
| Microsoft.EventHub/namespaces/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) |
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
Azure SQL Database provides several methods for creating a copy of an existing [
A database copy is a transactionally consistent snapshot of the source database as of a point in time after the copy request is initiated. You can select the same server or a different server for the copy. Also you can choose to keep the backup redundancy, service tier and compute size of the source database, or use a different backup storage redundancy and/or compute size within the same or a different service tier. After the copy is complete, it becomes a fully functional, independent database. The logins, users, and permissions in the copied database are managed independently from the source database. The copy is created using the geo-replication technology. Once replica seeding is complete, the geo-replication link is automatically terminated. All the requirements for using geo-replication apply to the database copy operation. See [Active geo-replication overview](active-geo-replication-overview.md) for details.
-> [!NOTE]
-> Azure SQL Database Configurable Backup Storage Redundancy is currently available in public preview in Brazil South and generally available in Southeast Asia Azure region only. In the preview, if the source database is created with locally-redundant or zone-redundant backup storage redundancy, database copy to a server in a different Azure region is not supported.
- ## Database Copy for Azure SQL Hyperscale For Azure SQL Hyperscale the target database determines whether the copy will be a fast copy or a size of data copy.
azure-sql Elastic Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-query-overview.md
Elastic query is included in the cost of Azure SQL Database. Note that topologie
* Running your first elastic query can take up to a few minutes on smaller resources and Standard and General Purpose service tier. This time is necessary to load the elastic query functionality; loading performance improves with higher service tiers and compute sizes. * Scripting of external data sources or external tables from SSMS or SSDT is not yet supported. * Import/Export for SQL Database does not yet support external data sources and external tables. If you need to use Import/Export, drop these objects before exporting and then re-create them after importing.
-* Elastic query currently only supports read-only access to external tables. You can, however, use full T-SQL functionality on the database where the external table is defined. This can be useful to, e.g., persist temporary results using, for example, SELECT <column_list> INTO <local_table>, or to define stored procedures on the elastic query database that refer to external tables.
+* Elastic query currently only supports read-only access to external tables. You can, however, use full Transact-SQL functionality on the database where the external table is defined. This can be useful, for example, to persist temporary results using SELECT <column_list> INTO <local_table>, or to define stored procedures on the elastic query database that refer to external tables.
* Except for nvarchar(max), LOB types (including spatial types) are not supported in external table definitions. As a workaround, you can create a view on the remote database that casts the LOB type into nvarchar(max), define your external table over the view instead of the base table, and then cast it back into the original LOB type in your queries. * Columns of nvarchar(max) data type in the result set disable the advanced batching techniques used in the Elastic Query implementation and may degrade query performance by an order of magnitude, or even two orders of magnitude, in non-canonical use cases where a large amount of non-aggregated data is transferred as a result of the query. * Column statistics over external tables are currently not supported. Table statistics are supported, but need to be created manually.
+* Cursors are not supported for external tables in Azure SQL Database.
* Elastic query works with Azure SQL Database only. You cannot use it for querying a SQL Server instance. ## Share your Feedback
azure-sql Elastic Query Vertical Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-query-vertical-partitioning.md
You can use regular SQL Server connection strings to connect your BI and data in
## Next steps * For an overview of elastic query, see [Elastic query overview](elastic-query-overview.md).
+* For limitations of elastic query, see [Preview limitations](elastic-query-overview.md#preview-limitations)
* For a vertical partitioning tutorial, see [Getting started with cross-database query (vertical partitioning)](elastic-query-getting-started-vertical.md). * For a horizontal partitioning (sharding) tutorial, see [Getting started with elastic query for horizontal partitioning (sharding)](elastic-query-getting-started.md). * For syntax and sample queries for horizontally partitioned data, see [Querying horizontally partitioned data)](elastic-query-horizontal-partitioning.md)
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/link-feature.md
Previously updated : 11/05/2021 Last updated : 01/19/2022 # Link feature for Azure SQL Managed Instance (limited preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
The new link feature in Azure SQL Managed Instance connects your SQL Servers h
After a disastrous event, you can continue running your read-only workloads on SQL Managed Instance in Azure. You can also choose to migrate one or more applications from SQL Server to SQL Managed Instance at the same time, at your own pace, and with the minimum possible downtime compared to other solutions in Azure today.
-> [!NOTE]
-> The link feature is released in limited public preview with support for currently only SQL Server 2019 Enterprise Edition CU13 (or above). [Sign-up now](https://aka.ms/mi-link-signup) to participate in the limited public preview.
+## Sign-up for link
+
+To use the link feature, you will need:
+
+- SQL Server 2019 Enterprise Edition with [CU13 (or above)](https://support.microsoft.com/topic/kb5005679-cumulative-update-13-for-sql-server-2019-5c1be850-460a-4be4-a569-fe11f0adc535) installed on-premises, or on an Azure VM.
+- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or ExpressRoute. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets.
+- Azure SQL Managed Instance provisioned on any service tier.
+
+Use the following link to sign-up for the limited preview of the link feature.
+
+> [!div class="nextstepaction"]
+> [Sign up for link feature preview](https://aka.ms/mi-link-signup)
## Overview
Secure connectivity, such as VPN or Express Route is used between an on-premises
Up to 100 links can exist from the same or various SQL Server sources to a single SQL Managed Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this time. Likewise, a single SQL Server can establish multiple parallel database replication links with several managed instances in different Azure regions, in a 1-to-1 relationship between a database and a managed instance. The feature requires CU13 or higher to be installed on SQL Server 2019.
-## Sign-up for link
-
-To use the link feature, you will need:
--- SQL Server 2019 Enterprise Edition with [CU13 (or above)](https://support.microsoft.com/topic/kb5005679-cumulative-update-13-for-sql-server-2019-5c1be850-460a-4be4-a569-fe11f0adc535) installed on-premises, or on an Azure VM.-- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or Express route. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets. -- Azure SQL Managed Instance provisioned on any service tier.-
-Use the following link to sign-up for the limited preview of the link feature.
-
-> [!div class="nextstepaction"]
-> [Sign up for link feature preview](https://aka.ms/mi-link-signup)
+> [!NOTE]
+> The link feature is released in limited public preview with support for currently only SQL Server 2019 Enterprise Edition CU13 (or above). [Sign-up now](https://aka.ms/mi-link-signup) to participate in the limited public preview.
## Next steps
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| Australia East | Yes | Yes | | Canada Central | Yes | | | Canada East | Yes | |
-| Central US | Yes | |
-| East US | Yes | Yes |
-| East US 2 | Yes | Yes |
+| East US | | Yes |
+| East US 2 | Yes | |
| France Central | | Yes | | Germany West Central | | Yes | | Japan East | Yes | |
backup Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md
Last updated 07/27/2021
# Back up Azure Stack HCI virtual machines with Azure Backup Server This article explains how to back up virtual machines on Azure Stack HCI using Microsoft Azure Backup Server (MABS).
+
+> [!NOTE]
+> This support applies to Azure Stack HCI version 20H2. Backup of virtual machines on Azure Stack HCI version 21H2 is not supported.
## Supported scenarios
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-apis-tools.md
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) | | **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | | **Batch Management Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/management) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
-| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/management) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
+| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
| **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- | ## Batch command-line tools
batch Batch Compute Node Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-compute-node-environment-variables.md
The command lines executed by tasks on compute nodes don't run under a shell. Th
| AZ_BATCH_TASK_ID | The ID of the current task. | All tasks except start task. | task001 | | AZ_BATCH_TASK_SHARED_DIR | A directory path that is identical for the primary task and every subtask of a [multi-instance task](batch-mpi.md). The path exists on every node on which the multi-instance task runs, and is read/write accessible to the task commands running on that node (both the [coordination command](batch-mpi.md#coordination-command) and the [application command](batch-mpi.md#application-command)). Subtasks or a primary task that execute on other nodes do not have remote access to this directory (it is not a "shared" network directory). | Multi-instance primary and subtasks. | C:\user\tasks\workitems\multiinstancesamplejob\job-1\multiinstancesampletask | | AZ_BATCH_TASK_WORKING_DIR | The full path of the [task working directory](files-and-directories.md) on the node. The currently running task has read/write access to this directory. | All tasks. | C:\user\tasks\workitems\batchjob001\job-1\task001\wd |
-| AZ_BATCH_TASK_WORKING_DIR | The full path of the [task working directory](files-and-directories.md) on the node. The currently running task has read/write access to this directory. | All tasks. | C:\user\tasks\workitems\batchjob001\job-1\task001\wd |
| AZ_BATCH_TASK_RESERVED_EPHEMERAL_DISK_SPACE_BYTES | The current threshold for disk space upon which the VM will be marked as `DiskFull`. | All tasks. | 1000000 | | CCP_NODES | The list of nodes and number of cores per node that are allocated to a [multi-instance task](batch-mpi.md). Nodes and cores are listed in the format `numNodes<space>node1IP<space>node1Cores<space>`<br/>`node2IP<space>node2Cores<space> ...`, where the number of nodes is followed by one or more node IP addresses and the number of cores for each. | Multi-instance primary and subtasks. |`2 10.0.0.4 1 10.0.0.5 1` |
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
cognitive-services Multivariate How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/How-to/multivariate-how-to.md
Next you need to prepare your training data (and inference data with asynchronou
## Train an MVAD model
+In this process, you should upload your data to blob storage and generate a SAS URL that is used for the training dataset.
+
+For training data size, the maximum number of timestamps is `1000000`, and the recommended minimum is `15000` timestamps.
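Before uploading, a quick pre-flight check of these limits can save a failed training call. The following Python sketch is an illustration only (distinct from the service sample that follows); the constant and function names are hypothetical:

```python
MAX_TRAIN_TIMESTAMPS = 1_000_000  # documented maximum for training data
MIN_TRAIN_TIMESTAMPS = 15_000     # documented recommended minimum

def check_training_size(num_timestamps: int) -> None:
    """Validate a training dataset's timestamp count against the documented limits."""
    if num_timestamps > MAX_TRAIN_TIMESTAMPS:
        raise ValueError(f"{num_timestamps} timestamps exceeds the {MAX_TRAIN_TIMESTAMPS} limit")
    if num_timestamps < MIN_TRAIN_TIMESTAMPS:
        print(f"Warning: {num_timestamps} is below the recommended minimum of {MIN_TRAIN_TIMESTAMPS}")

check_training_size(50_000)  # OK: within the documented limits
```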
+ Here is a sample request body and the sample code in Python to train an MVAD model. ```json
You could choose the asynchronous API, or the synchronous API for inference.
| - | - | | More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference immediately and want to detect multivariate anomalies in real time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. |
-To perform asynchronous inference, provide the blob source path to the zip file containing the inference data, the start time, and end time.
+To perform asynchronous inference, provide the blob source path to the zip file containing the inference data, the start time, and the end time. The inference data volume must be at least `1 sliding window` length and at most `20000` timestamps.
This inference is asynchronous, so the results are not returned immediately. Notice that you need to save in a variable the link of the results in the **response header** which contains the `resultId`, so that you may know where to get the results afterwards.
The response contains the result status, variable information, inference paramet
With the synchronous API, you can get inference results point by point in real time, and no need for compressing and uploading task like training and asynchronous inference. Here are some requirements for the synchronous API: * Need to put data in **JSON format** into the API request body. * The inference results are limited to up to 10 data points, which means you could detect **1 to 10 timestamps** with one synchronous API call.
-* Due to payload limitation, the size of inference data in the request body is limited, which support at most `2880` timestamps * `300` variables.
+* Due to the payload limitation, the size of inference data in the request body is limited; it supports at most `2880` timestamps * `300` variables, and at least `1 sliding window` length.
### Request schema
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-overview.md
The following features are supported by the Translator service. Use the links in
## Try the Translator service for free
-First, you'll need a Microsoft account; if you do not one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
+First, you'll need a Microsoft account; if you do not have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
Next, you'll need to have an Azure account. Navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-use-python.md
Title: Use Azure Cosmos DB Table API and Azure Table storage using Python
-description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB Table API by using Python.
+ Title: Use the Azure Tables client library for Python
+description: Store structured data in the cloud using the Azure Tables client library for Python.
ms.devlang: python
-# Get started with Azure Table storage and the Azure Cosmos DB Table API using Python
+# Get started with Azure Tables client library using Python
[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)] [!INCLUDE [storage-selector-table-include](../../../includes/storage-selector-table-include.md)]
You can use the Table storage or the Azure Cosmos DB to store flexible datasets
### About this sample
-This sample shows you how to use the [Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/) in common Azure Table storage scenarios. The name of the SDK indicates it is for use with Azure Cosmos DB, but it works with both Azure Cosmos DB and Azure Tables storage, each service just has a unique endpoint. These scenarios are explored using Python examples that illustrate how to:
+This sample shows you how to use the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) in common Azure Table storage scenarios. The name of the SDK indicates it is for use with Azure Table storage, but it works with both Azure Cosmos DB and Azure Table storage; each service just has a unique endpoint. These scenarios are explored using Python examples that illustrate how to:
* Create and delete tables * Insert and query entities * Modify entities
-While working through the scenarios in this sample, you may want to refer to the [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb).
+While working through the scenarios in this sample, you may want to refer to the [Azure Data Tables SDK for Python API reference](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables).
## Prerequisites You need the following to complete this sample successfully: * [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
-* [Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/). This SDK connects with both Azure Table storage and the Azure Cosmos DB Table API.
+* [Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). This SDK connects with both Azure Table storage and the Azure Cosmos DB Table API.
* [Azure Storage account](../../storage/common/storage-account-create.md) or [Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/). ## Create an Azure service account
You need the following to complete this sample successfully:
[!INCLUDE [cosmos-db-create-tableapi-account](../includes/cosmos-db-create-tableapi-account.md)]
-## Install the Azure Cosmos DB Table SDK for Python
+## Install the Azure Data Tables SDK for Python
-After you've created a Storage account, your next step is to install the [Microsoft Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/). For details on installing the SDK, refer to the [README.rst](https://github.com/Azure/azure-cosmosdb-python/blob/master/azure-cosmosdb-table/README.rst) file in the Cosmos DB Table SDK for Python repository on GitHub.
+After you've created a Storage account, your next step is to install the [Microsoft Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) (for example, by running `pip install azure-data-tables`). For details on installing the SDK, refer to the [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/tables/azure-data-tables/README.md) file in the Data Tables SDK for Python repository on GitHub.
-## Import the TableService and Entity classes
+## Import the TableServiceClient and TableEntity classes
-To work with entities in the Azure Table service in Python, you use the [TableService][py_TableService] and [Entity][py_Entity] classes. Add this code near the top your Python file to import both:
+To work with entities in the Azure Data Tables service in Python, you use the `TableServiceClient` and `TableEntity` classes. Add this code near the top of your Python file to import both:
```python
-from azure.cosmosdb.table.tableservice import TableService
-from azure.cosmosdb.table.models import Entity
+from azure.data.tables import TableServiceClient, TableEntity
``` ## Connect to Azure Table service
+You can either connect to the Azure Storage account or the Azure Cosmos DB Table API account. Get the shared key or connection string based on the type of account you are using.
-To connect to Azure Storage Table service, create a [TableService][py_TableService] object, and pass in your Storage account name and account key. Replace `myaccount` and `mykey` with your account name and key.
+### Creating the Table service client from a shared key
+
+Create a `TableServiceClient` object by passing in your account name, account key, and table endpoint. Replace `myaccount`, `mykey`, and `mytableendpoint` with the values for your Cosmos DB or Storage account.
```python
-table_service = TableService(account_name='myaccount', account_key='mykey')
+from azure.core.credentials import AzureNamedKeyCredential
+
+credential = AzureNamedKeyCredential("myaccount", "mykey")
+table_service = TableServiceClient(endpoint="mytableendpoint", credential=credential)
```
-## Connect to Azure Cosmos DB
+### Creating the Table service client from a connection string
-To connect to Azure Cosmos DB, copy your primary connection string from the Azure portal, and create a [TableService][py_TableService] object using your copied connection string:
+Copy your Cosmos DB or Storage account connection string from the Azure portal, and create a `TableServiceClient` object using your copied connection string:
```python
-table_service = TableService(connection_string='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;TableEndpoint=myendpoint;')
+table_service = TableServiceClient.from_connection_string(conn_str='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;TableEndpoint=mytableendpoint;')
``` ## Create a table
-Call [create_table][py_create_table] to create the table.
+Call `create_table` to create the table.
```python table_service.create_table('tasktable')
table_service.create_table('tasktable')
## Add an entity to a table
-To add an entity, you first create an object that represents your entity, then pass the object to the [TableService.insert_entity method][py_TableService]. The entity object can be a dictionary or an object of type [Entity][py_Entity], and defines your entity's property names and values. Every entity must include the required [PartitionKey and RowKey](#partitionkey-and-rowkey) properties, in addition to any other properties you define for the entity.
+Create a table in your account and get a `TableClient` to perform operations on the newly created table. To add an entity, you first create an object that represents your entity, then pass the object to the `TableClient.create_entity` method. The entity object can be a dictionary or an object of type `TableEntity`, and defines your entity's property names and values. Every entity must include the required [PartitionKey and RowKey](#partitionkey-and-rowkey) properties, in addition to any other properties you define for the entity.
-This example creates a dictionary object representing an entity, then passes it to the [insert_entity][py_insert_entity] method to add it to the table:
+This example creates a dictionary object representing an entity, then passes it to the `create_entity` method to add it to the table:
```python
-task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
- 'description': 'Take out the trash', 'priority': 200}
-table_service.insert_entity('tasktable', task)
+table_client = table_service.get_table_client(table_name="tasktable")
+task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
+ u'description': u'Take out the trash', u'priority': 200}
+table_client.create_entity(entity=task)
```
-This example creates an [Entity][py_Entity] object, then passes it to the [insert_entity][py_insert_entity] method to add it to the table:
+This example creates a `TableEntity` object, then passes it to the `create_entity` method to add it to the table:
```python
-task = Entity()
-task.PartitionKey = 'tasksSeattle'
-task.RowKey = '002'
-task.description = 'Wash the car'
-task.priority = 100
-table_service.insert_entity('tasktable', task)
+task = TableEntity()
+task[u'PartitionKey'] = u'tasksSeattle'
+task[u'RowKey'] = u'002'
+task[u'description'] = u'Wash the car'
+task[u'priority'] = 100
+table_client.create_entity(task)
``` ### PartitionKey and RowKey
The Table service uses **PartitionKey** to intelligently distribute table entiti
## Update an entity
-To update all of an entity's property values, call the [update_entity][py_update_entity] method. This example shows how to replace an existing entity with an updated version:
+To update all of an entity's property values, call the `update_entity` method. This example shows how to replace an existing entity with an updated version:
```python
-task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
- 'description': 'Take out the garbage', 'priority': 250}
-table_service.update_entity('tasktable', task)
+from azure.data.tables import UpdateMode
+
+task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
+ u'description': u'Take out the garbage', u'priority': 250}
+table_client.update_entity(entity=task, mode=UpdateMode.REPLACE)
```
-If the entity that is being updated doesn't already exist, then the update operation will fail. If you want to store an entity whether it exists or not, use [insert_or_replace_entity][py_insert_or_replace_entity]. In the following example, the first call will replace the existing entity. The second call will insert a new entity, since no entity with the specified PartitionKey and RowKey exists in the table.
+If the entity that is being updated doesn't already exist, then the update operation will fail. If you want to store an entity whether it exists or not, use `upsert_entity`. In the following example, the first call will replace the existing entity. The second call will insert a new entity, since no entity with the specified PartitionKey and RowKey exists in the table.
```python # Replace the entity created earlier
-task = {'PartitionKey': 'tasksSeattle', 'RowKey': '001',
- 'description': 'Take out the garbage again', 'priority': 250}
-table_service.insert_or_replace_entity('tasktable', task)
+task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
+ u'description': u'Take out the garbage again', u'priority': 250}
+table_client.upsert_entity(task)
# Insert a new entity
-task = {'PartitionKey': 'tasksSeattle', 'RowKey': '003',
- 'description': 'Buy detergent', 'priority': 300}
-table_service.insert_or_replace_entity('tasktable', task)
+task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'003',
+ u'description': u'Buy detergent', u'priority': 300}
+table_client.upsert_entity(task)
``` > [!TIP]
-> The [update_entity][py_update_entity] method replaces all properties and values of an existing entity, which you can also use to remove properties from an existing entity. You can use the [merge_entity][py_merge_entity] method to update an existing entity with new or modified property values without completely replacing the entity.
+> The **mode=UpdateMode.REPLACE** parameter of the `update_entity` method replaces all properties and values of an existing entity, which you can also use to remove properties from an existing entity. The default, **mode=UpdateMode.MERGE**, updates an existing entity with new or modified property values without completely replacing the entity.
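+
+The following is a minimal sketch (not part of the original sample) that contrasts the two modes. It assumes the `table_client` object from the preceding examples:
+
+```python
+from azure.data.tables import UpdateMode
+
+# MERGE (the default): only the properties passed here are changed; any other
+# properties already stored on the entity are left untouched.
+table_client.update_entity(
+    entity={u'PartitionKey': u'tasksSeattle', u'RowKey': u'001', u'priority': 275},
+    mode=UpdateMode.MERGE)
+
+# REPLACE: the stored entity is replaced entirely, so any property missing from
+# this dictionary is removed from the entity.
+table_client.update_entity(
+    entity={u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
+            u'description': u'Take out the garbage', u'priority': 275},
+    mode=UpdateMode.REPLACE)
+```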
## Modify multiple entities
-To ensure the atomic processing of a request by the Table service, you can submit multiple operations together in a batch. First, use the [TableBatch][py_TableBatch] class to add multiple operations to a single batch. Next, call [TableService][py_TableService].[commit_batch][py_commit_batch] to submit the operations in an atomic operation. All entities to be modified in batch must be in the same partition.
+To ensure the atomic processing of a request by the Table service, you can submit multiple operations together in a batch. First, add the operations to a list. Next, call `TableClient.submit_transaction` to submit the operations as a single atomic operation. All entities modified in a batch must be in the same partition.
This example adds two entities together in a batch: ```python
-from azure.cosmosdb.table.tablebatch import TableBatch
-batch = TableBatch()
-task004 = {'PartitionKey': 'tasksSeattle', 'RowKey': '004',
- 'description': 'Go grocery shopping', 'priority': 400}
-task005 = {'PartitionKey': 'tasksSeattle', 'RowKey': '005',
- 'description': 'Clean the bathroom', 'priority': 100}
-batch.insert_entity(task004)
-batch.insert_entity(task005)
-table_service.commit_batch('tasktable', batch)
-```
-
-Batches can also be used with the context manager syntax:
-
-```python
-task006 = {'PartitionKey': 'tasksSeattle', 'RowKey': '006',
- 'description': 'Go grocery shopping', 'priority': 400}
-task007 = {'PartitionKey': 'tasksSeattle', 'RowKey': '007',
- 'description': 'Clean the bathroom', 'priority': 100}
-
-with table_service.batch('tasktable') as batch:
- batch.insert_entity(task006)
- batch.insert_entity(task007)
+task004 = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'004',
+ u'description': u'Go grocery shopping', u'priority': 400}
+task005 = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'005',
+ u'description': u'Clean the bathroom', u'priority': 100}
+operations = [("create", task004), ("create", task005)]
+table_client.submit_transaction(operations)
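+
+# Optional sketch (an assumption, not part of the original sample): if any
+# operation in the batch fails, the whole transaction is rejected and the SDK
+# raises TableTransactionError. You could guard the call above like this:
+#
+#   from azure.data.tables import TableTransactionError
+#   try:
+#       table_client.submit_transaction(operations)
+#   except TableTransactionError as error:
+#       print("Transaction failed:", error)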
``` ## Query for an entity
-To query for an entity in a table, pass its PartitionKey and RowKey to the [TableService][py_TableService].[get_entity][py_get_entity] method.
+To query for an entity in a table, pass its PartitionKey and RowKey to the `TableClient.get_entity` method.
```python
-task = table_service.get_entity('tasktable', 'tasksSeattle', '001')
-print(task.description)
-print(task.priority)
+task = table_client.get_entity('tasksSeattle', '001')
+print(task['description'])
+print(task['priority'])
``` ## Query a set of entities
-You can query for a set of entities by supplying a filter string with the **filter** parameter. This example finds all tasks in Seattle by applying a filter on PartitionKey:
+You can query for a set of entities by supplying a filter string with the **query_filter** parameter. This example finds all tasks in Seattle by applying a filter on PartitionKey:
```python
-tasks = table_service.query_entities(
- 'tasktable', filter="PartitionKey eq 'tasksSeattle'")
+tasks = table_client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'")
for task in tasks:
- print(task.description)
- print(task.priority)
+ print(task['description'])
+ print(task['priority'])
``` ## Query a subset of entity properties
The query in the following code returns only the descriptions of entities in the
> The following snippet works only against the Azure Storage. It is not supported by the Storage Emulator. ```python
-tasks = table_service.query_entities(
- 'tasktable', filter="PartitionKey eq 'tasksSeattle'", select='description')
+tasks = table_client.query_entities(
+ query_filter="PartitionKey eq 'tasksSeattle'", select='description')
for task in tasks:
- print(task.description)
+ print(task['description'])
``` ## Query for an entity without partition and row keys
-You can also query for entities within a table without using the partition and row keys. Use the `table_service.query_entities` method without the "filter" and "select" parameters as show in the following example:
+You can also list entities within a table without using the partition and row keys. Use the `table_client.list_entities` method as shown in the following example:
```python print("Get the first item from the table")
-tasks = table_service.query_entities(
- 'tasktable')
+tasks = table_client.list_entities()
lst = list(tasks) print(lst[0]) ``` ## Delete an entity
-Delete an entity by passing its **PartitionKey** and **RowKey** to the [delete_entity][py_delete_entity] method.
+Delete an entity by passing its **PartitionKey** and **RowKey** to the `delete_entity` method.
```python
-table_service.delete_entity('tasktable', 'tasksSeattle', '001')
+table_client.delete_entity('tasksSeattle', '001')
``` ## Delete a table
-If you no longer need a table or any of the entities within it, call the [delete_table][py_delete_table] method to permanently delete the table from Azure Storage.
+If you no longer need a table or any of the entities within it, call the `delete_table` method to permanently delete the table from Azure Storage.
```python table_service.delete_table('tasktable')
table_service.delete_table('tasktable')
## Next steps * [FAQ - Develop with the Table API](table-api-faq.yml)
-* [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb)
+* [Azure Data Tables SDK for Python API reference](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables)
* [Python Developer Center](https://azure.microsoft.com/develop/python/) * [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md): A free, cross-platform application for working visually with Azure Storage data on Windows, macOS, and Linux. * [Working with Python in Visual Studio (Windows)](/visualstudio/python/overview-of-python-tools-for-visual-studio) --
-[py_commit_batch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_create_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_delete_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_get_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_insert_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_insert_or_replace_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_Entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.models.entity
-[py_merge_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_update_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_delete_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_TableService]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_TableBatch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 12/15/2021 Last updated : 01/18/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/policy-reference.md
na Previously updated : 12/15/2021 Last updated : 01/18/2022
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
You can create a custom image from a provisioned VM, and afterwards use that cus
:::image type="content" source="./media/devtest-lab-create-template/custom-image-available-as-base.png" alt-text="custom image available in list of base images":::
-## Related blog posts
--- [Custom images or formulas?](./devtest-lab-faq.yml#blog-post)-- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs) ## Next steps -- [Add a VM to your lab](devtest-lab-add-vm.md)
+- [Add a VM to your lab](devtest-lab-add-vm.md)
devtest-labs Devtest Lab Developer Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-developer-lab.md
In this article, you learn about various Azure DevTest Labs features that can be
| | | | [Configure Azure Marketplace images](devtest-lab-configure-marketplace-images.md) |Learn how you can allow Azure Marketplace images, making available for selection only the images you want for the developers.| | [Create a custom image](devtest-lab-create-template.md) |Create a custom image by pre-installing the software you need so that developers can quickly create a VM using the custom image.|
- | [Learn about image factory](./devtest-lab-faq.yml#blog-post) |Watch a video that describes how to set up and use an image factory.|
3. **Create reusable templates for developer machines**
devtest-labs Devtest Lab Enable Licensed Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-enable-licensed-images.md
The first step to allowing users to create VMs from a licensed image is to make
- **Terms review needed:** the licensed image is not currently available to users. The terms and conditions of the license must be accepted before lab users can use it to create VMs. ## Making a licensed image available to lab users
-To make sure a licensed image is available to lab users, a lab owner with admin permissions must first accept the terms and conditions for that licensed image. Enabling programmatic deployment for the subscription associated with a licensed image automatically accepts the legal terms and privacy statements for that image. [Working with Marketplace Images on Azure Resource Manager](https://azure.microsoft.com/blog/working-with-marketplace-images-on-azure-resource-manager/) provides additional information about programmatic deployment of marketplace images.
+To make sure a licensed image is available to lab users, a lab owner with admin permissions must first accept the terms and conditions for that licensed image. Enabling programmatic deployment for the subscription associated with a licensed image automatically accepts the legal terms and privacy statements for that image.
You can enable programmatic deployment for a licensed image by following these steps:
You can enable programmatic deployment for a licensed image by following these s
> [!NOTE] > Users can create a custom image from a licensed image. See [Create a custom image from a VHD file](devtest-lab-create-template.md) for more information.
->
->
-## Related blog posts
--- [Custom images or formulas?](./devtest-lab-faq.yml#blog-post)-- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs)- ## Next steps - [Create a custom image from a VM](devtest-lab-create-custom-image-from-vm-using-portal.md) - [Create a custom image from a VHD file](devtest-lab-create-template.md)-- [Add a VM to your lab](devtest-lab-add-vm.md)
+- [Add a VM to your lab](devtest-lab-add-vm.md)
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
Another factor is the frequency of changes to your software package. If you run
## Use custom organizational images
-### Question
-How can I set up an easily repeatable process to bring my custom organizational images into a DevTest Labs environment?
-
-### Answer
-See [this video on Image Factory pattern](./devtest-lab-faq.yml#blog-post). This scenario is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.
-
-Using DevTest Labs to create a custom image pipeline in Azure Pipelines:
--- [Introduction: Get VMs ready in minutes by setting up an image factory in Azure DevTest Labs](./devtest-lab-faq.yml#blog-post)-- [Image Factory ΓÇô Part 2! Setup Azure Pipelines and Factory Lab to Create VMs](./devtest-lab-faq.yml#blog-post)-- [Image Factory ΓÇô Part 3: Save Custom Images and Distribute to Multiple Labs](./devtest-lab-faq.yml#blog-post)-- [Video: Custom Image Factory with Azure DevTest Labs](./devtest-lab-faq.yml#blog-post)
+This is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.
## Patterns to set up network configuration
This scenario may not be useful if you're using DevTest Labs to host development
The number of virtual machines per lab or per user option only limits the number of machines natively created in the lab itself. This option doesn't limit creation by any environments with Resource Manager templates. ## Next steps
-See [Use environments in DevTest Labs](devtest-lab-test-env.md).
+See [Use environments in DevTest Labs](devtest-lab-test-env.md).
devtest-labs Devtest Lab Test Env https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-test-env.md
In this article, you learn about various Azure DevTest Labs features used to mee
| | | | [Configure Azure Marketplace images](devtest-lab-configure-marketplace-images.md) |Learn how you can allow Azure Marketplace images, making available for selection only the images you want for the testers.| | [Create a custom image](devtest-lab-create-template.md) |Create a custom image by pre-installing the software you need so that testers can quickly create a VM using the custom image.|
- | [Learn about image factory](./devtest-lab-faq.yml#blog-post) |Watch a video that describes how to set up and use an image factory.|
3. **Create reusable templates for test machines**
devtest-labs Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/lab-services-overview.md
Last updated 11/15/2021
You can use two different Azure services to set up lab environments in the cloud: -- [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) provides development or test cloud environments for your team.
+- [Azure DevTest Labs](devtest-lab-overview.md) provides development or test cloud environments for your team.
In DevTest Labs, a lab owner [creates a lab](devtest-lab-create-lab.md) and makes it available to lab users. The owner provisions the lab with Windows or Linux virtual machines (VMs) that have all necessary software and tools. Lab users connect to lab VMs for daily work and short-term projects. Lab administrators can analyze resource usage and costs across multiple labs, and set overarching policies to optimize organization or team costs. -- [Azure Lab Services](https://azure.microsoft.com/services/lab-services) provides managed classroom labs.
+- [Azure Lab Services](../lab-services/lab-services-overview.md) provides managed classroom labs.
Lab Services does all infrastructure management, from spinning up VMs and scaling infrastructure to handling errors. After an IT administrator creates a Lab Services lab account, instructors can [create classroom labs](../lab-services/how-to-manage-classroom-labs.md#create-a-classroom-lab) in the account. An instructor specifies the number and type of VMs they need for the class, and adds users to the class. Once users register in the class, they can access the VMs to do class exercises and homework.
Here are some use cases for DevTest Labs:
The following table compares the two types of Azure lab environments:
-| Feature | Lab Services | DevTest Labs |
-| -- | -- | - |
-| Azure infrastructure management. | Service automatically manages. | You manage.  |
-| Infrastructure resiliency. | Service automatically handles. | You handle.  |
-| Subscription management. | Service handles resource allocation in internal subscriptions. | You manage in your own Azure subscription. |
-| Autoscaling. | Service automatically handles. | No autoscaling. |
-| Azure Resource Manager deployments. | Not available. | Available. |
+| Feature | Azure Lab Services | Azure DevTest Labs |
+| -- | -- | -- |
+| Management of Azure infrastructure | The service manages the infrastructure automatically. | You manage the infrastructure manually. |
+| Built-in resiliency | The service handles resiliency automatically. | You handle resiliency manually. |
+| Subscription management | The service handles allocation of resources within Microsoft subscriptions that back the service. | You manage resources within your own Azure subscription. |
+| Autoscaling | The service scales automatically. | No autoscaling. |
+| Azure Resource Manager deployment within the lab | Not available. | Available. |
-## Next steps
-
-See the following articles:
--- [About Lab Services](../lab-services/lab-services-overview.md)-- [About DevTest Labs](devtest-lab-overview.md)
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples description: This article provides a list of Azure CLI scripting samples that help you manage labs in Azure Lab Services.-+ Last updated 06/26/2020
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/samples-powershell.md
Title: Azure PowerShell Samples description: Azure PowerShell Samples - Scripts to help you manage labs in Azure Lab Services-+ Last updated 06/26/2020
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
When represented as a JSON object, a digital twin will display the following fie
| | | | `$dtId` | A user-provided string representing the ID of the digital twin | | `$etag` | Standard HTTP field assigned by the web server |
-| `$conformance` | An enum containing the conformance status of this digital twin (*conformant*, *non-conformant*, *unknown*) |
+| `$metadata.$model` | The ID of the model interface that characterizes this digital twin |
+| `$metadata.<property-name>` | Other metadata information about properties of the digital twin |
| `<property-name>` | The value of a property in JSON (`string`, number type, or object) | | `$relationships` | The URL of the path to the relationships collection. This field is absent if the digital twin has no outgoing relationship edges. |
-| `$metadata.$model` | [Optional] The ID of the model interface that characterizes this digital twin |
-| `$metadata.<property-name>.desiredValue` | [Only for writable properties] The desired value of the specified property |
-| `$metadata.<property-name>.desiredVersion` | [Only for writable properties] The version of the desired value |
-| `$metadata.<property-name>.ackVersion` | The version acknowledged by the device app implementing the digital twin |
-| `$metadata.<property-name>.ackCode` | [Only for writable properties] The `ack` code returned by the device app implementing the digital twin |
-| `$metadata.<property-name>.ackDescription` | [Only for writable properties] The `ack` description returned by the device app implementing the digital twin |
| `<component-name>` | A JSON object containing the component's property values and metadata, similar to those of the root object. This object exists even if the component has no properties. |
-| `<component-name>.<property-name>` | The value of the component's property in JSON (`string`, number type, or object) |
| `<component-name>.$metadata` | The metadata information for the component, similar to the root-level `$metadata` |
+| `<component-name>.<property-name>` | The value of the component's property in JSON (`string`, number type, or object) |
-Here's an example of a digital twin formatted as a JSON object:
+Here's an example of a digital twin formatted as a JSON object. This twin has two properties, Humidity and Temperature, and a component called Thermostat.
```json {
- "$dtId": "Cafe",
- "$etag": "W/\"e59ce8f5-03c0-4356-aea9-249ecbdc07f9\"",
- "Temperature": 72,
- "Location": {
- "x": 101,
- "y": 33
- },
- "component": {
- "TableOccupancy": 1,
+ "$dtId": "myRoomID",
+ "$etag": "W/\"8e6d3e89-1166-4a1d-9a99-8accd8fef43f\"",
"$metadata": {
- "TableOccupancy": {
- "desiredValue": 1,
- "desiredVersion": 3,
- "ackVersion": 2,
- "ackCode": 200,
- "ackDescription": "OK"
- }
- }
- },
- "$metadata": {
- "$model": "dtmi:com:contoso:Room;1",
- "Temperature": {
- "desiredValue": 72,
- "desiredVersion": 5,
- "ackVersion": 4,
- "ackCode": 200,
- "ackDescription": "OK"
+ "$model": "dtmi:example:Room23;1",
+ "Humidity": {
+ "lastUpdateTime": "2021-11-30T18:47:53.7648958Z"
+ },
+ "Temperature": {
+ "lastUpdateTime": "2021-11-30T18:47:53.7648958Z"
+ }
},
- "Location": {
- "desiredValue": {
- "x": 101,
- "y": 33,
- },
- "desiredVersion": 8,
- "ackVersion": 8,
- "ackCode": 200,
- "ackDescription": "OK"
+ "Humidity": 55,
+ "Temperature": 35,
+ "Thermostat": {
+ "$metadata": {}
}
- }
}
``` ### Relationship JSON format
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 01/18/2022 Last updated : 01/19/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
For Azure Firewall pricing information, see [Azure Firewall pricing](https://azu
For Azure Firewall SLA information, see [Azure Firewall SLA](https://azure.microsoft.com/support/legal/sla/azure-firewall/).
+## Supported regions
+
+For the supported regions for Azure Firewall, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-firewall).
+ ## What's new To learn what's new with Azure Firewall, see [Azure updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Firewall).
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 12/09/2021 Last updated : 01/19/2022
Under the **Web Categories** tab in **Firewall Policy Settings**, you can reques
:::image type="content" source="media/premium-features/firewall-category-change.png" alt-text="Firewall category report dialog":::
- ## Supported regions
-
-Azure Firewall Premium is supported in the following regions:
--- Australia Central (Public / Australia)-- Australia Central 2 (Public / Australia)-- Australia East (Public / Australia)-- Australia Southeast (Public / Australia)-- Brazil South (Public / Brazil)-- Brazil Southeast (Public / Brazil)-- Canada Central (Public / Canada)-- Canada East (Public / Canada)-- Central India (Public / India)-- Central US (Public / United States)-- Central US EUAP (Public / Canary (US))-- China North 2 (Mooncake / China)-- China East 2 (Mooncake / China)-- East Asia (Public / Asia Pacific)-- East US (Public / United States)-- East US 2 (Public / United States)-- France Central (Public / France)-- France South (Public / France)-- Germany West Central (Public / Germany)-- Japan East (Public / Japan)-- Japan West (Public / Japan)-- Korea Central (Public / Korea)-- Korea South (Public / Korea)-- North Central US (Public / United States)-- North Europe (Public / Europe)-- Norway East (Public / Norway)-- South Africa North (Public / South Africa)-- South Central US (Public / United States)-- South India (Public / India)-- Southeast Asia (Public / Asia Pacific)-- Switzerland North (Public / Switzerland)-- UAE Central (Public / UAE)-- UAE North (Public / UAE)-- UK South (Public / United Kingdom)-- UK West (Public / United Kingdom)-- USGov Arizona (Fairfax / USGov)-- USGov Texas (Fairfax / USGov)-- USGov Virginia (Fairfax / USGov)-- West Central US (Public / United States)-- West Europe (Public / Europe)-- West India (Public / India)-- West US (Public / United States)-- West US 2 (Public / United States)-- West US 3 (Public / United States)
+## Supported regions
+For the supported regions for Azure Firewall, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-firewall).
## Known issues
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/create-front-door-cli.md
+
+ Title: Create an Azure Front Door Standard/Premium with the Azure CLI
+description: Learn how to create an Azure Front Door Standard/Premium (preview) with the Azure CLI. Use the Front Door to protect your web apps against vulnerabilities.
++++ Last updated : 12/30/2021++++
+# Quickstart: Create an Azure Front Door Standard/Premium - Azure CLI
+
+In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using the Azure CLI. You'll create this profile using two Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door Standard/Premium frontend hostname.
+
+> [!NOTE]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+++
+## Create a resource group
+
+For this quickstart, you'll need two resource groups. One in *Central US* and the second in *East US*.
+
+Run [az group create](/cli/azure/group#az_group_create) to create resource groups.
+
+```azurecli
+az group create \
+ --name myRGFDCentral \
+ --location centralus
+
+az group create \
+ --name myRGFDEast \
+ --location eastus
+```
+
+## Create an Azure Front Door profile
+
+Run [az afd profile create](/cli/azure/afd/profile#az_afd_profile_create) to create an Azure Front Door profile.
+
+```azurecli
+az afd profile create \
+ --profile-name contosoafd \
+ --resource-group myRGFDCentral \
+ --sku Premium_AzureFrontDoor \
+ --subscription mysubscription
+```
+
+## Create two instances of a web app
+
+You need two instances of a web application that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic.
+
+If you don't already have a web app, use the following script to set up two example web apps.
+
+### Create app service plans
+
+Before you can create the web apps you'll need two app service plans, one in *Central US* and the second in *East US*.
+
+Run [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create&preserve-view=true) to create your app service plans.
+
+```azurecli
+az appservice plan create \
+ --name myAppServicePlanCentralUS \
+ --resource-group myRGFDCentral
+
+az appservice plan create \
+ --name myAppServicePlanEastUS \
+ --resource-group myRGFDEast
+```
+
+### Create web apps
+
+Run [az webapp create](/cli/azure/webapp#az_webapp_create&preserve-view=true) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique.
+
+Run [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes&preserve-view=true) to see a list of built-in stacks for web apps.
+
+```azurecli
+az webapp create \
+ --name WebAppContoso-001 \
+ --resource-group myRGFDCentral \
+ --plan myAppServicePlanCentralUS \
+ --runtime "DOTNETCORE|2.1"
+
+az webapp create \
+ --name WebAppContoso-002 \
+ --resource-group myRGFDEast \
+ --plan myAppServicePlanEastUS \
+ --runtime "DOTNETCORE|2.1"
+```
+
+Make note of the default host name of each web app so you can define the backend addresses when you deploy the Front Door in the next step.
+
+## Add an endpoint
+
+Run [az afd endpoint create](/cli/azure/afd/endpoint#az_afd_endpoint_create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience.
+
+```azurecli
+az afd endpoint create \
+ --resource-group myRGFDCentral \
+ --endpoint-name contoso-frontend \
+ --profile-name contosoafd \
+ --origin-response-timeout-seconds 60 \
+ --enabled-state Enabled
+```
+
+## Create an origin group
+
+Run [az afd origin-group create](/cli/azure/afd/origin-group#az_afd_origin_group_create) to create an origin group that contains your two web apps.
+
+```azurecli
+az afd origin-group create \
+ --resource-group myRGFDCentral \
+ --origin-group-name og1 \
+ --profile-name contosoafd \
+ --probe-request-type GET \
+ --probe-protocol Http \
+ --probe-interval-in-seconds 120 \
+ --probe-path /test1/azure.txt \
+ --sample-size 4 \
+ --successful-samples-required 3 \
+ --additional-latency-in-milliseconds 50
+```
+
+## Add an origin to the group
+
+Run [az afd origin create](/cli/azure/afd/origin#az_afd_origin_create) to add an origin to your origin group.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFDCentral \
+ --host-name webappcontoso-001.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og1 \
+ --origin-name contoso1 \
+ --origin-host-header webappcontoso-001.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+Repeat this step and add your second origin.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFDCentral \
+ --host-name webappcontoso-002.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og1 \
+ --origin-name contoso2 \
+ --origin-host-header webappcontoso-002.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+## Add a route
+
+Run [az afd route create](/cli/azure/afd/route#az_afd_route_create) to map your frontend endpoint to the origin group. This route forwards requests from the endpoint to *og1*.
+
+```azurecli
+az afd route create \
+ --resource-group myRGFDCentral \
+ --endpoint-name contoso-frontend \
+ --profile-name contosoafd \
+ --route-name route1 \
+ --https-redirect Enabled \
+ --origin-group og1 \
+ --supported-protocols Https \
+ --link-to-default-domain Enabled \
+ --forwarding-protocol MatchRequest
+```
+
+## Create a new security policy
+
+### Create a WAF policy
+
+Run [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az_network_front_door_waf_policy_create) to create a WAF policy for one of your resource groups.
+
+Create a new WAF policy for your Front Door. This example creates a policy that's enabled and in prevention mode.
+
+```azurecli
+az network front-door waf-policy create \
+ --name contosoWAF \
+ --resource-group myRGFDCentral \
+ --sku Premium_AzureFrontDoor \
+ --disabled false \
+ --mode Prevention
+```
+
+> [!NOTE]
+> If you select `Detection` mode, your WAF doesn't block any requests.
+
+### Create the security policy
+
+Run [az afd security-policy create](/cli/azure/afd/security-policy#az_afd_security_policy_create) to apply your WAF policy to the endpoint's default domain.
+
+```azurecli
+az afd security-policy create \
+ --resource-group myRGFDCentral \
+ --profile-name contosoafd \
+ --security-policy-name contososecurity \
+ --domains /subscriptions/mysubscription/resourcegroups/myRGFDCentral/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contoso-frontend.z01.azurefd.net \
+ --waf-policy /subscriptions/mysubscription/resourcegroups/myRGFDCentral/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF
+```
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, go to `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
+
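+If you'd rather check from a script than a browser, the following is a minimal sketch using Python's `requests` library (an assumption; it isn't part of this quickstart). Replace the URL with your endpoint's default domain:
+
+```python
+import requests
+
+# Hypothetical endpoint hostname; use the default domain shown for your endpoint.
+url = "https://contoso-frontend.z01.azurefd.net"
+
+response = requests.get(url, timeout=30)
+print(response.status_code)                  # expect 200 once deployment completes
+print(response.headers.get("x-azure-ref"))   # reference header added by Azure Front Door
+```
+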
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
+
+2. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-001** in this example.
+
+3. Select your web app, and then select **Stop**, and **Yes** to verify.
+
+4. Refresh your browser. You should see the same information page.
+
+ >[!TIP]
+ >There is a little bit of delay for these actions. You might need to refresh again.
+
+5. Find the other web app, and stop it as well.
+
+6. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="../media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+
+## Clean up resources
+
+When you don't need the resources for the Front Door, delete both resource groups. Deleting the resource groups also deletes the Front Door and all its related resources.
+
+Run [az group delete](/cli/azure/group#az_group_delete&preserve-view=true):
+
+```azurecli
+az group delete \
+ --name myRGFDCentral
+
+az group delete \
+ --name myRGFDEast
+```
frontdoor Front Door Add Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/front-door-add-rules-cli.md
+
+ Title: Add delivery rules to Azure Front Door with the Azure CLI
+description: Learn how to create an Azure Front Door Standard/Premium (Preview) with the Azure CLI. Then, add delivery rules to enhance control over your web app behavior.
++++ Last updated : 12/30/2021++++
+# Tutorial: Add and customize delivery rules for Azure Front Door Standard/Premium (Preview) with Azure CLI
+
+Azure Front Door Standard/Premium (Preview) is a fast and secure modern cloud CDN. Azure Front Door uses the Microsoft global edge network and integrates with intelligent threat protection. Azure Front Door Standard focuses on content delivery. Azure Front Door Premium adds extensive security capabilities and customization. This tutorial focuses on creating an Azure Front Door profile, then adding delivery rules for more granular control over your web app behaviors.
+
+> [!NOTE]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> - Create an Azure Front Door profile.
+> - Create two instances of a web app.
+> - Create a new security policy.
+> - Verify connectivity to your web apps.
+> - Create a rule set.
+> - Create a rule and add it to the rule set.
+> - Add actions or conditions to your rules.
+++
+## Create an Azure Front Door
+
+### Create a resource group
+
+For this quickstart, you'll need two resource groups. One in *Central US* and the second in *East US*.
+
+Run [az group create](/cli/azure/group#az_group_create) to create resource groups.
+
+```azurecli
+az group create \
+ --name myRGFDCentral \
+ --location centralus
+
+az group create \
+ --name myRGFDEast \
+ --location eastus
+```
+
+### Create an Azure Front Door profile
+
+Run [az afd profile create](/cli/azure/afd/profile#az_afd_profile_create) to create an Azure Front Door profile.
+
+```azurecli
+az afd profile create \
+ --profile-name contosoafd \
+ --resource-group myRGFDCentral \
+ --sku Premium_AzureFrontDoor \
+ --subscription mysubscription
+```
+
+### Create two instances of a web app
+
+You need two instances of a web application that run in different Azure regions for this tutorial. Both the web application instances run in Active/Active mode, so either one can service traffic.
+
+If you don't already have a web app, use the following script to set up two example web apps.
+
+#### Create app service plans
+
+Before you can create the web apps you'll need two app service plans, one in *Central US* and the second in *East US*.
+
+Run [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create&preserve-view=true) to create your app service plans.
+
+```azurecli
+az appservice plan create \
+ --name myAppServicePlanCentralUS \
+ --resource-group myRGFDCentral
+
+az appservice plan create \
+ --name myAppServicePlanEastUS \
+ --resource-group myRGFDEast
+```
+
+#### Create web apps
+
+Run [az webapp create](/cli/azure/webapp#az_webapp_create&preserve-view=true) to create a web app in each of the app service plans in the previous step. Web app names have to be globally unique.
+
+Run [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes&preserve-view=true) to see a list of built-in stacks for web apps.
+
+```azurecli
+az webapp create \
+ --name WebAppContoso-001 \
+ --resource-group myRGFDCentral \
+ --plan myAppServicePlanCentralUS \
+ --runtime "DOTNETCORE|2.1"
+
+az webapp create \
+ --name WebAppContoso-002 \
+ --resource-group myRGFDEast \
+ --plan myAppServicePlanEastUS \
+ --runtime "DOTNETCORE|2.1"
+```
+
+Make note of the default host name of each web app so you can define the backend addresses when you deploy the Front Door in the next step.
+
+### Add an endpoint
+
+Run [az afd endpoint create](/cli/azure/afd/endpoint#az_afd_endpoint_create) to create an endpoint in your profile. You can create multiple endpoints in your profile after finishing the create experience.
+
+```azurecli
+az afd endpoint create \
+ --resource-group myRGFDCentral \
+ --endpoint-name contoso-frontend \
+ --profile-name contosoafd \
+ --origin-response-timeout-seconds 60 \
+ --enabled-state Enabled
+```
+
+### Create an origin group
+
+Run [az afd origin-group create](/cli/azure/afd/origin-group#az_afd_origin_group_create) to create an origin group that contains your two web apps.
+
+```azurecli
+az afd origin-group create \
+ --resource-group myRGFDCentral \
+ --origin-group-name og1 \
+ --profile-name contosoafd \
+ --probe-request-type GET \
+ --probe-protocol Http \
+ --probe-interval-in-seconds 120 \
+ --probe-path /test1/azure.txt \
+ --sample-size 4 \
+ --successful-samples-required 3 \
+ --additional-latency-in-milliseconds 50
+```
+
+#### Add origins to the group
+
+Run [az afd origin create](/cli/azure/afd/origin#az_afd_origin_create) to add an origin to your origin group.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFDCentral \
+ --host-name webappcontoso-001.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og1 \
+ --origin-name contoso1 \
+ --origin-host-header webappcontoso-001.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+Repeat this step and add your second origin.
+
+```azurecli
+az afd origin create \
+ --resource-group myRGFDCentral \
+ --host-name webappcontoso-002.azurewebsites.net \
+ --profile-name contosoafd \
+ --origin-group-name og1 \
+ --origin-name contoso2 \
+ --origin-host-header webappcontoso-002.azurewebsites.net \
+ --priority 1 \
+ --weight 1000 \
+ --enabled-state Enabled \
+ --http-port 80 \
+ --https-port 443
+```
+
+### Add a route
+
+Run [az afd route create](/cli/azure/afd/route#az_afd_route_create) to map your frontend endpoint to the origin group. This route forwards requests from the endpoint to *og1*.
+
+```azurecli
+az afd route create \
+ --resource-group myRGFDCentral \
+ --endpoint-name contoso-frontend \
+ --profile-name contosoafd \
+ --route-name route1 \
+ --https-redirect Enabled \
+ --origin-group og1 \
+ --supported-protocols Https \
+ --link-to-default-domain Enabled \
+ --forwarding-protocol MatchRequest
+```
+
+## Create a new security policy
+
+### Create a WAF policy
+
+Run [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az_network_front_door_waf_policy_create) to create a WAF policy for one of your resource groups.
+
+Create a new WAF policy for your Front Door. This example creates a policy that's enabled and in prevention mode.
+
+```azurecli
+az network front-door waf-policy create \
+ --name contosoWAF \
+ --resource-group myRGFDCentral \
+ --sku Premium_AzureFrontDoor \
+ --disabled false \
+ --mode Prevention
+```
+
+> [!NOTE]
+> If you select `Detection` mode, your WAF doesn't block any requests.
+
+### Create the security policy
+
+Run [az afd security-policy create](/cli/azure/afd/security-policy#az_afd_security_policy_create) to apply your WAF policy to the endpoint's default domain.
+
+```azurecli
+az afd security-policy create \
+ --resource-group myRGFDCentral \
+ --profile-name contosoafd \
+ --security-policy-name contososecurity \
+ --domains /subscriptions/mysubscription/resourcegroups/myRGFDCentral/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contoso-frontend.z01.azurefd.net \
+ --waf-policy /subscriptions/mysubscription/resourcegroups/myRGFDCentral/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF
+```
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, go to `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
+
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
+
+2. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-001** in this example.
+
+3. Select your web app, and then select **Stop**, and **Yes** to verify.
+
+4. Refresh your browser. You should see the same information page.
+
+ >[!TIP]
+ >There is a little bit of delay for these actions. You might need to refresh again.
+
+5. Find the other web app, and stop it as well.
+
+6. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="../media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+
+## Create a rule set
+
+Create a rule set to customize how HTTP requests are handled at the edge. Delivery rules added to the rule set provide more control over your web application behaviors. Run [az afd rule-set create](/cli/azure/afd/rule-set#az_afd_rule_set_create) to create a rule set in your Azure Front Door profile.
+
+```azurecli
+az afd rule-set create \
+ --profile-name contosoafd \
+ --resource-group myRGFDCentral \
+ --rule-set-name contosorules
+```
+
+## Create a delivery rule and add it to your rule set
+
+Create a new delivery rule within your rule set. Run [az afd rule create](/cli/azure/afd/rule#az_afd_rule_create) to create a delivery rule in your rule set. For this example, we'll create a rule for an http to https redirect.
+
+```azurecli
+az afd rule create \
+ --resource-group myRGFDCentral \
+ --rule-set-name contosorules \
+ --profile-name contosoafd \
+ --order 1 \
+ --match-variable RequestScheme \
+ --operator Equal \
+ --match-values HTTP \
+ --rule-name "redirect" \
+ --action-name "UrlRedirect" \
+ --redirect-protocol Https \
+ --redirect-type Moved
+```
+
+## Add an action or condition to your delivery rule
+
+You might find that you need to further customize your new delivery rule. You can add actions or conditions as needed after creation. Run [az afd rule action add](/cli/azure/afd/rule/action#az_afd_rule_action_add) or [az afd rule condition add](/cli/azure/afd/rule/condition#az_afd_rule_condition_add) to update your rule.
+
+### Add an action
+
+```azurecli
+az afd rule action add \
+ --resource-group myRGFDCentral \
+ --rule-set-name contosorules \
+ --profile-name contosoafd \
+ --rule-name redirect \
+ --action-name "CacheExpiration" \
+ --cache-behavior BypassCache
+```
+
+### Add a condition
+
+```azurecli
+az afd rule condition add \
+ --resource-group myRGFDCentral \
+ --rule-set-name contosorules \
+ --profile-name contosoafd \
+ --rule-name redirect \
+ --match-variable RemoteAddress \
+ --operator GeoMatch \
+ --match-values "TH"
+```
+
+## Clean up resources
+
+When you don't need the resources for the Front Door, delete both resource groups. Deleting the resource groups also deletes the Front Door and all its related resources.
+
+Run [az group delete](/cli/azure/group#az_group_delete&preserve-view=true):
+
+```azurecli
+az group delete \
+ --name myRGFDCentral
+
+az group delete \
+ --name myRGFDEast
+```
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 12/15/2021 Last updated : 01/18/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 12/15/2021 Last updated : 01/18/2022
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-azure-edge-hardware-center](../../../../includes/policy/reference/bycat/policies-azure-edge-hardware-center.md)]
+## Azure Purview
++ ## Azure Stack Edge [!INCLUDE [azure-policy-reference-policies-azure-stack-edge](../../../../includes/policy/reference/bycat/policies-azure-stack-edge.md)]
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
The Hive Warehouse Connector allows you to take advantage of the unique features
Apache Hive offers support for database transactions that are Atomic, Consistent, Isolated, and Durable (ACID). For more information on ACID and transactions in Hive, see [Hive Transactions](https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions). Hive also offers detailed security controls through Apache Ranger and Low Latency Analytical Processing (LLAP) not available in Apache Spark.
-Apache Spark, has a Structured Streaming API that gives streaming capabilities not available in Apache Hive. Beginning with HDInsight 4.0, Apache Spark 2.3.1 and Apache Hive 3.1.0 have separate metastores. The separate metastores can make interoperability difficult. The Hive Warehouse Connector makes it easier to use Spark and Hive together. The HWC library loads data from LLAP daemons to Spark executors in parallel. This process makes it more efficient and adaptable than a standard JDBC connection from Spark to Hive.
+Apache Spark has a Structured Streaming API that provides streaming capabilities not available in Apache Hive. Beginning with HDInsight 4.0, Apache Spark 2.3.1 and later and Apache Hive 3.1.0 have separate metastore catalogs, which makes interoperability difficult.
+
+The Hive Warehouse Connector (HWC) makes it easier to use Spark and Hive together. The HWC library loads data from LLAP daemons to Spark executors in parallel. This process makes it more efficient and adaptable than a standard JDBC connection from Spark to Hive. HWC supports two different execution modes:
+> - Hive JDBC mode via HiveServer2
+> - Hive LLAP mode using LLAP daemons **[Recommended]**
+
+By default, HWC is configured to use Hive LLAP daemons.
+To execute Hive queries (both read and write) using these modes and their respective APIs, see [HWC APIs](./hive-warehouse-connector-apis.md).
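For illustration, here's a minimal PySpark sketch of both modes. The table name `sampledb.sampletable` is a placeholder, and the sketch assumes the Spark session was launched with the HWC assembly jar and Python `.zip` on its classpath and search path (as shown in the spark-shell and spark-submit sections later in this article).

```python
from pyspark.sql import SparkSession
from pyspark_llap import HiveWarehouseSession  # shipped with the HWC Python .zip

spark = SparkSession.builder.appName("hwc-modes-sketch").getOrCreate()

# Build an HWC session on top of the existing Spark session.
hive = HiveWarehouseSession.session(spark).build()

# Hive LLAP mode (default): executeQuery() reads through the LLAP daemons in parallel.
llap_df = hive.executeQuery("SELECT * FROM sampledb.sampletable LIMIT 10")
llap_df.show()

# Hive JDBC mode: execute() sends the statement through HiveServer2 over JDBC.
jdbc_df = hive.execute("DESCRIBE sampledb.sampletable")
jdbc_df.show()

spark.stop()
```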
:::image type="content" source="./media/apache-hive-warehouse-connector/hive-warehouse-connector-architecture.png" alt-text="hive warehouse connector architecture" border="true":::
Some of the operations supported by the Hive Warehouse Connector are:
In a scenario where you only have Spark workloads and want to use the HWC library, ensure that the Interactive Query cluster doesn't have the Workload Management feature enabled (the `hive.server2.tez.interactive.queue` configuration is not set in Hive configs). <br> For a scenario where both Spark workloads (HWC) and LLAP native workloads exist, you need to create two separate Interactive Query clusters with a shared metastore database: one cluster for native LLAP workloads, where the WLM feature can be enabled as needed, and another cluster for HWC-only workloads, where the WLM feature shouldn't be configured. Note that you can view the WLM resource plans from both clusters even if the feature is enabled in only one of them. Don't make any changes to resource plans in the cluster where the WLM feature is disabled, because it might affect WLM functionality in the other cluster.
+> - Although Spark supports the R computing language to simplify its data analysis, the Hive Warehouse Connector (HWC) library isn't supported for use with R. To execute HWC workloads, you can run queries from Spark to Hive using the JDBC-style HiveWarehouseSession API, which supports only Scala, Java, and Python.
+> - Executing queries (both read and write) through HiveServer2 via JDBC mode is not supported for complex data types like Arrays/Struct/Map types.
+> - HWC supports writing only in ORC file formats. Non-ORC writes (for example, Parquet and text file formats) are not supported via HWC.
Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. Follow these steps to set up these clusters in Azure HDInsight.
Below are some examples to connect to HWC from Spark.
### Spark-shell
+Spark-shell is a way to run Spark interactively through a modified version of the Scala shell.
+ 1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Apache Spark cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command: ```cmd
Below are some examples to connect to HWC from Spark.
### Spark-submit
+Spark-submit is a utility for submitting any Spark program (or job) to a Spark cluster.
+
+The spark-submit job sets up and configures Spark and the Hive Warehouse Connector according to the options you provide, executes the program you pass to it, and then cleanly releases the resources that were being used.
+ Once you build the Scala/Java code along with the dependencies into an assembly jar, use the following command to launch a Spark application. Replace `<VERSION>` and `<APP_JAR_PATH>` with the actual values. * YARN Client mode
Once you build the scala/java code along with the dependencies into an assembly
/<APP_JAR_PATH>/myHwcAppProject.jar ```
-For Python, add the following configuration as well.
+This utility is also used when the entire application is written in PySpark and packaged into `.py` files, so that you can submit the entire code to the Spark cluster for execution.
+
+For Python applications, pass a `.py` file in place of `/<APP_JAR_PATH>/myHwcAppProject.jar`, and add the following configuration (Python `.zip`) file to the search path with `--py-files`.
```python --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<VERSION>.zip
kinit USERNAME
* [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md) * [HWC integration with Apache Zeppelin](./apache-hive-warehouse-connector-zeppelin.md) * [Examples of interacting with Hive Warehouse Connector using Zeppelin, Livy, spark-submit, and pyspark](https://community.hortonworks.com/articles/223626/integrating-apache-hive-with-apache-spark-hive-war.html)
+* [Submitting Spark Applications via Spark-submit utility](https://spark.apache.org/docs/2.4.0/submitting-applications.html)
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/overview.md
Protect your PHI with unparalleled security intelligence. Your data is isolated
FHIR servers are key tools for interoperability of health data. The FHIR service is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where FHIR service is useful are below: -- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can managing data and exchanging data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it is not uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/configuration.md
Title: Configure Azure HPC Cache settings description: Explains how to configure additional settings for the cache like MTU, custom NTP and DNS configuration, and how to access the express snapshots from Azure Blob storage targets.-+ Last updated 04/08/2021-+ # Configure additional Azure HPC Cache settings
Consider using a test cache to check and refine your DNS setup before you use it
### Refresh storage target DNS
-If your DNS server updates IP addresses, the associated NFS storage targets will become temporarily unavailable. Read how to update your custom DNS system IP addresses in [View and manage storage targets](manage-storage-targets.md#update-ip-address-custom-dns-configurations-only).
+If your DNS server updates IP addresses, the associated NFS storage targets will become temporarily unavailable. Read how to update your custom DNS system IP addresses in [View and manage storage targets](manage-storage-targets.md#update-ip-address-specific-configurations-only).
## View snapshots for blob storage targets
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
Title: Add storage to an Azure HPC Cache description: How to define storage targets so that your Azure HPC Cache can use your on-premises NFS system or Azure Blob containers for long-term file storage -+ Previously updated : 09/22/2021 Last updated : 01/06/2022 -+ # Add storage targets
The number of supported storage targets depends on the cache size, which is set
* Up to 10 storage targets - A standard cache with the smallest or medium cache storage value for your selected throughput can have a maximum of 10 storage targets.
- For example, if you choose 2GB/second throughput and do not choose the highest cache storage size, your cache supports a maximum of 10 storage targets.
+ For example, if you choose 2GB/second throughput and don't choose the highest cache storage size, your cache supports a maximum of 10 storage targets.
* Up to 20 storage targets -
Read [Set cache capacity](hpc-cache-create.md#set-cache-capacity) to learn more
## Choose the correct storage target type
-You can select from three storage target types: **NFS**, **Blob**, and **ADLS-NFS**. Choose the type that matches the kind of storage system you will use to store your files during this HPC Cache project.
+You can select from three storage target types: **NFS**, **Blob**, and **ADLS-NFS**. Choose the type that matches the kind of storage system you'll use to store your files during this HPC Cache project.
* **NFS** - Create an NFS storage target to access data on a network-attached storage (NAS) system. This can be an on-premises storage system or another storage type that's accessible with NFS.
To define an Azure Blob container, enter this information.
* **Target type** - Choose **Blob**. * **Storage account** - Select the account that you want to use.
- You will need to authorize the cache instance to access the storage account as described in [Add the access roles](#add-the-access-control-roles-to-your-account).
+ You'll need to authorize the cache instance to access the storage account as described in [Add the access roles](#add-the-access-control-roles-to-your-account).
For information about the kind of storage account you can use, read [Blob storage requirements](hpc-cache-prerequisites.md#blob-storage-requirements).
Azure HPC Cache uses [Azure role-based access control (Azure RBAC)](../role-base
The storage account owner must explicitly add the roles [Storage Account Contributor](../role-based-access-control/built-in-roles.md#storage-account-contributor) and [Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) for the user "HPC Cache Resource Provider".
-You can do this ahead of time, or by clicking a link on the portal page where you add a Blob storage target. Keep in mind that it can take up to five minutes for the role settings to propagate through the Azure environment, so you should wait a few minutes after adding the roles before creating a storage target.
+You can do this ahead of time, or by clicking a link on the portal page where you add a Blob storage target. Keep in mind that it can take up to five minutes for the role settings to propagate through the Azure environment. Wait a few minutes after adding the roles before creating a storage target.
1. Open **Access control (IAM)** for your storage account. 1. Select **Add** > **Add role assignment** to open the Add role assignment page. 1. Assign the following roles, one at a time. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value | | | | | Roles | [Storage Account Contributor](../role-based-access-control/built-in-roles.md#storage-account-contributor) <br/> [Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) |
Steps to add the Azure roles:
1. Click the **Save** button at the bottom.
-1. Repeat this process to assign the role "Storage Blob Data Contributor".
+1. Repeat this process to assign the role "Storage Blob Data Contributor".
![screenshot of add role assignment GUI](media/hpc-cache-add-role.png) -->
When you create a storage target that uses NFS to reach its storage system, you
Read [Understand usage models](cache-usage-models.md) for more details about all of these settings.
-HPC Cache's built-in usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
+HPC Cache's built-in usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. Alternatively, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
> [!NOTE] > [High-throughput style caches](hpc-cache-create.md#choose-the-cache-type-for-your-needs) support read caching only.
These three options cover most situations:
This option caches files from client reads, but passes client writes through to the back-end storage immediately. Files stored in the cache are not automatically compared to the files on the NFS storage volume.
- Do not use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will be out of sync with the back-end file.
+ Don't use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will be out of sync with the back-end file.
* **Greater than 15% writes** - This option speeds up both read and write performance.
- Client reads and client writes are both cached. Files in the cache are assumed to be newer than files on the back-end storage system. Cached files are only automatically checked against the files on back-end storage every eight hours. Modified files in the cache are written to the back-end storage system after they have been in the cache for 20 minutes with no additional changes.
+ Client reads and client writes are both cached. Files in the cache are assumed to be newer than files on the back-end storage system. Cached files are only automatically checked against the files on back-end storage every eight hours. Modified files in the cache are written to the back-end storage system after they have been in the cache for 20 minutes with no other changes.
Do not use this option if any clients mount the back-end storage volume directly, because there is a risk it will have outdated files.
az hpc-cache nfs-storage-target add --resource-group "hpc-cache-group" --cache-n
``` Output:+ ```azurecli {- Finished ..
ADLS-NFS storage targets have some similarities with Blob storage targets and so
* Like a Blob storage target, you need to give Azure HPC Cache permission to [access your storage account](#add-the-access-control-roles-to-your-account). * Like an NFS storage target, you need to set a cache [usage model](#choose-a-usage-model).
-* Because NFS-enabled blob containers have an NFS-compatible hierarchical structure, you do not need to use the cache to ingest data, and the containers are readable by other NFS systems.
+* Because NFS-enabled blob containers have an NFS-compatible hierarchical structure, you don't need to use the cache to ingest data, and the containers are readable by other NFS systems.
You can pre-load data in an ADLS-NFS container, then add it to an HPC Cache as a storage target, and then access the data later from outside of an HPC Cache. When you use a standard blob container as an HPC Cache storage target, the data is written in a proprietary format and can only be accessed from other Azure HPC Cache-compatible products.
After your storage account is set up you can create a new container when you cre
Read [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md) to learn more about this configuration.
-To create an ADLS-NFS storage target, open the **Add storage target** page in the Azure portal. (Additional methods are in development.)
+To create an ADLS-NFS storage target, open the **Add storage target** page in the Azure portal. (Other methods are in development.)
![Screenshot of add storage target page with ADLS-NFS target defined](media/add-adls-target.png)
Enter this information.
* **Storage target name** - Set a name that identifies this storage target in the Azure HPC Cache. * **Target type** - Choose **ADLS-NFS**.
-* **Storage account** - Select the account that you want to use. If your NFS-enabled storage account does not appear in the list, check that it conforms to the prerequisites and that the cache can access it.
+* **Storage account** - Select the account that you want to use. If your NFS-enabled storage account doesn't appear in the list, check that it conforms to the prerequisites and that the cache can access it.
- You will need to authorize the cache instance to access the storage account as described in [Add the access roles](#add-the-access-control-roles-to-your-account).
+ You'll need to authorize the cache instance to access the storage account as described in [Add the access roles](#add-the-access-control-roles-to-your-account).
* **Storage container** - Select the NFS-enabled blob container for this target, or click **Create new**.
hpc-cache Hpc Cache Edit Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-edit-storage.md
Title: Update Azure HPC Cache storage targets description: How to edit Azure HPC Cache storage targets-+ Previously updated : 06/30/2021- Last updated : 01/10/2022+ # Edit storage targets
You can modify storage targets with the Azure portal or by using the Azure CLI. For example, you can change access policies, usage models, and namespace paths for an existing storage target. > [!TIP]
-> Read [View and manage storage targets](manage-storage-targets.md) to learn how to delete or suspend storage targets, or make them write cached data to back-end storage.
+> Read [View and manage storage targets](manage-storage-targets.md) to learn how to delete or suspend storage targets, make them write cached data to back-end storage, or refresh their DNS-supplied IP addresses.
Depending on the type of storage, you can modify these storage target values:
To change a blob storage target's namespace with the Azure CLI, use the command
For NFS storage targets, you can change or add virtual namespace paths, change the NFS export or subdirectory values that a namespace path points to, and change the usage model.
-Storage targets in caches with some types of custom DNS settings also have a control for refreshing their IP addresses. (This kind of configuration is rare.) Learn how to refresh the DNS settings in [View and manage storage targets](manage-storage-targets.md#update-ip-address-custom-dns-configurations-only).
- Details are below: * [Change aggregated namespace values](#change-aggregated-namespace-values) (virtual namespace path, access policy, export, and export subdirectory)
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-prerequisites.md
description: Prerequisites for using Azure HPC Cache
Previously updated : 11/03/2021 Last updated : 01/13/2022 # Prerequisites for Azure HPC Cache
-Before using the Azure portal to create a new Azure HPC Cache, make sure your environment meets these requirements.
+Before creating a new Azure HPC Cache, make sure your environment meets these requirements.
## Video overviews
Two network-related prerequisites should be set up before you can use your cache
The Azure HPC Cache needs a dedicated subnet with these qualities: * The subnet must have at least 64 IP addresses available.
-* The subnet cannot host any other VMs, even for related services like client machines.
+* The subnet can't host any other VMs, even for related services like client machines.
* If you use multiple Azure HPC Cache instances, each one needs its own subnet. The best practice is to create a new subnet for each cache. You can create a new virtual network and subnet as part of creating the cache. ### DNS access
-The cache needs DNS to access resources outside of its virtual network. Depending on which resources you are using, you might need to set up a customized DNS server and configure forwarding between that server and Azure DNS servers:
+The cache needs DNS to access resources outside of its virtual network. Depending on which resources you're using, you might need to set up a customized DNS server and configure forwarding between that server and Azure DNS servers:
* To access Azure Blob storage endpoints and other internal resources, you need the Azure-based DNS server. * To access on-premises storage, you need to configure a custom DNS server that can resolve your storage hostnames. You must do this before you create the cache.
This example explicitly opens outbound traffic to the IP address 168.61.215.74,
Make sure that the NTP rule has a higher priority than any rules that broadly deny outbound access.
-Additional tips for NTP access:
+More tips for NTP access:
* If you have firewalls between your HPC Cache and the NTP server, make sure these firewalls also allow NTP access.
Check these permission-related prerequisites before starting to create your cach
Follow the instructions in [Add storage targets](hpc-cache-add-storage.md#add-the-access-control-roles-to-your-account) to add the roles. ## Storage infrastructure
-<!-- heading is linked in create storage target GUI as aka.ms/hpc-cache-prereq#storage-infrastructure - make sure to fix that if you change the wording of this heading -->
+<!-- heading is linked in create storage target GUI as aka.ms/hpc-cache-prereq#storage-infrastructure - fix that if you change the wording of this heading -->
The cache supports Azure Blob containers, NFS hardware storage exports, and NFS-mounted ADLS blob containers. Add storage targets after you create the cache.
-The size of your cache determines how many storage targets it can support - up to 10 storage targets for most caches, or up to 20 for the largest sizes. Read [Size your cache correctly to support your storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) for details.
+The size of your cache determines the number of storage targets it can support - up to 10 storage targets for most caches, or up to 20 for the largest sizes. Read [Size your cache correctly to support your storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) for details.
Each storage type has specific prerequisites.
To create a compatible storage account, use one of these combinations:
| Standard | StorageV2 (general purpose v2)| Locally redundant storage (LRS) or Zone-redundant storage (ZRS) | Hot | | Premium | Block blobs | Locally redundant storage (LRS) | Hot |
-The storage account must be accessible from your cache's private subnet. If your account uses a private endpoint or a public endpoint that is restricted to specific virtual networks, make sure to enable access from the cache's subnet. (An open public endpoint is not recommended.)
+The storage account must be accessible from your cache's private subnet. If your account uses a private endpoint or a public endpoint that is restricted to specific virtual networks, make sure to enable access from the cache's subnet. (An open public endpoint is **not** recommended.)
+
+Read [Work with private endpoints](#work-with-private-endpoints) for tips about using private endpoints with HPC Cache storage targets.
It's a good practice to use a storage account in the same Azure region as your cache.
-You also must give the cache application access to your Azure storage account as mentioned in [Permissions](#permissions), above. Follow the procedure in [Add storage targets](hpc-cache-add-storage.md#add-the-access-control-roles-to-your-account) to give the cache the required access roles. If you are not the storage account owner, have the owner do this step.
+You also must give the cache application access to your Azure storage account as mentioned in [Permissions](#permissions), above. Follow the procedure in [Add storage targets](hpc-cache-add-storage.md#add-the-access-control-roles-to-your-account) to give the cache the required access roles. If you're not the storage account owner, have the owner do this step.
### NFS storage requirements <!-- linked from configuration.md and add storage -->
More information is included in [Troubleshoot NAS configuration and NFS storage
* Enable `no_root_squash`. This option ensures that the remote root user can access files owned by root.
- * Check export policies to make sure they do not include restrictions on root access from the cache's subnet.
+ * Check export policies to make sure they don't include restrictions on root access from the cache's subnet.
* If your storage has any exports that are subdirectories of another export, make sure the cache has root access to the lowest segment of the path. Read [Root access on directory paths](troubleshoot-nas.md#allow-root-access-on-directory-paths) in the NFS storage target troubleshooting article for details.
This is a general overview of the steps. These steps might change, so always ref
* Instead of the using the storage account settings for a standard blob storage account, follow the instructions in the [how-to document](../storage/blobs/network-file-system-protocol-support-how-to.md). The type of storage account supported might vary by Azure region. * In the Networking section, choose a private endpoint in the secure virtual network you created (recommended), or choose a public endpoint with restricted access from the secure VNet.
+
+ Read [Work with private endpoints](#work-with-private-endpoints) for tips about using private endpoints with HPC Cache storage targets.
- * Do not forget to complete the Advanced section, where you enable NFS access.
+ * Don't forget to complete the Advanced section, where you enable NFS access.
* Give the cache application access to your Azure storage account as mentioned in [Permissions](#permissions), above. You can do this the first time you create a storage target. Follow the procedure in [Add storage targets](hpc-cache-add-storage.md#add-the-access-control-roles-to-your-account) to give the cache the required access roles.
- If you are not the storage account owner, have the owner do this step.
+ If you aren't the storage account owner, have the owner do this step.
Learn more about using ADLS-NFS storage targets with Azure HPC Cache in [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md).
+### Work with private endpoints
+<!-- linked from other articles, update links if you change this header -->
+
+Azure Storage supports private endpoints to allow secure data access. You can use private endpoints with Azure Blob or NFS-mounted blob storage targets.
+
+[Learn more about private endpoints](../storage/common/storage-private-endpoints.md)
+
+A private endpoint provides a specific IP address that the HPC Cache uses to communicate with your back-end storage system. If that IP address changes, the cache can't automatically re-establish a connection with the storage.
+
+If you need to change a private endpoint's configuration, follow this procedure to avoid communication problems between the storage and the HPC Cache:
+
+ 1. Suspend the storage target (or all of the storage targets that use this private endpoint).
+ 1. Make changes to the private endpoint, and save those changes.
+ 1. Put the storage target back into service with the "resume" command.
+ 1. Refresh the storage target's DNS setting.
+
+ Read [View and manage storage targets](manage-storage-targets.md) to learn how to suspend, resume, and refresh DNS for storage targets.
+ ## Set up Azure CLI access (optional) If you want to create or manage Azure HPC Cache from the Azure CLI, you need to install Azure CLI and the hpc-cache extension. Follow the instructions in [Set up Azure CLI for Azure HPC Cache](az-cli-prerequisites.md).
hpc-cache Manage Storage Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/manage-storage-targets.md
Title: Manage Azure HPC Cache storage targets description: How to suspend, remove, force delete, and flush Azure HPC Cache storage targets, and how to understand the storage target state-+ Previously updated : 09/27/2021- Last updated : 01/06/2022+ # View and manage storage targets
These options are available:
* **Force remove** - Delete a storage target, skipping some safety steps (**Force remove can cause data loss**) * **Delete** - Permanently remove a storage target
-Some storage targets also have a **Refresh DNS** option on this menu, which updates the storage target IP address from a custom DNS server. This configuration is uncommon.
+Some storage targets also have a **Refresh DNS** option on this menu, which updates the storage target IP address from a custom DNS server or from an Azure Storage private endpoint.
Read the rest of this article for more detail about these options. ### Write cached files to the storage target
-The **Flush** option tells the cache to immediately copy any changed files stored in the cache to the back-end storage system. For example, if your client machines are updating a particular file repeatedly, it is held in the cache for quicker access and not written to the long-term storage system for a period ranging from several minutes to more than an hour.
+The **Flush** option tells the cache to immediately copy any changed files stored in the cache to the back-end storage system. For example, if your client machines are updating a particular file repeatedly, it's held in the cache for quicker access and not written to the long-term storage system for a period ranging from several minutes to more than an hour.
The **Flush** action tells the cache to write all files to the storage system.
You can use the Azure portal or the AZ CLI to delete a storage target.
The regular delete option permanently removes the storage target from the HPC Cache, but first it synchronizes the cache contents with the back-end storage system. It's different from the force delete option, which does not synchronize data.
-Deleting a storage target removes the storage system's association with this Azure HPC Cache, but it does not change the back-end storage system. For example, if you used an Azure Blob storage container, the container and its contents still exist after you delete it from the cache. You can add the container to a different Azure HPC Cache, re-add it to this cache, or delete it with the Azure portal.
+Deleting a storage target removes the storage system's association with this Azure HPC Cache, but it doesn't change the back-end storage system. For example, if you used an Azure Blob storage container, the container and its contents still exist after you delete it from the cache. You can add the container to a different Azure HPC Cache, re-add it to this cache, or delete it with the Azure portal.
-If there is a large amount of changed data stored in the cache, deleting a storage target can take several minutes to complete. Wait for the action to finish to be sure that the data is safely stored in your long-term storage system.
+If there's a large amount of changed data stored in the cache, deleting a storage target can take several minutes to complete. Wait for the action to finish to be sure that the data is safely stored in your long-term storage system.
#### [Portal](#tab/azure-portal)
$ az hpc-cache storage-target remove --resource-group cache-rg --cache-name doc-
-### Update IP address (custom DNS configurations only)
+### Update IP address (specific configurations only)
-If your cache uses a non-default DNS configuration, it's possible for your NFS storage target's IP address to change because of back-end DNS changes. If your DNS server changes the back-end storage system's IP address, Azure HPC Cache can lose access to the storage system.
+In some situations, you might need to update your storage target's IP address. This can happen in two scenarios:
-Ideally, you should work with the manager of your cache's custom DNS system to plan for any updates, because these changes make storage unavailable.
+* Your cache uses a custom DNS system instead of the default setup, and the network infrastructure has changed.
-If you need to update a storage target's DNS-provided IP address, use the **Storage targets** page. Click the **...** symbol in the right column to open the context menu. Choose **Refresh DNS** to query the custom DNS server for a new IP address.
+* Your storage target uses a private endpoint to access Azure Blob or NFS-mounted blob storage, and you have updated the endpoint's configuration. (You should suspend storage targets before modifying their private endpoints, as described in the [prerequisites article](hpc-cache-prerequisites.md#work-with-private-endpoints).)
+
+With a custom DNS system, it's possible for your NFS storage target's IP address to change because of back-end DNS changes. If your DNS server changes the back-end storage system's IP address, Azure HPC Cache can lose access to the storage system. Ideally, you should work with the manager of your cache's custom DNS system to plan for any updates, because these changes make storage unavailable.
+
+If you use a private endpoint for secure storage access, the endpoint's IP addresses can change if you modify its configuration. If you need to change your private endpoint configuration, you should suspend the storage target (or targets) that use the endpoint, then refresh their IP addresses when you re-activate them. Read [Work with private endpoints](hpc-cache-prerequisites.md#work-with-private-endpoints) for additional information.
+
+If you need to update a storage target's IP address, use the **Storage targets** page. Click the **...** symbol in the right column to open the context menu. Choose **Refresh DNS** to query the custom DNS server or private endpoint for a new IP address.
![Screenshot of storage target list. For one storage target, the "..." menu in the far right column is open and these options appear: Flush, Suspend, Refresh DNS, Force remove, Resume (this option is disabled), and Delete.](media/refresh-dns.png)
If successful, the update should take less than two minutes. You can only refres
The storage target list shows two types of status: **State** and **Provisioning state**.
-* **State** indicates the operational state of the storage target. This value updates regularly and helps you understand whether or not the storage target is available for client requests, and also which of the management options are available.
+* **State** indicates the operational state of the storage target. This value updates regularly and helps you understand whether the storage target is available for client requests, and which of the management options are available.
* **Provisioning state** tells you whether the last action to add or edit the storage target was successful. This value is only updated if you edit the storage target.
-The **State** value affects which management options you can use. Here is a short explanation of the values and their effects.
+The **State** value affects which management options you can use. Here's a short explanation of the values and their effects.
* **Ready** - The storage target is operating normally and available to clients. You can use any of the management options on this storage target (except for **Resume**, which only is valid for suspended storage targets). * **Busy** - The storage target is processing another operation. You can delete or force remove the storage target.
hpc-cache Nfs Blob Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/nfs-blob-considerations.md
Title: Use NFS Blob storage with Azure HPC Cache description: Describes procedures and limitations when using ADLS-NFS blob storage with Azure HPC Cache-+ Last updated 07/12/2021
industrial-iot Tutorial Publisher Configure Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md
We have provided a [sample configuration application](https://github.com/Azure-S
>[!NOTE] > This feature is only available in version 2.6 and above of OPC Publisher.
-A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/master/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
+A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/main/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
## Configuration of the simple JSON telemetry format via Separate Configuration File
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-model-repository.md
The client accepts a `DTMI` as input and returns a dictionary with all required
using Azure.IoT.ModelsRepository; var client = new ModelsRepositoryClient();
-IDictionary<string, string> models = client.GetModels("dtmi:com:example:TemperatureController;1");
-models.Keys.ToList().ForEach(k => Console.WriteLine(k));
+ModelResult models = client.GetModel("dtmi:com:example:TemperatureController;1");
+models.Content.Keys.ToList().ForEach(k => Console.WriteLine(k));
``` The expected output should display the `DTMI` of the three interfaces found in the dependency chain:
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
DPS also supports [Availability Zones](../availability-zones/az-overview.md). An
* Australia East * Brazil South * Canada Central
+* Central US
+* East US
+* East US 2
* Japan East * North Europe
-* West Europe
* UK South
+* West Europe
+* West US 2
## Quotas and Limits
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
IoT Hub supports [Availability Zones](../availability-zones/az-overview.md). An
- North Europe - Southeast Asia - UK South-- West Us 2
+- West US 2
## Cross region DR
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Last updated 11/24/2021
# Configure key auto-rotation in Azure Key Vault (preview)
-> [!WARNING]
-> This feature is currently disabled due to an issue with the service.
## Overview
Key rotation policy can also be configured using ARM templates.
- [Use an Azure RBAC to control access to keys, certificates and secrets](../general/rbac-guide.md) - [Azure Data Encryption At Rest](../../security/fundamentals/encryption-atrest.md) - [Azure Storage Encryption](../../storage/common/storage-service-encryption.md)-- [Azure Disk Encryption](../../virtual-machines/disk-encryption.md)
+- [Azure Disk Encryption](../../virtual-machines/disk-encryption.md)
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
load-balancer Howto Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/howto-load-balancer-imds.md
Title: Retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)
+ Title: Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
-description: Get started learning how to retrieve load balancer metadata using the Azure Instance Metadata Service.
+description: Get started learning how to retrieve load balancer metadata using Azure Instance Metadata Service.
Last updated 02/12/2021
-# Retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)
+# Retrieve load balancer metadata using Azure Instance Metadata Service (IMDS)
## Prerequisites
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/instance-metadata-service-load-balancer.md
Title: Retrieve load balancer information by using the Azure Instance Metadata Service
+ Title: Retrieve load balancer information by using Azure Instance Metadata Service
-description: Get started learning about using the Azure Instance Metadata Service to retrieve load balancer information.
+description: Get started learning about using Azure Instance Metadata Service to retrieve load balancer information.
Last updated 02/12/2021
-# Retrieve load balancer information by using the Azure Instance Metadata Service
+# Retrieve load balancer information by using Azure Instance Metadata Service
IMDS (Azure Instance Metadata Service) provides information about currently running virtual machine instances. The service is a REST API that's available at a well-known, non-routable IP address (169.254.169.254).
-When you place virtual machine or virtual machine set instances behind an Azure Standard Load Balancer, use the IMDS to retrieve metadata related to the load balancer and the instances.
+When you place virtual machine or virtual machine scale set instances behind an Azure Standard Load Balancer, you can use IMDS to retrieve metadata related to the load balancer and the instances.
The metadata includes the following information for the virtual machines or virtual machine scale sets:
The metadata includes the following information for the virtual machines or virt
* Inbound rule configurations of the load balancer of each private IP of the network interface. * Outbound rule configurations of the load balancer of each private IP of the network interface.
-## Access the load balancer metadata using the IMDS
+## Access the load balancer metadata using IMDS
-For more information on how to access the load balancer metadata, see [Use the Azure Instance Metadata Service to access load balancer information](howto-load-balancer-imds.md).
+For more information on how to access the load balancer metadata, see [Use Azure Instance Metadata Service to access load balancer information](howto-load-balancer-imds.md).
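As a quick illustration, the following sketch queries the IMDS load balancer endpoint from inside a VM that sits behind a Standard Load Balancer. The `api-version` value shown here is an assumption; use the version given in the how-to article linked above.

```python
import json
import urllib.request

# IMDS is reachable only from inside the VM, at the non-routable address 169.254.169.254.
# The "Metadata: true" header is required; the api-version here is an assumption.
url = "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01"
request = urllib.request.Request(url, headers={"Metadata": "true"})

with urllib.request.urlopen(request, timeout=5) as response:
    metadata = json.load(response)

# Print the load balancer metadata (frontend IPs, inbound and outbound rule configurations).
print(json.dumps(metadata, indent=2))
```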
## Troubleshoot common error codes
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-availability-zones.md
# Load Balancer and Availability Zones
-Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distribution across zones. Review this document to understand these concepts and fundamental scenario design guidance
+Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distribution across zones. Review this document to understand these concepts and fundamental scenario design guidance.
A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. To configure the zone related properties (mentioned above) for your load balancer, select the appropriate type of frontend needed.
You can choose to have a frontend guaranteed to a single zone, which is known as
Additionally, the use of zonal frontends directly for load balanced endpoints within each zone is supported. You can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For public endpoints, you can integrate them with a DNS load-balancing product like [Traffic Manager](../traffic-manager/traffic-manager-overview.md) and use a single DNS name. - <p align="center"> <img src="./media/az-zonal/zonal-lb-1.svg" alt="Figure depicts three zonal standard load balancers each directing traffic in a zone to three different subnets in a zonal configuration." width="512" title="Virtual Network NAT"> </p>
Now that you understand the zone related properties for Standard Load Balancer,
### Tolerance to zone failure -- A **zone redundant** Load Balancer can serve a zonal resource in any zone with one IP address. The IP can survive one or more zone failures as long as at least one zone remains healthy within the region.
+- A **zone redundant** frontend can serve a zonal resource in any zone with a single IP address. The IP can survive one or more zone failures as long as at least one zone remains healthy within the region.
- A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the zone your deployment is in goes down, your deployment will not survive this failure.
-It is recommended you use zone-redundant Load Balancer for your production workloads.
+Members in the backend pool of a load balancer are normally associated with a single zone (for example, zonal virtual machines). A common design for production workloads would be to have multiple zonal resources (for example, virtual machines from zones 1, 2, and 3) in the backend of a load balancer with a zone-redundant frontend.
### Multiple frontends
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022 ms.suite: integration
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data.md
Supported cloud-based storage services in Azure that can be registered as datast
+ Azure Database for MySQL >[!TIP]
-> The generally available functionality for creating datastores requires credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace. <br><br>If this is a concern, [create a datastore that uses identity-based data access to storage services](how-to-identity-based-data-access.md).
+> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace. <br><br>If this is a concern, [create a datastore that uses identity-based data access](how-to-identity-based-data-access.md) to connect to storage services.
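For example, here's a minimal sketch of registering a credential-based Blob datastore with the Azure Machine Learning Python SDK; the account, container, and SAS token values are placeholders.

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

# Register an Azure Blob container as a datastore using a SAS token (credential-based access).
# The names and token below are placeholders.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="workspace_blob_store",
    container_name="my-container",
    account_name="mystorageaccount",
    sas_token="?sv=2020-08-04&ss=b&srt=co&sp=rl&sig=<signature>",
)

print(blob_datastore.name, blob_datastore.datastore_type)
```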
<a name="datasets"></a> ## Reference data in storage with datasets
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-image-models.md
-+ - Previously updated : 10/06/2021 Last updated : 01/18/2022 # Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
image classification multi-label | `ImageTask.IMAGE_CLASSIFICATION_MULTILABEL`
image object detection | `ImageTask.IMAGE_OBJECT_DETECTION` image instance segmentation| `ImageTask.IMAGE_INSTANCE_SEGMENTATION`
-This task type is a required parameter and is passed in using the `task` parameter in the `AutoMLImageConfig`.
+This task type is a required parameter and is passed in using the `task` parameter in the [`AutoMLImageConfig`](/python/api/azureml-train-automl-client/azureml.train.automl.automlimageconfig.automlimageconfig).
+ For example: ```python
training_dataset = training_dataset.register(workspace=ws, name=training_dataset
Automated ML does not impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There is no minimum number of images or labels. However, we recommend to start with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label. -- Training data is a required and is passed in using the `training_data` parameter. You can optionally specify another TabularDataset as a validation dataset to be used for your model with the `validation_data` parameter of the AutoMLImageConfig. If no validation dataset is specified, 20% of your training data will be used for validation by default, unless you pass `split_ratio` argument with a different value. For example:
automl_image_config = AutoMLImageConfig(compute_target=compute_target)
With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep.
-The model algorithm is required and is passed in via `model_name` parameter. You can either specify a single `model_name` or choose between multiple. In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific.
+The model algorithm is required and is passed in via `model_name` parameter. You can either specify a single `model_name` or choose between multiple.
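For instance, here's a sketch that sweeps over a single model algorithm (`yolov5`) for object detection; it assumes `compute_target`, `training_dataset`, and `validation_dataset` are already defined as in the earlier snippets.

```python
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import GridParameterSampling, choice

# Sweep over a single model algorithm (yolov5) for an object detection task.
# compute_target, training_dataset, and validation_dataset are assumed to exist already.
automl_image_config = AutoMLImageConfig(
    task=ImageTask.IMAGE_OBJECT_DETECTION,
    compute_target=compute_target,
    training_data=training_dataset,
    validation_data=validation_dataset,
    hyperparameter_sampling=GridParameterSampling({"model_name": choice("yolov5")}),
    iterations=1,
)
```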
### Supported model algorithms
-The following table summarizes the supported models for each computer vision task.
+The following table summarizes the supported models for each computer vision task.
Task | Model algorithms | String literal syntax<br> ***`default_model`\**** denoted with \* |-|- Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-weighted models for mobile applications <br> **ResNet**: Residual networks<br> **ResNeSt**: Split attention networks<br> **SE-ResNeXt50**: Squeeze-and-Excitation networks<br> **ViT**: Vision transformer networks| `mobilenetv2` <br>`resnet18` <br>`resnet34` <br> `resnet50` <br> `resnet101` <br> `resnet152` <br> `resnest50` <br> `resnest101` <br> `seresnext` <br> `vits16r224` (small) <br> ***`vitb16r224`\**** (base) <br>`vitl16r224` (large)|
-Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
+Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn` <br>`maskrcnn_resnet50_fpn`
-### Model agnostic hyperparameters
-
-The following table describes the hyperparameters that are model agnostic.
-
-| Parameter name | Description | Default|
-| | - | |
-| `number_of_epochs` | Number of training epochs. <br>Must be a positive integer. | 15 <br> (except `yolov5`: 30) |
-| `training_batch_size` | Training batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 2 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 2 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
-| `validation_batch_size` | Validation batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 1 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 1 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
-| `grad_accumulation_step` | Gradient accumulation means running a configured number of `grad_accumulation_step` without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. <br> Must be a positive integer. | 1 |
-| `early_stopping` | Enable early stopping logic during training. <br> Must be 0 or 1.| 1 |
-| `early_stopping_patience` | Minimum number of epochs or validation evaluations with<br>no primary metric improvement before the run is stopped.<br> Must be a positive integer. | 5 |
-| `early_stopping_delay` | Minimum number of epochs or validation evaluations to wait<br>before primary metric improvement is tracked for early stopping.<br> Must be a positive integer. | 5 |
-| `learning_rate` | Initial learning rate. <br>Must be a float in the range [0, 1]. | Multi-class: 0.01 <br>(except *vit-variants*: <br> `vits16r224`: 0.0125<br>`vitb16r224`: 0.0125<br>`vitl16r224`: 0.001) <br><br> Multi-label: 0.035 <br>(except *vit-variants*:<br>`vits16r224`: 0.025<br>`vitb16r224`: 0.025 <br>`vitl16r224`: 0.002) <br><br> Object detection: 0.005 <br>(except `yolov5`: 0.01) <br><br> Instance segmentation: 0.005 |
-| `lr_scheduler` | Type of learning rate scheduler. <br> Must be `warmup_cosine` or `step`. | `warmup_cosine` |
-| `step_lr_gamma` | Value of gamma when learning rate scheduler is `step`.<br> Must be a float in the range [0, 1]. | 0.5 |
-| `step_lr_step_size` | Value of step size when learning rate scheduler is `step`.<br> Must be a positive integer. | 5 |
-| `warmup_cosine_lr_cycles` | Value of cosine cycle when learning rate scheduler is `warmup_cosine`. <br> Must be a float in the range [0, 1]. | 0.45 |
-| `warmup_cosine_lr_warmup_epochs` | Value of warmup epochs when learning rate scheduler is `warmup_cosine`. <br> Must be a positive integer. | 2 |
-| `optimizer` | Type of optimizer. <br> Must be either `sgd`, `adam`, `adamw`. | `sgd` |
-| `momentum` | Value of momentum when optimizer is `sgd`. <br> Must be a float in the range [0, 1]. | 0.9 |
-| `weight_decay` | Value of weight decay when optimizer is `sgd`, `adam`, or `adamw`. <br> Must be a float in the range [0, 1]. | 1e-4 |
-|`nesterov`| Enable `nesterov` when optimizer is `sgd`. <br> Must be 0 or 1.| 1 |
-|`beta1` | Value of `beta1` when optimizer is `adam` or `adamw`. <br> Must be a float in the range [0, 1]. | 0.9 |
-|`beta2` | Value of `beta2` when optimizer is `adam` or `adamw`.<br> Must be a float in the range [0, 1]. | 0.999 |
-|`amsgrad` | Enable `amsgrad` when optimizer is `adam` or `adamw`.<br> Must be 0 or 1. | 0 |
-|`evaluation_frequency`| Frequency to evaluate validation dataset to get metric scores. <br> Must be a positive integer. | 1 |
-|`split_ratio`| If validation data is not defined, this specifies the split ratio for splitting train data into random train and validation subsets. <br> Must be a float in the range [0, 1].| 0.2 |
-|`checkpoint_frequency`| Frequency to store model checkpoints. <br> Must be a positive integer. | Checkpoint at epoch with best primary metric on validation.|
-|`layers_to_freeze`| How many layers to freeze for your model. For instance, passing 2 as value for `seresnext` means freezing layer0 and layer1 referring to the below supported model layer info. <br> Must be a positive integer. <br><br>`'resnet': [('conv1.', 'bn1.'), 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'mobilenetv2': ['features.0.', 'features.1.', 'features.2.', 'features.3.', 'features.4.', 'features.5.', 'features.6.', 'features.7.', 'features.8.', 'features.9.', 'features.10.', 'features.11.', 'features.12.', 'features.13.', 'features.14.', 'features.15.', 'features.16.', 'features.17.', 'features.18.'],`<br>`'seresnext': ['layer0.', 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'vit': ['patch_embed', 'blocks.0.', 'blocks.1.', 'blocks.2.', 'blocks.3.', 'blocks.4.', 'blocks.5.', 'blocks.6.','blocks.7.', 'blocks.8.', 'blocks.9.', 'blocks.10.', 'blocks.11.'],`<br>`'yolov5_backbone': ['model.0.', 'model.1.', 'model.2.', 'model.3.', 'model.4.','model.5.', 'model.6.', 'model.7.', 'model.8.', 'model.9.'],`<br>`'resnet_backbone': ['backbone.body.conv1.', 'backbone.body.layer1.', 'backbone.body.layer2.','backbone.body.layer3.', 'backbone.body.layer4.']` | no default  |
--
-### Task-specific hyperparameters
-
-The following table summarizes hyperparmeters for image classification (multi-class and multi-label) tasks.
--
-| Parameter name | Description | Default |
-| - |-|--|
-| `weighted_loss` | 0 for no weighted loss.<br>1 for weighted loss with sqrt.(class_weights) <br> 2 for weighted loss with class_weights. <br> Must be 0 or 1 or 2. | 0 |
-| `valid_resize_size` | Image size to which to resize before cropping for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  |
-| `valid_crop_size` | Image crop size that's input to your neural network for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
-| `train_crop_size` | Image crop size that's input to your neural network for train dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
--
-The following hyperparameters are for object detection and instance segmentation tasks.
-
-> [!Warning]
-> These parameters are not supported with the `yolov5` algorithm.
-
-| Parameter name | Description | Default |
-| - |-|--|
-| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
-| `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 |
-| `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 |
-| `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
-| `box_nms_thresh` | Non-maximum suppression (NMS) threshold for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
-| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
-| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
-| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
-| `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
-
-### Model-specific hyperparameters
-
-This table summarizes hyperparameters specific to the `yolov5` algorithm.
-
-| Parameter name | Description | Default |
-| - |-|-|
-| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
-| `img_size` | Image size for train and validation. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 640 |
-| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` |
-| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 |
-| `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
-| `box_iou_thresh` | IoU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
-
+In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
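For example, a minimal sketch of fixing the model algorithm and a couple of hyperparameters through the `arguments` list (the import path, the chosen values, and the `compute_target`/dataset variables are assumptions based on the snippets shown later in this article):

```python
from azureml.train.automl import AutoMLImageConfig

# Fix the model algorithm and two model-agnostic hyperparameters by passing
# them as command-line style arguments; the values are illustrative only.
arguments = ["--model_name", "fasterrcnn_resnet50_fpn",
             "--number_of_epochs", 20,
             "--learning_rate", 0.005]

automl_image_config = AutoMLImageConfig(task='image-object-detection',
                                        compute_target=compute_target,
                                        training_data=training_dataset,
                                        validation_data=validation_dataset,
                                        primary_metric='mean_average_precision',
                                        arguments=arguments)
```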
### Data augmentation
The primary metric used for model optimization and hyperparameter tuning depends
You can optionally specify the maximum time budget for your AutoML Vision experiment using `experiment_timeout_hours` - the amount of time in hours before the experiment terminates. If none is specified, the default experiment timeout is seven days (maximum 60 days). + ## Sweeping hyperparameters for your model When training computer vision models, model performance depends heavily on the hyperparameter values selected. Often, you might want to tune the hyperparameters to get optimal performance.
With support for computer vision tasks in automated ML, you can sweep hyperparam
### Define the parameter search space
-You can define the model algorithms and hyperparameters to sweep in the parameter space. See [Configure model algorithms and hyperparameters](#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms and hyperparameters for each task type. See [details on supported distributions for discrete and continuous hyperparameters](how-to-tune-hyperparameters.md#define-the-search-space).
+You can define the model algorithms and hyperparameters to sweep in the parameter space; a minimal sketch follows this list.
+* See [Configure model algorithms and hyperparameters](#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms for each task type.
+* See [Hyperparameters for computer vision tasks](reference-automl-images-hyperparameters.md) for the hyperparameters available for each computer vision task type.
+* See [details on supported distributions for discrete and continuous hyperparameters](how-to-tune-hyperparameters.md#define-the-search-space).
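As an illustration, a minimal sketch of a parameter space, assuming the `choice` and `uniform` expressions from `azureml.train.hyperdrive`; the algorithm names come from the table above and the numeric ranges are placeholders:

```python
from azureml.train.hyperdrive import choice, uniform

# Sweep over two model algorithms and two model-agnostic hyperparameters.
parameter_space = {
    'model_name': choice('yolov5', 'fasterrcnn_resnet50_fpn'),
    'learning_rate': uniform(0.0001, 0.01),
    'optimizer': choice('sgd', 'adam')
}
```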
### Sampling methods for the sweep
When sweeping hyperparameters, you need to specify the sampling method to use fo
* [Grid sampling](how-to-tune-hyperparameters.md#grid-sampling) * [Bayesian sampling](how-to-tune-hyperparameters.md#bayesian-sampling)
-It should be noted that currently only random sampling supports conditional hyperparameter spaces
-
+> [!NOTE]
+> Currently only random sampling supports conditional hyperparameter spaces.
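As a sketch, you can wrap a parameter space like the one above in a sampling object from `azureml.train.hyperdrive` (shown here with random sampling; how the sampling object is passed into your AutoML image configuration is not shown and is an assumption):

```python
from azureml.train.hyperdrive import RandomParameterSampling

# Random sampling over the parameter space defined earlier; it is the only
# sampling method that currently supports conditional hyperparameter spaces.
hyperparameter_sampling = RandomParameterSampling(parameter_space)
```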
### Early termination policies
arguments = ["--early_stopping", 1, "--evaluation_frequency", 2]
automl_image_config = AutoMLImageConfig(arguments=arguments) ```
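Separately from the per-run `early_stopping` setting shown above, a sweep can end poorly performing runs with a HyperDrive early termination policy. A minimal sketch using `BanditPolicy` (the values are placeholders, and wiring the policy into your sweep configuration is not shown):

```python
from azureml.train.hyperdrive import BanditPolicy

# Stop runs whose primary metric falls outside a 20% slack of the best run,
# checking every 2 evaluations after an initial delay of 6 evaluations.
early_termination_policy = BanditPolicy(evaluation_interval=2,
                                        slack_factor=0.2,
                                        delay_evaluation=6)
```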
+## Incremental training (optional)
+
+Once the training run is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.
+
+There are two available options for incremental training. You can:
+
+* Pass the run ID that you want to load the checkpoint from.
+* Pass the checkpoints through a FileDataset.
+
+### Pass the checkpoint via run ID
+To find the run ID from the desired model, you can use the following code.
+
+```python
+# find a run id to get a model checkpoint from
+target_checkpoint_run = automl_image_run.get_best_child()
+```
+
+To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter.
+
+```python
+automl_image_config = AutoMLImageConfig(task='image-object-detection',
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ checkpoint_run_id= target_checkpoint_run.id,
+ primary_metric='mean_average_precision',
+ **tuning_settings)
+
+automl_image_run = experiment.submit(automl_image_config)
+automl_image_run.wait_for_completion(wait_post_processing=True)
+```
+
+### Pass the checkpoint via FileDataset
+To pass a checkpoint via a FileDataset, you need to use the `checkpoint_dataset_id` and `checkpoint_filename` parameters.
+
+```python
+# download the checkpoint from the previous run
+model_name = "outputs/model.pt"
+model_local = "checkpoints/model_yolo.pt"
+target_checkpoint_run.download_file(name=model_name, output_file_path=model_local)
+
+# upload the checkpoint to the blob store
+ds.upload(src_dir="checkpoints", target_path='checkpoints')
+
+# create a FileDataset for the checkpoint and register it with your workspace
+ds_path = ds.path('checkpoints/model_yolo.pt')
+checkpoint_yolo = Dataset.File.from_files(path=ds_path)
+checkpoint_yolo = checkpoint_yolo.register(workspace=ws, name='yolo_checkpoint')
+
+automl_image_config = AutoMLImageConfig(task='image-object-detection',
+ compute_target=compute_target,
+ training_data=training_dataset,
+ validation_data=validation_dataset,
+ checkpoint_dataset_id= checkpoint_yolo.id,
+ checkpoint_filename='model_yolo.pt',
+ primary_metric='mean_average_precision',
+ **tuning_settings)
+
+automl_image_run = experiment.submit(automl_image_config)
+automl_image_run.wait_for_completion(wait_post_processing=True)
+
+```
+ ## Submit the run When you have your `AutoMLImageConfig` object ready, you can submit the experiment.
ws = Workspace.from_config()
experiment = Experiment(ws, "Tutorial-automl-image-object-detection") automl_image_run = experiment.submit(automl_image_config) ```+ ## Outputs and evaluation metrics
-The automl training runs generates output model files, evaluation metrics, logs and deployment artifacts like the scoring file and the environment file which can be viewed from the outputs and logs and metrics tab of the child runs.
+The automated ML training runs generate output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file. These can be viewed from the outputs, logs, and metrics tabs of the child runs.
> [!TIP] > Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-run-results) section.
Each of the tasks (and some models) have a set of parameters in the `model_setti
|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`box_nms_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 | |Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`box_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
-For a detailed description on these parameters, please refer to the above section on [task specific hyperparameters](#task-specific-hyperparameters).
+For a detailed description of task-specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](reference-automl-images-hyperparameters.md).
If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](how-to-use-automl-small-object-detect.md).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
Previously updated : 01/10/2022 Last updated : 01/12/2022 # Configure a private endpoint for an Azure Machine Learning workspace
Azure Private Link enables you to connect to your workspace using a private endp
* Using a private endpoint does not affect Azure control plane (management operations) such as deleting the workspace or managing compute resources. For example, creating, updating, or deleting a compute target. These operations are performed over the public Internet as normal. Data plane operations, such as using Azure Machine Learning studio, APIs (including published pipelines), or the SDK use the private endpoint. * When creating a compute instance or compute cluster in a workspace with a private endpoint, the compute instance and compute cluster must be in the same Azure region as the workspace.
-* When using a workspace with multiple private endpoints (preview), one of the private endpoints must be in the same VNet as the following dependency
+* When using a workspace with multiple private endpoints, one of the private endpoints must be in the same VNet as the following dependencies:
* Azure Storage Account that provides the default storage for the workspace * Azure Key Vault for the workspace
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learn
[!INCLUDE [machine-learning-connect-secure-workspace](../../includes/machine-learning-connect-secure-workspace.md)]
-## Multiple private endpoints (preview)
+## Multiple private endpoints
-As a preview feature, Azure Machine Learning supports multiple private endpoints for a workspace. Multiple private endpoints are often used when you want to keep different environments separate. The following are some scenarios that are enabled by using multiple private endpoints:
+Azure Machine Learning supports multiple private endpoints for a workspace. Multiple private endpoints are often used when you want to keep different environments separate. The following are some scenarios that are enabled by using multiple private endpoints:
* Client development environments in a separate VNet. * An Azure Kubernetes Service (AKS) cluster in a separate VNet.
As a preview feature, Azure Machine Learning supports multiple private endpoints
> [!IMPORTANT] > Each VNet that contains a private endpoint for the workspace must also be able to access the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by the workspace. For example, you might create a private endpoint for the services in each VNet.
-Adding multiple endpoints uses the same steps as described in the [Add a private endpoint to a workspace](#add-a-private-endpoint-to-a-workspace) section.
+Adding multiple private endpoints uses the same steps as described in the [Add a private endpoint to a workspace](#add-a-private-endpoint-to-a-workspace) section.
### Scenario: Isolated clients
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
Previously updated : 10/21/2021 Last updated : 01/18/2022 # Customer intent: As a low-code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
For a code first experience, see the following articles to use the [Azure Machin
You can create datastores from [these Azure storage solutions](how-to-access-data.md#matrix). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move) to a supported Azure storage solution. [Learn more about datastores](how-to-access-data.md).
+You can create datastores with credential-based access or identity-based access.
+
+# [Credential-based](#tab/credential)
+ Create a new datastore in a few steps with the Azure Machine Learning studio. > [!IMPORTANT]
The following example demonstrates what the form looks like when you create an *
![Form for a new datastore](media/how-to-connect-data-ui/new-datastore-form.png)
+# [Identity-based](#tab/identity)
+
+Create a new datastore in a few steps with the Azure Machine Learning studio. Learn more about [identity-based data access](how-to-identity-based-data-access.md).
+
+> [!IMPORTANT]
+> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+1. Select **Datastores** on the left pane under **Manage**.
+1. Select **+ New datastore**.
+1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type. See [which storage types support identity-based](how-to-identity-based-data-access.md#storage-access-permissions) data access.
+1. For **Save credentials with the datastore for data access**, select **No** so that no credentials are saved.
+
+The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
+
+![Form for a new datastore](media/how-to-connect-data-ui/new-id-based-datastore-form.png)
+++ ## Create datasets After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](how-to-create-register-datasets.md). There are two types of datasets, FileDataset and TabularDataset.
-[FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas,
-[TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
+[FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs, whereas [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
The following steps and animation show how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
To create a dataset in the studio:
1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they are intelligently populated based on file type and you can further configure your dataset prior to creation on these forms. 1. On the Settings and preview form, you can indicate if your data contains multi-line data. 1. On the Schema form, you can specify that your TabularDataset has a time component by selecting type: **Timestamp** for your date or time column.
- 1. If your data is formatted into subsets, for example time windows, and you want to use those subsets for training, select type **Partition timestamp**. Doing so enables timeseries operations on your dataset. Learn more about how to [leverage partitions in your dataset for training](how-to-monitor-datasets.md?tabs=azure-studio#create-target-dataset).
+ 1. If your data is formatted into subsets, for example time windows, and you want to use those subsets for training, select type **Partition timestamp**. Doing so enables time series operations on your dataset. Learn more about how to [leverage partitions in your dataset for training](how-to-monitor-datasets.md?tabs=azure-studio#create-target-dataset).
1. Select **Next** to review the **Confirm details** form. Check your selections and create an optional data profile for your dataset. Learn more about [data profiling](#profile). 1. Select **Create** to complete your dataset creation.
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-identity-based-data-access.md
Previously updated : 10/21/2021 Last updated : 01/18/2022
-# Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute to train my machine learning models.
+# Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
# Connect to storage by using identity-based data access In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-Typically, datastores use credential-based data access to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses identity-based data access, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In this scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+
+To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-datastores).
-To create datastores that use credential-based authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
+To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
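For reference, a minimal sketch of registering an identity-based blob datastore with the Python SDK; the datastore, container, and storage account names are placeholders, and omitting the account key or SAS token is what makes the datastore use identity-based access:

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

# No account_key or sas_token is supplied, so only the storage account
# information is stored and data access uses your Azure AD identity.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",
    container_name="my-container",
    account_name="mystorageaccount")
```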
## Identity-based data access in Azure Machine Learning There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
-> [!IMPORTANT]
+
+> [!WARNING]
> Identity-based data access is not supported for [automated ML experiments](how-to-configure-auto-train.md). - Accessing storage services
The same behavior applies when you:
### Model training on private data
-Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a managed identity of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
-
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#managed-identity).
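As a sketch, you might provision such a cluster with a system-assigned managed identity as follows; the cluster name and VM size are placeholders:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# Create a compute cluster with a system-assigned managed identity. A storage
# admin can then grant that identity the Storage Blob Data Reader role.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    max_nodes=4,
    identity_type="SystemAssigned")

cluster = ComputeTarget.create(ws, "id-based-cluster", compute_config)
cluster.wait_for_completion(show_output=True)
```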
## Prerequisites
Certain machine learning scenarios involve training models with private data. In
To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
-Identity-based data access supports connections to only the following storage
+Identity-based data access supports connections to **only** the following storage services.
* Azure Blob Storage * Azure Data Lake Storage Gen1
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
The following table compares how services access different parts of an Azure Mac
* **Associated resource** - Use service endpoints or private endpoints to connect to workspace resources like Azure storage, Azure Key Vault. For Azure Container Services, use a private endpoint. * **Service endpoints** provide the identity of your virtual network to the Azure service. Once you enable service endpoints in your virtual network, you can add a virtual network rule to secure the Azure service resources to your virtual network. Service endpoints use public IP addresses. * **Private endpoints** are network interfaces that securely connect you to a service powered by Azure Private Link. Private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet.
-* **Training compute access** - Access training compute targets like Azure Machine Learning Compute Instance and Azure Machine Learning Compute Clusters with public IP addresses (preview).
+* **Training compute access** - Access training compute targets like Azure Machine Learning Compute Instance and Azure Machine Learning Compute Clusters with public or private IP addresses.
* **Inference compute access** - Access Azure Kubernetes Services (AKS) compute clusters with private IP addresses.
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
Please refer to the metrics definitions from the [classification metrics](#class
![Classification report for image classification](./media/how-to-understand-automated-ml/image-classification-report.png)
-### Object detection and Instance segmentation metrics
+### Object detection and instance segmentation metrics
Every prediction from an image object detection or instance segmentation model is associated with a confidence score.
-The predictions with confidence score greater than score threshold are output as predictions and used in the metric calculation, the default value of which is model specific and can be referred from the [hyperparameter tuning](how-to-auto-train-image-models.md#model-specific-hyperparameters) page(`box_score_threshold` hyperparameter).
+The predictions with a confidence score greater than the score threshold are output as predictions and used in the metric calculation. The default value of this threshold is model specific and can be found on the [hyperparameter tuning](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) page (`box_score_thresh` hyperparameter).
The metric computation of an image object detection and instance segmentation model is based on an overlap measurement defined by a metric called **IoU** ([Intersection over Union](https://en.wikipedia.org/wiki/Jaccard_index)) which is computed by dividing the area of overlap between the ground-truth and the predictions by the area of union of the ground-truth and the predictions. The IoU computed from every prediction is compared with an **overlap threshold** called an IoU threshold which determines how much a prediction should overlap with a user-annotated ground-truth in order to be considered as a positive prediction. If the IoU computed from the prediction is less than the overlap threshold the prediction would not be considered as a positive prediction for the associated class.
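To make the overlap measurement concrete, a minimal sketch of IoU for two axis-aligned boxes (illustrative only, not the evaluation code used by automated ML):

```python
# Boxes are (x_min, y_min, x_max, y_max).
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.14
```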
The primary metric for the evaluation of image object detection and instance seg
[COCO evaluation method](https://cocodataset.org/#detection-eval) uses a 101-point interpolated method for AP calculation along with averaging over ten IoU thresholds. AP@[.5:.95] corresponds to the average AP for IoU from 0.5 to 0.95 with a step size of 0.05. Automated ML logs all the twelve metrics defined by the COCO method including the AP and AR(average recall) at various scales in the application logs while the metrics user interface shows only the mAP at an IoU threshold of 0.5. > [!TIP]
-> The image object detection model evaluation can use coco metrics if the `validation_metric_type` hyperparameter is set to be 'coco' as explained in the [hyperparameter tuning](how-to-auto-train-image-models.md#task-specific-hyperparameters) section.
+> The image object detection model evaluation can use COCO metrics if the `validation_metric_type` hyperparameter is set to 'coco' as explained in the [hyperparameter tuning](reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters) section.
#### Epoch-level metrics for object detection and instance segmentation The mAP, precision and recall values are logged at an epoch-level for image object detection/instance segmentation models. The mAP, precision and recall metrics are also logged at a class level with the name 'per_label_metrics'. The 'per_label_metrics' should be viewed as a table.
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-cli-runs.md
runs = mlflow.search_runs(experiment_ids=all_experiments, filter_string=query, r
runs.head(10) ```
+## Automatic logging
+With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
+
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
+
+```python
+mlflow.autolog()
+```
+
+[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
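For example, a minimal sketch of autologging a scikit-learn model from a local script; the experiment name and training data are placeholders, and it assumes the `azureml-mlflow` and `scikit-learn` packages are installed:

```python
import mlflow
from azureml.core import Workspace
from sklearn.linear_model import LinearRegression

# Point MLflow at the Azure Machine Learning workspace, then enable autologging.
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("autolog-example")
mlflow.autolog()

with mlflow.start_run():
    # Parameters, metrics, and the fitted model are logged automatically.
    model = LinearRegression().fit([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])
```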
+ ## Manage models Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-automl-images-hyperparameters.md
+
+ Title: Hyperparameters for AutoML computer vision tasks
+
+description: Learn which hyperparameters are available for computer vision tasks with automated ML.
+++++++ Last updated : 01/18/2022+++
+# Hyperparameters for computer vision tasks in automated machine learning
+
+Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
+
+With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific.
+
+## Model-specific hyperparameters
+
+This table summarizes hyperparameters specific to the `yolov5` algorithm.
+
+| Parameter name | Description | Default |
+| - |-|-|
+| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
+| `img_size` | Image size for train and validation. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 640 |
+| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` |
+| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 |
+| `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
+| `box_iou_thresh` | IoU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
++
+## Model agnostic hyperparameters
+
+The following table describes the hyperparameters that are model agnostic.
+
+| Parameter name | Description | Default|
+| | - | |
+| `number_of_epochs` | Number of training epochs. <br>Must be a positive integer. | 15 <br> (except `yolov5`: 30) |
+| `training_batch_size` | Training batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 2 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 2 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
+| `validation_batch_size` | Validation batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 1 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 1 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
+| `grad_accumulation_step` | Gradient accumulation means running a configured number of `grad_accumulation_step` without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. <br> Must be a positive integer. | 1 |
+| `early_stopping` | Enable early stopping logic during training. <br> Must be 0 or 1.| 1 |
+| `early_stopping_patience` | Minimum number of epochs or validation evaluations with<br>no primary metric improvement before the run is stopped.<br> Must be a positive integer. | 5 |
+| `early_stopping_delay` | Minimum number of epochs or validation evaluations to wait<br>before primary metric improvement is tracked for early stopping.<br> Must be a positive integer. | 5 |
+| `learning_rate` | Initial learning rate. <br>Must be a float in the range [0, 1]. | Multi-class: 0.01 <br>(except *vit-variants*: <br> `vits16r224`: 0.0125<br>`vitb16r224`: 0.0125<br>`vitl16r224`: 0.001) <br><br> Multi-label: 0.035 <br>(except *vit-variants*:<br>`vits16r224`: 0.025<br>`vitb16r224`: 0.025 <br>`vitl16r224`: 0.002) <br><br> Object detection: 0.005 <br>(except `yolov5`: 0.01) <br><br> Instance segmentation: 0.005 |
+| `lr_scheduler` | Type of learning rate scheduler. <br> Must be `warmup_cosine` or `step`. | `warmup_cosine` |
+| `step_lr_gamma` | Value of gamma when learning rate scheduler is `step`.<br> Must be a float in the range [0, 1]. | 0.5 |
+| `step_lr_step_size` | Value of step size when learning rate scheduler is `step`.<br> Must be a positive integer. | 5 |
+| `warmup_cosine_lr_cycles` | Value of cosine cycle when learning rate scheduler is `warmup_cosine`. <br> Must be a float in the range [0, 1]. | 0.45 |
+| `warmup_cosine_lr_warmup_epochs` | Value of warmup epochs when learning rate scheduler is `warmup_cosine`. <br> Must be a positive integer. | 2 |
+| `optimizer` | Type of optimizer. <br> Must be either `sgd`, `adam`, `adamw`. | `sgd` |
+| `momentum` | Value of momentum when optimizer is `sgd`. <br> Must be a float in the range [0, 1]. | 0.9 |
+| `weight_decay` | Value of weight decay when optimizer is `sgd`, `adam`, or `adamw`. <br> Must be a float in the range [0, 1]. | 1e-4 |
+|`nesterov`| Enable `nesterov` when optimizer is `sgd`. <br> Must be 0 or 1.| 1 |
+|`beta1` | Value of `beta1` when optimizer is `adam` or `adamw`. <br> Must be a float in the range [0, 1]. | 0.9 |
+|`beta2` | Value of `beta2` when optimizer is `adam` or `adamw`.<br> Must be a float in the range [0, 1]. | 0.999 |
+|`amsgrad` | Enable `amsgrad` when optimizer is `adam` or `adamw`.<br> Must be 0 or 1. | 0 |
+|`evaluation_frequency`| Frequency to evaluate validation dataset to get metric scores. <br> Must be a positive integer. | 1 |
+|`split_ratio`| If validation data is not defined, this specifies the split ratio for splitting train data into random train and validation subsets. <br> Must be a float in the range [0, 1].| 0.2 |
+|`checkpoint_frequency`| Frequency to store model checkpoints. <br> Must be a positive integer. | Checkpoint at epoch with best primary metric on validation.|
+|`checkpoint_run_id`| The run id of the experiment that has a pretrained checkpoint for incremental training.| no default |
+|`checkpoint_dataset_id`| FileDataset id containing pretrained checkpoint(s) for incremental training. Make sure to pass `checkpoint_filename` along with `checkpoint_dataset_id`.| no default |
+|`checkpoint_filename`| The pretrained checkpoint filename in FileDataset for incremental training. Make sure to pass `checkpoint_dataset_id` along with `checkpoint_filename`.| no default |
+|`layers_to_freeze`| How many layers to freeze for your model. For instance, passing 2 as value for `seresnext` means freezing layer0 and layer1 referring to the below supported model layer info. <br> Must be a positive integer. <br><br>`'resnet': [('conv1.', 'bn1.'), 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'mobilenetv2': ['features.0.', 'features.1.', 'features.2.', 'features.3.', 'features.4.', 'features.5.', 'features.6.', 'features.7.', 'features.8.', 'features.9.', 'features.10.', 'features.11.', 'features.12.', 'features.13.', 'features.14.', 'features.15.', 'features.16.', 'features.17.', 'features.18.'],`<br>`'seresnext': ['layer0.', 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'vit': ['patch_embed', 'blocks.0.', 'blocks.1.', 'blocks.2.', 'blocks.3.', 'blocks.4.', 'blocks.5.', 'blocks.6.','blocks.7.', 'blocks.8.', 'blocks.9.', 'blocks.10.', 'blocks.11.'],`<br>`'yolov5_backbone': ['model.0.', 'model.1.', 'model.2.', 'model.3.', 'model.4.','model.5.', 'model.6.', 'model.7.', 'model.8.', 'model.9.'],`<br>`'resnet_backbone': ['backbone.body.conv1.', 'backbone.body.layer1.', 'backbone.body.layer2.','backbone.body.layer3.', 'backbone.body.layer4.']` | no default |
+
+## Image classification (multi-class and multi-label) specific hyperparameters
+
+The following table summarizes hyperparameters for image classification (multi-class and multi-label) tasks.
+
+| Parameter name | Description | Default |
+| - |-|--|
+| `weighted_loss` | 0 for no weighted loss.<br>1 for weighted loss with sqrt.(class_weights) <br> 2 for weighted loss with class_weights. <br> Must be 0 or 1 or 2. | 0 |
+| `valid_resize_size` | Image size to which to resize before cropping for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  |
+| `valid_crop_size` | Image crop size that's input to your neural network for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `train_crop_size` | Image crop size that's input to your neural network for train dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+
+## Object detection and instance segmentation task specific hyperparameters
+
+The following hyperparameters are for object detection and instance segmentation tasks.
+
+> [!WARNING]
+> These parameters are not supported with the `yolov5` algorithm.
+
+| Parameter name | Description | Default |
+| - |-|--|
+| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
+| `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 |
+| `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 |
+| `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
+| `box_nms_thresh` | Non-maximum suppression (NMS) threshold for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
+| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
+| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
+| `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
+
+## Next steps
+
+* Learn how to [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
+
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-auto-train-image-models.md
In this automated machine learning tutorial, you did the following tasks:
* [Learn more about computer vision in automated ML (preview)](concept-automated-ml.md#computer-vision-preview). * [Learn how to set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
+* [Learn how to configure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional).
+* See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md).
* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models. > [!NOTE]
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
marketplace Analytics Sample Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-sample-queries.md
Previously updated : 12/06/2021 Last updated : 1/25/2022 # Sample queries for programmatic analytics
For details about the column names, attributes, and descriptions, refer to the f
- [Customer details table](customer-dashboard.md#customer-details-table) - [Orders details table](orders-dashboard.md#orders-details-table) - [Usage details table](usage-dashboard.md#usage-details-table)
+- [Revenue details table](revenue-dashboard.md#data-dictionary-table)
+- [Quality of service table](quality-of-service-dashboard.md#offer-deployment-details)
## Customers report queries
These sample queries apply to the Revenue report.
| List of non-trial transactions for subscription-based billing model | `SELECT BillingAccountId, OfferName, OfferType, TrialDeployment, EstimatedRevenueUSD, EarningAmountUSD FROM ISVRevenue WHERE TrialDeployment='False' and BillingModel='SubscriptionBased'` | |||
+## Quality of service report queries
+
+This sample query applies to the Quality of service report.
+
+| **Query Description** | **Sample Query** |
+| | - |
+| Show deployment status of offers for last 6 months | `SELECT OfferId, Sku, DeploymentStatus, DeploymentCorrelationId, SubscriptionId, CustomerTenantId, CustomerName, TemplateType, StartTime, EndTime, DeploymentDurationInMilliSeconds, DeploymentRegion FROM ISVQualityOfService TIMESPAN LAST_6_MONTHS` |
+|||
+ ## Next steps - [APIs for accessing commercial marketplace analytics data](analytics-available-apis.md)
media-services Configure Connect Nodejs Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/configure-connect-nodejs-howto.md
This article shows you how to connect to the Azure Media Services v3 node.js SDK
- An installation of Visual Studio Code. - Install [Node.js](https://nodejs.org/en/download/).-- Install [Typescript](https://www.typescriptlang.org/download).
+- Install [TypeScript](https://www.typescriptlang.org/download).
- [Create a Media Services account](./account-create-how-to.md). Be sure to remember the resource group name and the Media Services account name. - Create a service principal for your application. See [access APIs](./access-api-howto.md).<br/>**Pro tip!** Keep this window open or copy everything in the JSON tab to Notepad. - Make sure to get the latest version of the [AzureMediaServices SDK for JavaScript](https://www.npmjs.com/package/@azure/arm-mediaservices).
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-backup-restore.md
Backup redundancy ensures that your database meets its availability and durabili
- **Zone-redundant backup storage** : When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the availability zone in which your server is hosted, but are also replicated to another availability zone in the same region. This option can be leveraged for scenarios that require high availability or for restricting replication of data to within a country/region to meet data residency requirements. Also this provides at least 99.9999999999% (12 9's) durability of Backups objects over a given year. One can select Zone-Redundant High Availability option at server create time to ensure zone-redundant backup storage. High Availability for a server can be disabled post create however the backup storage will continue to remain zone-redundant. -- **Geo-Redundant backup storage** : When the backups are stored in geo-redundant backup storage, multiple copies are not only stored within the region in which your server is hosted, but are also replicated to it's geo-paired region. This provides better protection and ability to restore your server in a different region in the event of a disaster. Also this provides at least 99.99999999999999% (16 9's) durability of Backups objects over a given year. One can enable Geo-Redundancy option at server create time to ensure geo-redundant backup storage. Geo redundancy is supported for servers hosted in any of the [Azure paired regions](../../availability-zones/cross-region-replication-azure.md).
+- **Geo-Redundant backup storage** : When the backups are stored in geo-redundant backup storage, multiple copies are not only stored within the region in which your server is hosted, but are also replicated to its geo-paired region. This provides better protection and ability to restore your server in a different region in the event of a disaster. Also this provides at least 99.99999999999999% (16 9's) durability of Backups objects over a given year. One can enable Geo-Redundancy option at server create time to ensure geo-redundant backup storage. Geo redundancy is supported for servers hosted in any of the [Azure paired regions](overview.md#azure-regions).
> [!NOTE] > Geo-redundancy and zone-redundant High Availability to support zone redundancy are currently surfaced as create time operations only. ## Moving from other backup storage options to geo-redundant backup storage
-Configuring geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. However you can still move your existing backups storage to geo-redundant storage using the following suggested ways :
+Configuring geo-redundant storage for backup is only allowed during server creation. Once the server is provisioned, you cannot change the backup storage redundancy option. However, you can still move your existing backup storage to geo-redundant storage in the following ways:
- **Moving from locally redundant to geo-redundant backup storage** - In order to move your backup storage from locally redundant storage to geo-redundant storage, you can perform a point-in-time restore operation and change the Compute + Storage server configuration to enable Geo-redundancy for the locally redundant source server. Same Zone Redundant HA servers can also be restored as a geo-redundant server in a similar fashion as the underlying backup storage is locally redundant for the same.
Configuring geo-redundant storage for backup is only allowed during server creat
Backups are retained based on the backup retention period setting on the server. You can select a retention period of 1 to 35 days, with a default retention period of seven days. You can set the retention period during server creation or later by updating the backup configuration using the Azure portal.
-The backup retention period governs how far back in time can a point-in-time restore operation be performed, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example - if the backup retention period is set to seven days, the recovery window is considered last seven days. In this scenario, all the backups required to restore the server in last seven days are retained. With a backup retention window of seven days, database snapshots and transaction log backups are stored for the last eight days (1 day prior to the window).
+The backup retention period governs how far back in time a point-in-time restore operation can be performed, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to seven days, the recovery window is the last seven days. In this scenario, all the backups required to restore the server in the last seven days are retained. With a backup retention window of seven days, database snapshots and transaction log backups are stored for the last eight days (1 day prior to the window).
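As a rough illustration of updating the retention setting from the CLI (server and resource group names are placeholders; confirm the parameter with `az mysql flexible-server update --help`):

```bash
# Set a 14-day backup retention period on an existing flexible server (illustrative only).
az mysql flexible-server update \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --backup-retention 14
```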
## Backup storage cost
The primary means of controlling the backup storage cost is by setting the appro
## View Available Full Backups
-The Backup and Restore blade in the Azure portal lists the automated full backups taken daily once. One can use this blade to view the completion timestamps for all available full backups within the serverΓÇÖs retention period and to perform restore operations using these full backups. The list of available backups includes all full automated backups within the retention period, a timestamp showing the successful completion, a timestamp indicating how long a backup will be retained, and a restore action.
+The Backup and Restore blade in the Azure portal lists the automated full backups taken once daily. You can use this blade to view the completion timestamps for all available full backups within the server's retention period and to perform restore operations using these full backups. The list of available backups includes all full automated backups within the retention period, a timestamp showing the successful completion, a timestamp indicating how long a backup will be retained, and a restore action.
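Recent Azure CLI versions also expose the available backups outside the portal. The sketch below uses placeholder names and assumes your CLI version includes the `backup` subgroup for flexible servers.

```bash
# List the automated backups and their completion timestamps (illustrative only).
az mysql flexible-server backup list \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --output table
```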
## Restore In Azure Database for MySQL, performing a restore creates a new server from the original server's backups. There are two types of restore available: -- Point-in-time restore : is available with either backup redundancy option and creates a new server in the same region as your original server.-- Geo-restore : is available only if you configured your server for geo-redundant storage and it allows you to restore your server to the geo-paired region. Geo-restore to other regions is not supported currently.
+- Point-in-time restore: is available with either backup redundancy option and creates a new server in the same region as your original server.
+- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to the geo-paired region. Geo-restore to other regions is not supported currently.
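For example, a point-in-time restore from the CLI might look like the following sketch; the server names are placeholders and the timestamp must fall inside your retention window.

```bash
# Restore to a new server from a specific point in time (UTC); illustrative only.
az mysql flexible-server restore \
  --resource-group myResourceGroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-time "2022-01-18T13:10:00+00:00"
```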
The estimated time for the recovery of the server depends on several factors:
You can choose between latest restore point, custom restore point and fastest re
- **Custom restore point**: This will allow you to choose any point in time within the retention period defined for this flexible server. This option is useful to restore the server at the precise point in time to recover from a user error. - **Fastest restore point**: This option allows users to restore the server in the fastest time possible for a given day within the retention period defined for their flexible server. Fastest restore is possible by choosing the restore point in time at which the full backup is completed. This restore operation simply restores the full snapshot backup and doesn't require restore or recovery of logs, which makes it fast. We recommend you select a full backup timestamp that is later than the earliest restore point in time for a successful restore operation.
-The estimated time of recovery depends on several factors including the database sizes, the transaction log backup size, the compute size of the SKU, and the time of the restore as well. The transaction log recovery is the most time consuming part of the restore process. If the restore time is chosen closer to the snapshot backup schedule, the restore operations are faster since transaction log application is minimal. To estimate the accurate recovery time for your server, we highly recommend testing it in your environment as it has too many environment specific variables.
+The estimated time of recovery depends on several factors including the database sizes, the transaction log backup size, the compute size of the SKU, and the time of the restore as well. The transaction log recovery is the most time consuming part of the restore process. If the restore time is chosen closer to the snapshot backup schedule, the restore operations are faster since transaction log application is minimal. To estimate the accurate recovery time for your server, we highly recommend testing it in your environment as it has too many environment-specific variables.
> [!IMPORTANT] > If you are restoring a flexible server configured with zone redundant high availability, the restored server will be configured in the same region and zone as your primary server, and deployed as a single flexible server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
The estimated time of recovery depends on several factors including the database
## Geo-restore
-You can restore a server to it's [geo-paired region](../../availability-zones/cross-region-replication-azure.md) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is not supported currently.
+You can restore a server to its [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is not supported currently.
Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in the geo-paired region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
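A geo-restore from the CLI might look like the sketch below. The command and its parameters are assumptions based on the flexible server CLI and depend on your Azure CLI version; the names and target region are placeholders, and the target must be the geo-paired region.

```bash
# Geo-restore the latest geo-replicated backup to the paired region (illustrative only).
az mysql flexible-server geo-restore \
  --resource-group myResourceGroup \
  --name mydemoserver-geo \
  --source-server mydemoserver \
  --location westus
```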
After a restore from either **latest restore point** or **custom restore point**
### Backup related questions - **How do I backup my server?**
-By default, Azure Database for MySQL enables automated backups of your entire server (encompassing all databases created) with a default 7 day retention period. The only way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to backup Azure Database for MySQL to a Blob storage, refer to our tech community blog [Backup Azure Database for MySQL to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830).
+By default, Azure Database for MySQL enables automated backups of your entire server (encompassing all databases created) with a default 7-day retention period. The only way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to back up Azure Database for MySQL to Blob storage, refer to our tech community blog [Backup Azure Database for MySQL to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830).
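A minimal mysqldump sketch for a manual backup follows; the server, user, and database names are placeholders.

```bash
# Dump one database from a flexible server to a local file (illustrative only).
mysqldump \
  --host=mydemoserver.mysql.database.azure.com \
  --user=myadmin \
  --password \
  --single-transaction \
  --databases mydb > mydb-backup.sql
```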
- **Can I configure automatic backups to be retained for long term?** No, currently we only support a maximum of 35 days of automated backup retention. You can take manual backups and use them for long-term retention requirements.
No, backups are triggered internally as part of the managed service and have no
Azure Database for MySQL automatically creates server backups and stores them in user-configured, locally redundant storage or in geo-redundant storage. These backup files can't be exported. The default backup retention period is seven days. You can optionally configure the database backup from 1 to 35 days. - **How can I validate my backups?**
-The best way to validate availability of successfully completed backups is to view the full automated backups taken within the retention period in the Backup and Restore blade. If a backup fails it will not be listed in the available backups list and our backup service will try every 20 mins to take a backup until a successful backup is taken. These backup failures are due to heavy transactional production loads on the server.
+The best way to validate availability of successfully completed backups is to view the full automated backups taken within the retention period in the Backup and Restore blade. If a backup fails, it will not be listed in the available backups list, and our backup service will try every 20 minutes to take a backup until a successful backup is taken. These backup failures are typically due to heavy transactional production loads on the server.
- **Where can I see the backup usage?** In the Azure portal, under Monitoring tab - Metrics section, you can find the [Backup Storage Used](./concepts-monitoring.md) metric which can help you monitor the total backup usage.
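You can also query the same metric from the CLI. The sketch below assumes the metric name `backup_storage_used` and a flexible server resource ID; confirm both in the portal's Metrics blade before relying on it.

```bash
# Query backup storage usage over one-hour intervals (illustrative only).
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/flexibleServers/mydemoserver" \
  --metric backup_storage_used \
  --interval PT1H \
  --output table
```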
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-business-continuity.md
The table below illustrates the features that Flexible server offers.
| - | -- | | | **Backup & Recovery** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained for any period between 1 to 35 days. You will be able to restore your database server to any point in time within your backup retention period. Recovery time will be dependent on the size of the data to restore + the time to perform log recovery. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details. |Backup data remains within the region | | **Local redundant backup** | Flexible server backups are automatically and securely stored in a local redundant storage within a region and in same availability zone. The locally redundant backups replicate the server backup data files three times within a single physical location in the primary region. Locally redundant backup storage provides at least 99.999999999% (11 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Applicable in all regions |
-| **Geo-redundant backup** | Flexible server backups can be configured as geo-redundant at create time. Enabling Geo-redundancy replicates the server backup data files in the primary region’s paired region to provide regional resiliency. Geo-redundant backup storage provides at least 99.99999999999999% (16 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Available in all [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) |
+| **Geo-redundant backup** | Flexible server backups can be configured as geo-redundant at create time. Enabling Geo-redundancy replicates the server backup data files in the primary region’s paired region to provide regional resiliency. Geo-redundant backup storage provides at least 99.99999999999999% (16 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Available in all [Azure paired regions](overview.md#azure-regions) |
| **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. This protects from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available.| | **Premium file shares** | Database files are stored in a highly durable and reliable Azure premium file shares that provide data redundancy with three copies of replica stored within an availability zone with automatic data recovery capabilities. Refer to [Premium File shares](../../storage/files/storage-how-to-create-file-share.md) for more details. | Data stored within an availability zone |
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| | | | | | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
One advantage of running your workload in Azure is its global reach. The flexibl
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
## Contacts
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/select-right-deployment-type.md
The main differences between these options are listed in the following table:
| Fast restore point | No | Yes | No | | Ability to restore on a different zone | Not supported | Yes | Yes | | Ability to restore to a different VNET | No | Yes | Yes |
-| Ability to restore to a different region | Yes (Geo-redundant) | No | User Managed |
-| Ability to restore a deleted server | Yes | No | No |
+| Ability to restore to a different region | Yes (Geo-redundant) | Yes (Geo-redundant) | User Managed |
+| Ability to restore a deleted server | Yes | Yes | No |
| [**Disaster Recovery**](flexible-server/concepts-business-continuity.md) | | | |
-| DR across Azure regions | Using cross region read replicas, geo-redundant backup | Not supported | User Managed |
+| DR across Azure regions | Using cross region read replicas, geo-redundant backup | Using geo-redundant backup | User Managed |
| Automatic failover | No | Not Supported | No | | Can use the same r/w endpoint | No | Not Supported | No | | [**Monitoring**](flexible-server/concepts-monitoring.md) | | | |
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-java-liberty-app.md
keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty
-# Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift 4 cluster
+# Deploy a Java application with Open Liberty/WebSphere Liberty on an ARO cluster
-This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jakarta.ee/), or [MicroProfile](https://microprofile.io/) application on the Open Liberty/WebSphere Liberty runtime and then deploy the containerized application to an Azure Red Hat OpenShift (ARO) 4 cluster using the Open Liberty Operator. This article will walk you through preparing a Liberty application, building the application Docker image and running the containerized application on an ARO 4 cluster. For more details on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more details on IBM WebSphere Liberty see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
+This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jakarta.ee/), or [MicroProfile](https://microprofile.io/) application on the Open Liberty/WebSphere Liberty runtime and then deploy the containerized application to an Azure Red Hat OpenShift (ARO) 4 cluster using the Open Liberty Operator. This article will walk you through preparing a Liberty application, building the application Docker image, and running the containerized application on an ARO 4 cluster. For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
[!INCLUDE [aro-support](includes/aro-support.md)]
Complete the following prerequisites to successfully walk through this guide.
1. Clone the code for this sample on your local system. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aro). 1. Follow the instructions in [Create an Azure Red Hat OpenShift 4 cluster](./tutorial-create-cluster.md).
- Though the "Get a Red Hat pull secret" step is labeled as optional, **it is required for this article**. The pull secret enables your Azure Red Hat OpenShift cluster to find the Open Liberty Operator.
+ Though the "Get a Red Hat pull secret" step is labeled as optional, **it is required for this article**. The pull secret enables your Azure Red Hat OpenShift cluster to find the Open Liberty Operator.
If you plan to run memory-intensive applications on the cluster, specify the proper virtual machine size for the worker nodes using the `--worker-vm-size` parameter. For example, `Standard_E4s_v3` is the minimum virtual machine size to install the Elasticsearch Operator on a cluster. For more information, see:
Complete the following prerequisites to successfully walk through this guide.
1. Connect to the cluster by following the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](./tutorial-connect-cluster.md). * Be sure to follow the steps in "Install the OpenShift CLI" because we'll use the `oc` command later in this article.
- * Write down the cluster console URL which looks like `https://console-openshift-console.apps.<random>.<region>.aroapp.io/`.
+ * Write down the cluster console URL. It will look like `https://console-openshift-console.apps.<random>.<region>.aroapp.io/`.
* Take note of the `kubeadmin` credentials. 1. Verify you can sign in to the OpenShift CLI with the token for user `kubeadmin`.
Complete the following prerequisites to successfully walk through this guide.
The steps in this tutorial create a Docker image which must be pushed to a container registry accessible to OpenShift. The simplest option is to use the built-in registry provided by OpenShift. To enable the built-in container registry, follow the steps in [Configure built-in container registry for Azure Red Hat OpenShift 4](built-in-container-registry.md). Three items from those steps are used in this article. * The username and password of the Azure AD user for signing in to the OpenShift web console.
-* The output of `oc whoami` after following the steps for signing in to the OpenShift CLI. This value is called **aad-user** for discussion.
+* The output of `oc whoami` after following the steps for signing in to the OpenShift CLI. This value is called **aad-user** for discussion.
* The container registry URL. Note these items down as you complete the steps to enable the built-in container registry.
Note these items down as you complete the steps to enable the built-in container
### Create an administrator for the demo project
-Besides image management, the **aad-user** will also be granted administrative permissions for managing resources in the demo project of the ARO 4 cluster. Sign in to the OpenShift CLI and grant the **aad-user** the necessary privileges by following these steps.
+Besides image management, the **aad-user** will also be granted administrative permissions for managing resources in the demo project of the ARO 4 cluster. Sign in to the OpenShift CLI and grant the **aad-user** the necessary privileges by following these steps.
1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials. 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
Besides image management, the **aad-user** will also be granted administrative p
### Install the Open Liberty OpenShift Operator
-After creating and connecting to the cluster, install the Open Liberty Operator. The main starting page for the Open Liberty Operator is on [GitHub](https://github.com/OpenLiberty/open-liberty-operator).
+After creating and connecting to the cluster, install the Open Liberty Operator. The main starting page for the Open Liberty Operator is on [GitHub](https://github.com/OpenLiberty/open-liberty-operator).
1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials. 2. Navigate to **Operators** > **OperatorHub** and search for **Open Liberty**.
After creating and connecting to the cluster, install the Open Liberty Operator.
![create operator subscription for Open Liberty Operator](./media/howto-deploy-java-liberty-app/install-operator.png) 6. Select **Install** and wait a minute or two until the installation completes.
-7. Observe the Open Liberty Operator is successfully installed and ready for use. If you don't, diagnose and resolve the problem before continuing.
+7. Observe that the Open Liberty Operator is successfully installed and ready for use. If it isn't, diagnose and resolve the problem before continuing.
+ :::image type="content" source="media/howto-deploy-java-liberty-app/open-liberty-operator-installed.png" alt-text="Installed Operators showing Open Liberty is installed.":::
+### Create an Azure Database for MySQL
+
+Follow the instructions below to set up an Azure Database for MySQL for use with your app. If your application doesn't require a database, you can skip this section.
+
+1. Create an Azure Database for MySQL server by following the steps in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal). Return to this document after creating the database. (A minimal CLI alternative is sketched after this list.)
+ > [!NOTE]
+ >
+ > * At the **Basics** step, write down the ***Server name**.mysql.database.azure.com*, **Server admin login** and **Password**.
+
+2. Once your database is created, open **your MySQL server** > **Connection security** and complete the following settings:
+ 1. Set **Allow access to Azure services** to **Yes**.
+ 2. Select **Add current client IP address**.
+ 3. Set **Minimal TLS Version** to **>1.0** and select **Save**.
+
+ ![configure mysql database connection security rule](./media/howto-deploy-java-liberty-app/configure-mysql-database-connection-security.png)
+
+3. Open **your MySQL server** > **Connection strings** > Select **JDBC**. Write down the **Port number** that follows the server address. For example, **3306** is the port number in the example below.
+
+ ```text
+ String url ="jdbc:mysql://<Database name>.mysql.database.azure.com:3306/{your_database}?useSSL=true&requireSSL=false"; myDbConn = DriverManager.getConnection(url, "<Server admin login>", {your_password});
+ ```
+
+4. If you didn't create a database in the above steps, follow the steps in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal#connect-to-the-server-by-using-mysqlexe](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal#connect-to-the-server-by-using-mysqlexe) to create one. Return to this document after creating the database.
+ > [!NOTE]
+ >
+ > * Write down the **Database name** you created.
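If you'd rather create the server from the CLI than from the portal, the following is a minimal sketch using hypothetical names and a general purpose SKU. It is not part of the documented steps above, so verify the parameters with `az mysql server create --help`.

```bash
# Create an Azure Database for MySQL single server (illustrative only; names and SKU are placeholders).
az mysql server create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --location eastus \
  --admin-user myadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_2

# The 0.0.0.0 start/end addresses correspond to the "Allow access to Azure services" setting.
az mysql server firewall-rule create \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name AllowAllAzureIPs \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```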
+ ## Prepare the Liberty application
-We'll use a Java EE 8 application as our example in this guide. Open Liberty is a [Java EE 8 full profile](https://javaee.github.io/javaee-spec/javadocs/) compatible server, so it can easily run the application. Open Liberty is also [Jakarta EE 8 full profile compatible](https://jakarta.ee/specifications/platform/8/apidocs/).
+We'll use a Java EE 8 application as our example in this guide. Open Liberty is a [Java EE 8 full profile](https://javaee.github.io/javaee-spec/javadocs/) compatible server, so it can easily run the application. Open Liberty is also [Jakarta EE 8 full profile compatible](https://jakarta.ee/specifications/platform/8/apidocs/).
### Run the application on Open Liberty
-To run the application on Open Liberty, you need to create an Open Liberty server configuration file so that the [Liberty Maven plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin) can package the application for deployment. The Liberty Maven plugin is not required to deploy the application to OpenShift. However, we'll use it in this example with Open LibertyΓÇÖs developer (dev) mode. Developer mode lets you easily run the application locally. Complete the following steps on your local computer.
+To run the application on Open Liberty, you need to create an Open Liberty server configuration file so that the [Liberty Maven plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin) can package the application for deployment. The Liberty Maven plugin is not required to deploy the application to OpenShift. However, we'll use it in this example with Open Liberty's developer (dev) mode. Developer mode lets you easily run the application locally. Complete the following steps on your local computer.
+
+# [with DB connection](#tab/with-mysql-devc)
+
+Follow the steps in this section to prepare the sample application for later use in this article. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
+
+#### Check out the application
+
+Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aro).
+There are three samples in the repository. We will use *open-liberty-on-aro/3-integration/connect-db/mysql*. Here is the file structure of the application.
+
+```
+open-liberty-on-aro/3-integration/connect-db/mysql
+├─ src/main/
+│ ├─ aro/
+│ │ ├─ db-secret.yaml
+│ │ ├─ openlibertyapplication.yaml
+│ ├─ docker/
+│ │ ├─ Dockerfile
+│ │ ├─ Dockerfile-local
+│ │ ├─ Dockerfile-wlp
+│ │ ├─ Dockerfile-wlp-local
+│ ├─ liberty/config/
+│ │ ├─ server.xml
+│ ├─ java/
+│ ├─ resources/
+│ ├─ webapp/
+├─ pom.xml
+```
+
+The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
+
+In the *aro* directory, we placed two deployment files. *db-secret.yaml* is used to create [Secrets](https://docs.openshift.com/container-platform/4.6/nodes/pods/nodes-pods-secrets.html) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
+
+In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an ARO deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an ARO deployment respectively, but instead work with WebSphere Liberty.
+
+In the *liberty/config* directory, the *server.xml* is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+
+#### Build project
+
+Now that you have gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
+
+```bash
+cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
+
+# The following variables will be used for deployment file generation
+export DB_SERVER_NAME=<Server name>.mysql.database.azure.com
+export DB_PORT_NUMBER=3306
+export DB_NAME=<Database name>
+export DB_USER=<Server admin username>@<Server name>
+export DB_PASSWORD=<Server admin password>
+export NAMESPACE=open-liberty-demo
+
+mvn clean install
+```
+
+#### Test your application locally
+
+Use the `liberty:devc` command to run and test the project locally before dealing with any Azure complexity. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
+In the sample application, we've prepared Dockerfile-local and Dockerfile-wlp-local for use with `liberty:devc`.
+
+1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
+
+1. Start the application in `liberty:devc` mode
+
+ ```bash
+ cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
+
+ # If you are running with Open Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
+
+ # If you are running with WebSphere Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ ```
+
+1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working. A quick `curl` check is sketched after this list.
+
+1. Press `Ctrl+C` to stop `liberty:devc` mode.
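If you want a quick non-browser check of the locally running server (assuming the default `9080` HTTP port used above), a simple `curl` probe looks like this:

```bash
# Print the HTTP status code returned by the locally running application.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9080/
```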
+
+# [without DB connection](#tab/without-mysql-dev)
1. Copy `2-simple/src/main/liberty/config/server.xml` to `1-start/src/main/liberty/config`, overwriting the existing zero-length file. This `server.xml` configures the Open Liberty server with Java EE features.
-1. Copy `2-simple/pom.xml` to `1-start/pom.xml`. This step adds the `liberty-maven-plugin` to the POM.
+1. Copy `2-simple/pom.xml` to `1-start/pom.xml`. This step adds the `liberty-maven-plugin` to the POM.
1. Change directory to `1-start` of your local clone. 1. Run `mvn clean package` in a console to generate a war package `javaee-cafe.war` in the directory `./target`. 1. Run `mvn liberty:dev` to start Open Liberty in dev mode.
To run the application on Open Liberty, you need to create an Open Liberty serve
The directory `2-simple` of your local clone shows the Maven project with the above changes already applied. ++ ## Prepare the application image To deploy and run your Liberty application on an ARO 4 cluster, containerize your application as a Docker image using [Open Liberty container images](https://github.com/OpenLiberty/ci.docker) or [WebSphere Liberty container images](https://github.com/WASdev/ci.docker).
To deploy and run your Liberty application on an ARO 4 cluster, containerize you
Complete the following steps to build the application image:
+# [with DB connection](#tab/with-mysql-image)
+
+After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+
+```bash
+cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
+
+# Fetch maven artifactId as image name, maven build version as image version
+IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
+cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql/target
+
+# If you are building with the Open Liberty base image
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
+# If you are building with the WebSphere Liberty base image
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
+```
+
+### Push the image to the container image registry
+
+When you're satisfied with the state of the application, push it to the built-in container image registry by following the instructions below.
+
+#### Log in to the OpenShift CLI as the Azure AD user
+
+1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user.
+
+ 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console.
+ 1. Select **openid**
+
+ > [!NOTE]
+ > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
+1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
+ 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
+ 1. Sign in to a new tab window with the same user if necessary.
+ 1. Select **Display Token**.
+ 1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
+
+ ```bash
+ oc login --token=XOdASlzeT7BHT0JZW6Fd4dl5EwHpeBlN27TAdWHseob --server=https://api.aqlm62xm.rnfghf.aroapp.io:6443
+ Logged into "https://api.aqlm62xm.rnfghf.aroapp.io:6443" as "kube:admin" using the token provided.
+
+ You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+ Using project "open-liberty-demo".
+ ```
+
+#### Push the container image to the container registry for OpenShift
+
+Execute these commands to push the image to the container registry for OpenShift.
+
+```bash
+# Note: replace "<Container_Registry_URL>" with the fully qualified name of the registry
+Container_Registry_URL=<Container_Registry_URL>
+
+# Create a new tag with registry info that refers to source image
+docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${Container_Registry_URL}/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_VERSION}
+
+# Sign in to the built-in container image registry
+docker login -u $(oc whoami) -p $(oc whoami -t) ${Container_Registry_URL}
+```
+
+Successful output will look similar to the following.
+
+```bash
+WARNING! Using --password via the CLI is insecure. Use --password-stdin.
+Login Succeeded
+```
+
+Push image to the built-in container image registry with the following command.
+
+```bash
+docker push ${Container_Registry_URL}/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_VERSION}
+```
+
+# [without DB connection](#tab/without-mysql-mage)
+ 1. Change directory to `2-simple` of your local clone. 2. Run `mvn clean package` to package the application. 3. Run one of the following commands to build the application image.
When you're satisfied with the state of the application, push it to the built-in
> [!NOTE] > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
-1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
+1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**. 1. Sign in to a new tab window with the same user if necessary. 1. Select **Display Token**.
Push image to the built-in container image registry with the following command.
docker push ${Container_Registry_URL}/open-liberty-demo/javaee-cafe-simple:1.0.0 ``` +++ ## Deploy application on the ARO 4 cluster Now you can deploy the sample Liberty application to the Azure Red Hat OpenShift 4 cluster you created earlier when working through the prerequisites.
+# [with DB from web console](#tab/with-mysql-deploy-console)
+
+### Deploy the application from the web console
+
+Because we use the Open Liberty Operator to manage Liberty applications, we need to create an instance of its *Custom Resource Definition*, of type "OpenLibertyApplication". The Operator will then take care of all aspects of managing the OpenShift resources required for deployment.
+
+1. Sign in to the OpenShift web console from your browser using the credentials of the Azure AD user.
+1. Expand **Home**, Select **Projects** > **open-liberty-demo**.
+1. Navigate to **Operators** > **Installed Operators**.
+1. In the middle of the page, select **Open Liberty Operator**.
+1. In the middle of the page, select **Open Liberty Application**. The navigation of items in the user interface mirrors the actual containment hierarchy of technologies in use.
+ <!-- Diagram source https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/diagrams/aro-java-containment.vsdx -->
+ ![ARO Java Containment](./media/howto-deploy-java-liberty-app/aro-java-containment.png)
+1. Select **Create OpenLibertyApplication**
+1. Replace the generated yaml with yours, which is located at `<path-to-repo>/3-integration/connect-db/mysql/target/openlibertyapplication.yaml`.
+1. Select **Create**. You'll be returned to the list of OpenLibertyApplications.
+1. Navigate to **Workloads** > **Secrets**.
+1. Select **Create** > From YAML.
+1. Replace the generated yaml with yours, which is located at `<path-to-repo>/3-integration/connect-db/mysql/target/db-secret.yaml`.
+1. Select **Create**. You'll be returned to the Secret details page.
+1. Select **Add Secret to workload**, then select **javaee-cafe-mysql** from the dropdown box, then select **Save**.
+1. Navigate to **Operators** > **Installed Operators** > **Open Liberty Operator** > **Open Liberty Application**.
+1. Select **javaee-cafe-mysql**.
+1. In the middle of the page, select **Resources**.
+1. In the table, select the link for **javaee-cafe-mysql** with the **Kind** of **Route**.
+1. On the page that opens, select the link below **Location**.
+
+You'll see the application home page opened in the browser.
+
+# [with DB from CLI](#tab/with-mysql-deploy-cli)
+
+### Deploy the application from CLI
+
+Instead of using the web console GUI, you can deploy the application from the CLI. If you haven't already done so, download and install the `oc` command-line tool by following the steps in Red Hat documentation: [Getting Started with the CLI](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html).
+
+Now you can deploy the sample Liberty application to the ARO 4 cluster with the following steps.
+1. Log in to the OpenShift web console from your browser using the credentials of the Azure AD user.
+1. [Log in to the OpenShift CLI with the token for the Azure AD user](https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/guides/howto-deploy-java-liberty-app.md#log-in-to-the-openshift-cli-with-the-token).
+1. Run the following commands to deploy the application.
+ ```bash
+ # Change directory to "<path-to-repo>/3-integration/connect-db/mysql"
+ cd <path-to-repo>/3-integration/connect-db/mysql
+
+ # Change project to "open-liberty-demo"
+ oc project open-liberty-demo
+
+ # Create DB secret
+ oc create -f db-secret.yaml
+
+ # Create the deployment
+ oc create -f openlibertyapplication.yaml
+
+ # Check if OpenLibertyApplication instance is created
+ oc get openlibertyapplication ${IMAGE_NAME}
+
+ # Check if deployment created by Operator is ready
+ oc get deployment ${IMAGE_NAME}
+
+ # Get host of the route
+ HOST=$(oc get route ${IMAGE_NAME} --template='{{ .spec.host }}')
+ echo "Route Host: $HOST"
+ ```
+Once the Liberty application is up and running, open the output of **Route Host** in your browser to visit the application home page.
++
+# [without DB from web console](#tab/without-mysql-deploy-console)
### Deploy the application from the web console
Because we use the Open Liberty Operator to manage Liberty applications, we need
1. Expand **Home**, Select **Projects** > **open-liberty-demo**. 1. Navigate to **Operators** > **Installed Operators**. 1. In the middle of the page, select **Open Liberty Operator**.
-1. In the middle of the page, select **Open Liberty Application**. The navigation of items in the user interface mirrors the actual containment hierarchy of technologies in use.
+1. In the middle of the page, select **Open Liberty Application**. The navigation of items in the user interface mirrors the actual containment hierarchy of technologies in use.
<!-- Diagram source https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/diagrams/aro-java-containment.vsdx --> ![ARO Java Containment](./media/howto-deploy-java-liberty-app/aro-java-containment.png) 1. Select **Create OpenLibertyApplication**
When you're done with the application, follow these steps to delete the applicat
1. In the middle of the page select **Open Liberty Application**. 1. Select the vertical ellipsis (three vertical dots) then select **Delete OpenLiberty Application**.
+# [without DB from CLI](#tab/without-mysql-deploy-cli)
+ ### Deploy the application from CLI Instead of using the web console GUI, you can deploy the application from the CLI. If you haven't already done so, download and install the `oc` command-line tool by following Red Hat documentation [Getting Started with the CLI](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html). 1. Sign in to the OpenShift web console from your browser using the credentials of the Azure AD user. 2. Sign in to the OpenShift CLI with the token for the Azure AD user.
-3. Change directory to `2-simple` of your local clone, and run the following commands to deploy your Liberty application to the ARO 4 cluster. Command output is also shown inline.
+3. Change directory to `2-simple` of your local clone, and run the following commands to deploy your Liberty application to the ARO 4 cluster. Command output is also shown inline.
```bash # Switch to namespace "open-liberty-demo" where resources of demo app will belong to
Instead of using the web console GUI, you can deploy the application from the CL
javaee-cafe-simple 1/1 1 0 102s ```
-4. Check to see `1/1` under the `READY` column before you continue. If not, investigate and resolve the problem before continuing.
+4. Check to see `1/1` under the `READY` column before you continue. If not, investigate and resolve the problem before continuing.
5. Discover the host of route to the application with the `oc get route` command, as shown here. ```bash
Delete the application from the CLI by executing this command.
```bash oc delete -f openlibertyapplication.yaml ```+ ## Clean up resources
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
The table below lists the available resources that support a private endpoint:
| **SignalR** | Microsoft.SignalRService/SignalR | signalr | | **SignalR** | Microsoft.SignalRService/webPubSub | webpubsub | | **Azure SQL Database** | Microsoft.Sql/servers | Sql Server (sqlServer) |
-| **Azure SQL Managed Instance** | Microsoft.Sql/managedInstances | Sql Managed Instance (managedInstance) |
| **Azure Storage** | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) | | **Azure File Sync** | Microsoft.StorageSync/storageSyncServices | File Sync Service | | **Azure Synapse** | Microsoft.Synapse/privateLinkHubs | synapse |
remote-rendering Install Remote Rendering Unity Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/unity/install-remote-rendering-unity-package.md
# Install the Remote Rendering package for Unity Azure Remote Rendering uses a Unity package to encapsulate the integration into Unity.
-This package contains the entire C# API as well as all plugin binaries required to use Azure Remote Rendering with Unity.
+This package contains the entire C# API and all plugin binaries required to use Azure Remote Rendering with Unity.
Following Unity's naming scheme for packages, the package is called **com.microsoft.azure.remote-rendering**.
+The package is not part of the [ARR samples repository](https://github.com/Azure/azure-remote-rendering), and it is not available from Unity's internal package registry.
You can choose one of the following options to install the Unity package. ## Install Remote Rendering package using the Mixed Reality Feature Tool
-[The Mixed Reality Feature Tool](/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool) ([download](https://aka.ms/mrfeaturetool)) is a tool used to integrate Mixed Reality feature packages into Unity projects. The package is not part of the [ARR samples repository](https://github.com/Azure/azure-remote-rendering), and it is not available from Unity's internal package registry.
+The [Mixed Reality Feature Tool](/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool) ([download](https://aka.ms/mrfeaturetool)) integrates Mixed Reality feature packages into Unity projects.
+
+To add the package to a project, you need to:
-To add the package to a project you need to:
1. [Download the Mixed Reality Feature Tool](https://aka.ms/mrfeaturetool) 1. Follow the [full instructions](/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool) on how to use the tool.
-1. On the **Discover Features** page tick the box for the **Microsoft Azure Remote Rendering** package and select the version of the package you wish to add to your project
+1. On the **Discover Features** page, tick the box for the **Microsoft Azure Remote Rendering** package under **Azure Mixed Reality Services** and select the version of the package you wish to add to your project
+1. If you want to use OpenXR, also add the **Mixed Reality OpenXR Plugin** package under **Azure Mixed Reality Services** in the same way.
![Mixed_Reality_feature_tool_package](media/mixed-reality-feature-tool-package.png)
-To update your local package just select a newer version from the Mixed Reality Feature Tool and install it. Updating the package may occasionally lead to console errors. If this occurs, try closing and reopening the project.
+To update your local package, just select a newer version from the Mixed Reality Feature Tool and install it. Updating the package may occasionally lead to console errors. If you see errors in the console, try closing and reopening the project.
## Install Remote Rendering package manually
To install the Remote Rendering package manually, you need to:
1. Download the package from the Mixed Reality Packages NPM feed at `https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Unity-packages/npm/registry`. * You can either use [NPM](https://www.npmjs.com/get-npm) and run the following command to download the package to the current folder.
- ```
+
+ ```cmd
npm pack com.microsoft.azure.remote-rendering --registry https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Unity-packages/npm/registry ```
+ If you want to use OpenXR, run the following command to download the platform support package to the current folder.
+
+ ```cmd
+ npm pack com.microsoft.mixedreality.openxr --registry https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Unity-packages/npm/registry
+ ```
+ * Or you can use the PowerShell script at `Scripts/DownloadUnityPackages.ps1` from the [azure-remote-rendering GitHub repository](https://github.com/Azure/azure-remote-rendering). * Edit the contents of `Scripts/unity_sample_dependencies.json` to+ ```json { "packages": [
To install the Remote Rendering package manually, you need to:
} ```
- * Run the following command in PowerShell to download the package to the provided destination directory.
+ If you want to use OpenXR, you also need the platform support package. Edit the contents of `Scripts/unity_sample_dependencies.json` to
+
+ ```json
+ {
+ "packages": [
+ {
+ "name": "com.microsoft.azure.remote-rendering",
+ "version": "latest",
+ "registry": "https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Unity-packages/npm/registry"
+ },
+ {
+ "name": "com.microsoft.mixedreality.openxr",
+ "version": "latest",
+ "registry": "https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Unity-packages/npm/registry"
+ }
+ ]
+ }
```+
+ * Run the following command in PowerShell to download the package to the provided destination directory.
+
+ ```PowerShell
DownloadUnityPackages.ps1 -DownloadDestDir <destination directory> ```
-1. [Install the downloaded package](https://docs.unity3d.com/Manual/upm-ui-tarball.html) with Unity's Package Manager.
+1. [Install the downloaded package(s)](https://docs.unity3d.com/Manual/upm-ui-tarball.html) with Unity's Package Manager.
-To update your local package just rerun the respective command you used and reimport the package. Updating the package may occasionally lead to console errors. If this occurs, try closing and reopening the project.
+To update a local package, just repeat the respective download steps you used and reimport the package. Updating the package may occasionally lead to console errors. If you see errors in the console, try closing and reopening the project.
## Unity render pipelines Remote Rendering works with both the **:::no-loc text="Universal render pipeline":::** and the **:::no-loc text="Standard render pipeline":::**. For performance reasons, the Universal render pipeline is recommended.
-To use the **:::no-loc text="Universal render pipeline":::**, its package has to be installed in Unity. This can either be done in Unity's **Package Manager** UI (package name **Universal RP**, version 7.3.1 or newer), or through the `Packages/manifest.json` file, as described in the [Unity project setup tutorial](../../tutorials/unity/view-remote-models/view-remote-models.md#include-the-azure-remote-rendering-package).
+To use the **:::no-loc text="Universal render pipeline":::**, its package has to be installed in Unity. The installation can either be done in Unity's **Package Manager** UI (package name **Universal RP**, version 7.3.1 or newer), or through the `Packages/manifest.json` file, as described in the [Unity project setup tutorial](../../tutorials/unity/view-remote-models/view-remote-models.md#include-the-azure-remote-rendering-and-openxr-packages).
## Next steps * [Unity game objects and components](objects-components.md)
-* [Tutorial: View Remote Models](../../tutorials/unity/view-remote-models/view-remote-models.md)
+* [Tutorial: View Remote Models](../../tutorials/unity/view-remote-models/view-remote-models.md)
remote-rendering System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/overview/system-requirements.md
On desktop, it is required to install the latest [Microsoft Visual C++ Redistrib
It's important to use the latest HEVC codec, as newer versions have significant improvements in latency. To check which version is installed on your device: 1. Start the **Microsoft Store**.
-1. Click the **"..."** button in the top right.
-1. Select **Downloads and Updates**.
-1. Search the list for **HEVC Video Extensions from Device Manufacturer**. If this item is not listed under updates, the most recent version is already installed.
+1. Click the **"Library"** button in the bottom left.
+1. Find **HEVC Video Extensions from Device Manufacturer** in the list. If it is not listed under updates, the most recent version is already installed. Otherwise, click the **Get Updates** button and wait for it to install.
1. Make sure the listed codec has at least version **1.0.21821.0**.
-1. Click the **Get Updates** button and wait for it to install.
+ 1. Select the **HEVC Video Extensions from Device Manufacturer** entry from the list.
+ 1. Scroll down to the **Additional Information** section.
+ 1. Check the **Installed version** entry.
## Network
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
To get access to the Azure Remote Rendering service, you first need to [create a
From the Unity Hub, create a new project. In this example, we'll assume the project is being created in a folder called **RemoteRendering**.
-## Include the Azure Remote Rendering package
-
-[Follow the instructions](../../../how-tos/unity/install-remote-rendering-unity-package.md) on how to add the Azure Remote Rendering package to a Unity Project.
+## Include the Azure Remote Rendering and OpenXR packages
+Follow the instructions on how to [add the Azure Remote Rendering and OpenXR packages](../../../how-tos/unity/install-remote-rendering-unity-package.md) to your Unity Project.
## Configure the camera 1. Select the **Main Camera** node.
-1. Open the context menu by right clicking on the *Transform* component and select the **Reset** option:
+1. Open the context menu by right-clicking on the *Transform* component and select the **Reset** option:
- ![reset camera transform](./media/camera-reset-transform.png)
+ ![Screenshot of the Unity inspector for a Transform component. The context menu is opened and Reset is selected.](./media/camera-reset-transform.png)
1. Set **Clear flags** to *Solid Color* 1. Set **Background** to *Black* (#000000), with fully transparent (0) alpha (A)
- ![Color wheel](./media/color-wheel-black.png)
+ ![Screenshot of the Unity Color wheel dialog. The color is set to 0 for all R G B A components.](./media/color-wheel-black.png)
1. Set **Clipping Planes** to *Near = 0.3* and *Far = 20*. This means rendering will clip geometry that is closer than 30 cm or farther than 20 meters.
- ![Unity camera properties](./media/camera-properties.png)
+ ![Screenshot of the Unity inspector for a Camera component.](./media/camera-properties.png)
## Adjust the project settings 1. Open *Edit > Project Settings...* 1. Select **Quality** from the left list menu
-1. Change the **Default Quality Level** of all platforms to *Low*. This setting will enable more efficient rendering of local content and doesn't affect the quality of remotely rendered content.
+ 1. Change the **Default Quality Level** of all platforms to *Low*. This setting will enable more efficient rendering of local content and doesn't affect the quality of remotely rendered content.
- ![change project quality settings](./media/settings-quality.png)
+ ![Screenshot of the Unity Project Settings dialog. The Quality entry is selected in the list on the left. The context menu for the default quality level is opened on the right. The low entry is selected.](./media/settings-quality.png)
1. Select **Graphics** from the left list menu
-1. Change the **Scriptable Rendering Pipeline** setting to *HybridRenderingPipeline*.\
- ![Screenshot that points out where you change the Scriptable Rendering Pipeline setting to HybridRenderingPipeline.](./media/settings-graphics-render-pipeline.png)\
- Sometimes the UI does not populate the list of available pipeline types from the packages. If this occurs, the *HybridRenderingPipeline* asset must be dragged onto the field manually:\
- ![changing project graphics settings](./media/hybrid-rendering-pipeline.png)
+ 1. Change the **Scriptable Rendering Pipeline** setting to *HybridRenderingPipeline*.\
+ ![Screenshot of the Unity Project Settings dialog. The Graphics entry is selected in the list on the left. The button to select a Universal Render Pipeline asset is highlighted.](./media/settings-graphics-render-pipeline.png)\
+ Sometimes the UI does not populate the list of available pipeline types from the packages. If this occurs, the *HybridRenderingPipeline* asset must be dragged onto the field manually:\
+ ![Screenshot of the Unity asset browser and Project Settings dialog. The HybridRenderingPipeline asset is highlighted in the asset browser. An arrow points from the asset to the UniversalRenderPipelineAsset field in project settings.](./media/hybrid-rendering-pipeline.png)
+
+ > [!NOTE]
+ > If you're unable to drag and drop the *HybridRenderingPipeline* asset into the Render Pipeline Asset field (possibly because the field doesn't exist!), ensure your package configuration contains the `com.unity.render-pipelines.universal` package.
+
+1. Select **XR Plugin Management** from the left list menu
+ 1. Click the **Install XR Plugin Management** button.
+ 1. Select the **Universal Windows Platform settings** tab, represented as a Windows icon.
+ 1. Click the **Open XR** checkbox under **Plug-In Providers**
+ 1. If a dialog opens that asks you to enable the native platform backends for the new input system, click **No**.
+
+ ![Screenshot of the Unity Project Settings dialog. The X R Plug-in Management entry is selected in the list on the left. The tab with the windows logo is highlighted on the right. The Open X R checkbox below it is also highlighted.](./media/xr-plugin-management-settings.png)
> [!NOTE]
- > If you're unable to drag and drop the *HybridRenderingPipeline* asset into the Render Pipeline Asset field (possibly because the field doesn't exist!), ensure your package configuration contains the `com.unity.render-pipelines.universal` package.
+ > If the **Microsoft HoloLens feature group** is disabled, the Windows Mixed Reality OpenXR Plugin is missing from your project. Follow the instructions on how to [add the Azure Remote Rendering and OpenXR packages](../../../how-tos/unity/install-remote-rendering-unity-package.md) to install it.
+
+1. Select **OpenXR** from the left list menu
+ 1. Set **Depth Submission Mode** to *Depth 16 Bit*
+ 1. Add the **Microsoft Hand Interaction Profile** to **Interaction Profiles**.
+ 1. Enable these OpenXR features:
+ * **Hand Tracking**
+ * **Mixed Reality Features**
+ * **Motion Controller Model**
+
+ ![Screenshot of the Unity Project Settings dialog. The Open X R sub-entry is selected in the list on the left. Highlights on the right side are placed on the Depth Submission Mode, Interaction Profiles, and Open X R feature settings.](./media/xr-plugin-management-openXR-settings.png)
+
+ > [!NOTE]
+ > If you don't see the required OpenXR features listed, the Windows Mixed Reality OpenXR Plugin is missing from your project. Follow the instructions on how to [add the Azure Remote Rendering and OpenXR packages](../../../how-tos/unity/install-remote-rendering-unity-package.md) to install it.
1. Select **Player** from the left list menu
-1. Select the **Universal Windows Platform settings** tab, represented as a Windows icon.
-1. Change the **XR Settings** to support Windows Mixed Reality as shown below:
- 1. Enable **Virtual Reality Supported**
- 1. Press the '+' button and add **Windows Mixed Reality**
- 1. Set **Depth Format** to *16-Bit Depth*
- 1. Ensure **Depth Buffer Sharing** is enabled
- 1. Set **Stereo Rendering Mode** to *Single Pass Instanced*
-
- ![player settings](./media/xr-player-settings.png)
-
-1. In the same window, above **XR Settings**, expand **Publishing Settings**
-1. Scroll down to **Capabilities** and select:
- * **InternetClient**
- * **InternetClientServer**
- * **SpatialPerception**
- * **PrivateNetworkClientServer** (*optional*). Select this option if you want to connect the Unity remote debugger to your device.
-
-1. Under **Supported Device Families**, enable **Holographic** and **Desktop**
+ 1. Select the **Universal Windows Platform settings** tab, represented as a Windows icon.
+ 1. Expand **Other Settings**
+ 1. Under **Rendering** change **Color Space** to **Linear** and restart Unity when it asks you to.
+ 1. Under **Configuration** change **Active Input Handling** to **Both** and restart Unity when it asks you to.
+ ![Screenshot of the Unity Project Settings dialog. The Player entry is selected in the list on the left. Highlights on the right side are placed on the tab with the Windows logo, the Color Space setting, and the Active input Handling setting.](./media/player-settings-other-settings.png)
+ 1. Expand **Publishing Settings**
+ 1. Scroll down to **Capabilities** and select:
+ * **InternetClient**
+ * **InternetClientServer**
+ * **SpatialPerception**
+ * **PrivateNetworkClientServer** (*optional*). Select this option if you want to connect the Unity remote debugger to your device.
+ 1. Under **Supported Device Families**, enable **Holographic** and **Desktop**
+ ![Screenshot of the Unity Project Settings dialog. The Player entry is selected in the list on the left. Highlights on the right side are placed on the Capabilities and the Supported Device Families settings.](./media/player-settings-publishing-settings.png)
+ 1. Close or dock the **Project Settings** panel 1. Open *File->Build Settings*
-1. Select **Universal Windows Platform**
-1. Configure your settings to match those found below
-1. Press the **Switch Platform** button.\
-![build settings](./media/build-settings.png)
+ 1. Select **Universal Windows Platform**
+ 1. Configure your settings to match those found below
+ 1. Press the **Switch Platform** button.\
+ ![Screenshot of the Unity Build Settings dialog. The Universal Windows Platform entry is selected in the list on the left. Highlights on the right side are placed on the settings dropdown boxes and the Switch Platform button.](./media/build-settings.png)
1. After Unity changes platforms, close the build panel. ## Validate project setup
Perform the following steps to validate that the project settings are correct.
1. Choose the **ValidateProject** entry from the **RemoteRendering** menu in the Unity editor toolbar. 1. Review the **ValidateProject** window for errors and fix project settings where necessary.
- ![Unity editor project validation](./media/remote-render-unity-validation.png)
+ ![Screenshot of the Unity Validate Project dialog. The dialog shows a mixture of successful checks, warnings, and errors.](./media/remote-render-unity-validation.png)
> [!NOTE] > If you use MRTK in your project and you enable the camera subsystem, MRTK will override manual changes that you apply to the camera. This includes fixes from the ValidateProject tool.
Perform the following steps to validate that the project settings are correct.
There are four basic stages to show remotely rendered models, outlined in the flowchart below. Each stage must be performed in order. The next step is to create a script which will manage the application state and proceed through each required stage.
-![ARR stack 0](./media/remote-render-stack-0.png)
+![Diagram of the four stages required to load a model.](./media/remote-render-stack-0.png)
1. In the *Project* pane, under **Assets**, create a new folder called *RemoteRenderingCore*. Then inside *RemoteRenderingCore*, create another folder called *Scripts*. 1. Create a [new C# script](https://docs.unity3d.com/Manual/CreatingAndUsingScripts.html) called **RemoteRenderingCoordinator**. Your project should look like this:
- ![Project hierarchy](./media/project-structure.png)
+ ![Screenshot of Unity Project hierarchy containing the new script.](./media/project-structure.png)
This coordinator script will track and manage the remote rendering state. Of note, some of this code is used for maintaining state, exposing functionality to other components, triggering events, and storing application-specific data that is not *directly* related to Azure Remote Rendering. Use the code below as a starting point, and we'll address and implement the specific Azure Remote Rendering code later in the tutorial.
The remote rendering coordinator and its required script (*ARRServiceUnity*) are
1. Create a new GameObject in the scene (Ctrl+Shift+N or *GameObject->Create Empty*) and name it **RemoteRenderingCoordinator**. 1. Add the *RemoteRenderingCoordinator* script to the **RemoteRenderingCoordinator** GameObject.\
-![Add RemoteRenderingCoordinator component](./media/add-coordinator-script.png)
+![Screenshot of the Unity Add Component dialog. The search text field contains the text RemoteRenderingCoordinator.](./media/add-coordinator-script.png)
1. Confirm the *ARRServiceUnity* script, appearing as *Service* in the inspector, is automatically added to the GameObject. In case you're wondering, this is a result of having `[RequireComponent(typeof(ARRServiceUnity))]` at the top of the **RemoteRenderingCoordinator** script. 1. Add your Azure Remote Rendering credentials, your Account Domain, and the Remote Rendering Domain to the coordinator script:\
-![Add your credentials](./media/configure-coordinator-script.png)
+![Screenshot of the Unity inspector of the Remote Rendering Coordinator Script. The credential input fields are highlighted.](./media/configure-coordinator-script.png)
## Initialize Azure Remote Rendering Now that we have the framework for our coordinator, we will implement each of the four stages starting with **Initialize Remote Rendering**.
-![ARR stack 1](./media/remote-render-stack-1.png)
+![Diagram of the four stages required to load a model. The first stage "Initialize Remote Rendering" is highlighted.](./media/remote-render-stack-1.png)
**Initialize** tells Azure Remote Rendering which camera object to use for rendering and progresses the state machine into **NotAuthorized**. This means it's initialized but not yet authorized to connect to a session. Since starting an ARR session incurs a cost, we need to confirm the user wants to proceed.
In order to progress from **NotAuthorized** to **NoSession**, we'd typically pre
1. Select the **RemoteRenderingCoordinator** GameObject and find the **OnRequestingAuthorization** Unity Event exposed in the Inspector of the **RemoteRenderingCoordinator** component. 1. Add a new event by pressing the '+' in the lower right.
-1. Drag the component on to its own event, to reference itself.\
-![Bypass Authentication](./media/bypass-authorization-add-event.png)\
+1. Drag the component onto its own event to reference itself.
+![Screenshot of the Unity inspector of the Remote Rendering Coordinator Script. The title bar of the component is highlighted and an arrow connects it to the On Requesting Authorization event.](./media/bypass-authorization-add-event.png)
1. In the drop-down, select **RemoteRenderingCoordinator -> BypassAuthorization**.\
-![Screenshot that shows the selected RemoteRenderingCoordinator.BypassAuthorization option.](./media/bypass-authorization-event.png)
+![Screenshot of the On Requesting Authorization event.](./media/bypass-authorization-event.png)
## Create or join a remote session The second stage is to Create or Join a Remote Rendering Session (see [Remote Rendering Sessions](../../../concepts/sessions.md) for more information).
-![ARR stack 2](./media/remote-render-stack-2.png)
+![Diagram of the four stages required to load a model. The second stage "Create or Join Remote Rendering Session" is highlighted.](./media/remote-render-stack-2.png)
The remote session is where the models will be rendered. The **JoinRemoteSession( )** method attempts to join an existing session, either the session tracked by the **LastUsedSessionID** property or a session whose ID is assigned to **SessionIDOverride**. **SessionIDOverride** is intended for debugging purposes only; use it only when you know the session exists and want to connect to it explicitly.
If you want to save time by reusing sessions, make sure to deactivate the option
Next, the application needs to connect its local runtime to the remote session.
-![ARR stack 3](./media/remote-render-stack-3.png)
+![Diagram of the four stages required to load a model. The third stage "Connect Local Runtime to Remote Session" is highlighted.](./media/remote-render-stack-3.png)
The application also needs to listen for events about the connection between the runtime and the current session; those state changes are handled in **OnLocalRuntimeStatusChanged**. This code will advance our state to **ConnectingToRuntime**. Once connected in **OnLocalRuntimeStatusChanged**, the state will advance to **RuntimeConnected**. Connecting to the runtime is the last state the coordinator concerns itself with, which means the application is done with all the common configuration and is ready to begin the session-specific work of loading and rendering models.
private void LateUpdate()
With the required foundation in place, you are ready to load a model into the remote session and start receiving frames.
-![Diagram that shows the process flow for preparing to load and view a model.](./media/remote-render-stack-4.png)
+![Diagram of the four stages required to load a model. The fourth stage "Load and view a Model" is highlighted.](./media/remote-render-stack-4.png)
The **LoadModel** method is designed to accept a model path, progress handler, and parent transform. These arguments will be used to load a model into the remote session, update the user on the loading progress, and orient the remotely rendered model based on the parent transform.
The code above is performing the following steps:
We now have all the code required to view a remotely rendered model; all four of the required stages for remote rendering are complete. Now we need to add a little code to start the model load process.
-![ARR stack 4](./media/remote-render-stack-5.png)
+![Diagram of the four stages required to load a model. All stages are marked as completed.](./media/remote-render-stack-5.png)
1. Add the following code to the **RemoteRenderingCoordinator** class (just below the **LoadModel** method is fine):
We now have all the code required to view a remotely rendered model, all four of
1. Monitor the Console output and wait for the state to change to **RuntimeConnected**. 1. Once the runtime is connected, right-click on the **RemoteRenderingCoordinator** in the inspector to expose the context menu. Then, click the **Load Test Model** option in the context menu, added by the `[ContextMenu("Load Test Model")]` part of our code above.
- ![Load from context menu](./media/load-test-model.png)
+ ![Screenshot of the Unity inspector of the Remote Rendering Coordinator Script. Highlights instruct to first right-click on the title bar and then select Load Test Model from the context menu.](./media/load-test-model.png)
1. Watch the Console for the output of the **ProgressHandler** we passed into the **LoadModel** method. 1. See the remotely rendered model!
We now have all the code required to view a remotely rendered model, all four of
## Next steps
-![Model loaded](./media/test-model-rendered.png)
+![Screenshot of Unity running the project in Play mode. A car engine is rendered in the center of the viewport.](./media/test-model-rendered.png)
Congratulations! You've created a basic application capable of viewing remotely rendered models using Azure Remote Rendering. In the next tutorial, we will integrate MRTK and import our own models.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-file-storage-integration.md
Title: Azure Files indexing (preview)
+ Title: Azure Files indexer (preview)
description: Set up an Azure Files indexer to automate indexing of file shares in Azure Cognitive Search.
Previously updated : 01/17/2022 Last updated : 01/19/2022 # Index data from Azure Files
Last updated 01/17/2022
> [!IMPORTANT] > Azure Files indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to create the indexer data source.
-In this article, learn the steps for extracting content and metadata from file shares in Azure Storage and sending the content to a search index in Azure Cognitive Search. The resulting index can be queried using full text search.
+Configure a [search indexer](search-indexer-overview.md) to extract content from Azure File Storage and make it searchable in Azure Cognitive Search.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing files in Azure Storage.
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ An [SMB file share](../storage/files/files-smb-protocol.md) providing the source content. [NFS shares](../storage/files/files-nfs-protocol.md#support-for-azure-storage-features) are not supported.
-+ Files should contain non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files.
++ Files containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis. ## Supported document formats
The Azure Files indexer can extract text from the following document formats:
## Define the data source
-A primary difference between a file share indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azurefile"`, a content path, and how to connect.
+The data source definition specifies the data source type, content path, and how to connect.
1. [Create or update a data source](/rest/api/searchservice/preview-api/create-or-update-data-source) to set its definition, using a preview API version 2020-06-30-Preview or 2021-04-30-Preview for "type": `"azurefile"`.
A primary difference between a file share indexer and other indexers is the data
1. Set "container" to the root file share, and use "query" to specify any subfolders.
-A data source definition can also include additional properties for [soft deletion policies](#soft-delete-using-custom-metadata) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
+A data source definition can also include [soft deletion policies](search-howto-index-changed-deleted-blobs.md), if you want the indexer to delete a search document when the source document is flagged for deletion.
<a name="Credentials"></a>
A data source definition can also include additional properties for [soft deleti
Indexers can connect to a file share using the following connections.
-**Full access storage account connection string**:
-`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
+| Managed identity connection string |
+||
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`|
+|This connection string does not require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-storage.md).|
-You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key.
+| Full access storage account connection string |
+|--|
+|`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }` |
+| You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
-**Managed identity connection string**:
-`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`
+| Storage account shared access signature (SAS) connection string |
+|-|
+| `{ "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" }` |
+| The SAS should have the list and read permissions on containers and objects (blobs in this case). |
-This connection string requires [configuring your search service as a trusted service](search-howto-managed-identities-storage.md) under Azure Active Directory,and then granting **Reader and data access** rights to the search service in Azure Storage.
-
-**Storage account shared access signature** (SAS) connection string:
-`{ "connectionString" : "BlobEndpoint=https://<your account>.file.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&sp=rl&sr=s;" }`
-
-The SAS should have the list and read permissions on file shares.
-
-**Container shared access signature**:
-`{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.file.core.windows.net/<share name>?sv=2016-05-31&sr=s&sig=<the signature>&se=<the validity end time>&sp=rl;" }`
-
-The SAS should have the list and read permissions on the file share. For more information on storage shared access signatures, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md).
+| Container shared access signature |
+|--|
+| `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }` |
+| The SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
> [!NOTE]
-> If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
+> If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
## Add search fields to an index In the [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your Azure files.
-1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store file content, metadata, and system properties:
+1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store file content and metadata:
- ```json
+ ```http
POST /indexes?api-version=2020-06-30 { "name" : "my-search-index",
In the [search index](search-what-is-an-index.md), add fields to accept the cont
} ```
-1. Create a key field ("key": true) to uniquely identify each search document based on unique identifiers in the files. For this data source type, the indexer will automatically identify and encode a value for this field. No field mappings are necessary.
+1. Create a document key field ("key": true). For file content, the best candidates are metadata properties. Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
+
+ + **`metadata_storage_path`** (default) full path to the object or file
+
+ + **`metadata_storage_name`** usable only if names are unique
-1. Add a "content" field to store extracted text from each file.
+ + A custom metadata property that you add to files. This option requires that your file upload process adds that metadata property to all files. Since the key is a required property, any files that are missing a value will fail to be indexed. If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same file if the key property changes.
+
+1. Add a "content" field to store extracted text from each file through the blob's "content" property. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings.
+1. Add fields for standard metadata properties. In file indexing, the standard metadata properties are the same as blob metadata properties. The file indexer automatically creates internal field mappings for these properties that convert hyphenated property names to underscored property names. You still have to add the fields you want to use to the index definition, but you can omit creating field mappings in the indexer.
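For instance, a few of these metadata fields might be declared as in the following fragment of an index definition (the attribute settings are illustrative, not required values):

```json
{ "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
{ "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
{ "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }
```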
In the [search index](search-what-is-an-index.md), add fields to accept the cont
## Configure the file indexer
+Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. Under "configuration", you can specify which files are indexed by file type or by properties on the files themselves.
+ 1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index. ```http
In the [search index](search-what-is-an-index.md), add fields to accept the cont
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
"configuration:" { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg"
In the [search index](search-what-is-an-index.md), add fields to accept the cont
If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. If the same file extension is present in both lists, it will be excluded from indexing.
-1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
-
-## Change and deletion detection
-
-After an initial search index is created, you might want subsequent indexer jobs to pick up only new and changed documents. Fortunately, content in Azure Storage is timestamped, which gives indexers sufficient information for determining what's new and changed automatically. For search content that originates from Azure File Storage, the indexer keeps track of the file's `LastModified` timestamp and reindexes only new and changed files.
-
-Although change detection is a given, deletion detection is not. If you want to detect deleted files, make sure to use a "soft delete" approach. If you delete the files outright in a file share, corresponding search documents will not be removed from the search index.
-
-## Soft delete using custom metadata
+1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
-This method uses a file's metadata to determine whether a search document should be removed from the index. This method requires two separate actions, deleting the search document from the index, followed by file deletion in Azure Storage.
+ In file indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" and metadata properties to similarly named and typed fields in an index. For metadata properties, the indexer will automatically replace hyphens `-` with underscores in the search index.
-There are steps to follow in both File storage and Cognitive Search, but there are no other feature dependencies.
-
-1. Add a custom metadata key-value pair to the file in Azure storage to indicate to Azure Cognitive Search that it is logically deleted.
-
-1. Configure a soft deletion column detection policy on the data source. For example, the following policy considers a file to be deleted if it has a metadata property `IsDeleted` with the value `true`:
-
- ```http
- PUT https://[service name].search.windows.net/datasources/file-datasource?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "file-datasource",
- "type" : "azurefile",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "my-share", "query" : null },
- "dataDeletionDetectionPolicy" : {
- "@odata.type" :"#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
- "softDeleteColumnName" : "IsDeleted",
- "softDeleteMarkerValue" : "true"
- }
- }
- ```
-
-1. Once the indexer has processed the file and deleted the document from the search index, you can delete the file in Azure Storage.
-
-### Reindexing undeleted files (using custom metadata)
-
-After an indexer processes a deleted file and removes the corresponding search document from the index, it won't revisit that file if you restore it later if the file's `LastModified` timestamp is older than the last indexer run.
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
-If you would like to reindex that document, change the `"softDeleteMarkerValue" : "false"` for that file and rerun the indexer.
+## Next steps
-## See also
+You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [What is Azure Files?](../storage/files/storage-files-introduction.md)
++ [Change detection and deletion detection](search-howto-index-changed-deleted-blobs.md)
++ [Index large data sets](search-howto-large-index.md)
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-azure-data-lake-storage.md
Title: Azure Data Lake Storage Gen2 indexer
-description: Set up an Azure Data Lake Storage Gen2 indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.
+description: Set up an Azure Data Lake Storage (ADLS) Gen2 indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.
Previously updated : 01/17/2022 Last updated : 01/19/2022 # Index data from Azure Data Lake Storage Gen2
-This article shows you how to configure an Azure Data Lake Storage (ADLS) Gen2 indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from ADLS Gen2.
+Configure a [search indexer](search-indexer-overview.md) to extract content and metadata from Azure Data Lake Storage (ADLS) Gen2 and make it searchable in Azure Cognitive Search.
-ADLS Gen2 is available through Azure Storage. When setting up an Azure Storage account, you have the option of enabling [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) that organizes files into a hierarchy of directories and nested subdirectories. By enabling hierarchical namespace, you enable ADLS Gen2.
+ADLS Gen2 is available through Azure Storage. When setting up a storage account, you have the option of enabling [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md), organizing files into a hierarchy of directories and nested subdirectories. By enabling a hierarchical namespace, you enable ADLS Gen2.
-Examples in this article use the portal and REST APIs. For examples in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from ADLS Gen2.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Blob Storage.
+For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
## Prerequisites
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ [Access tiers](../storage/blobs/access-tiers-overview.md) for ADLS Gen2 include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-+ Blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
++ Blobs containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis.
+
+Note that blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
## Access control ADLS Gen2 implements an [access control model](../storage/blobs/data-lake-storage-access-control.md) that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
-Azure Cognitive Search supports [Azure RBAC for indexer access](search-howto-managed-identities-storage.md) to your content in storage, but it does not support document-level permissions. In Azure Cognitive Search, all users have the same level of access to all searchable and retrievable content in the index. If document-level permissions are an application requirement, consider [security trimming](search-security-trimming-for-azure-search.md) as a workaround.
+Azure Cognitive Search supports [Azure RBAC for indexer access](search-howto-managed-identities-storage.md) to your content in storage, but it does not support document-level permissions. In Azure Cognitive Search, all users have the same level of access to all searchable and retrievable content in the index. If document-level permissions are an application requirement, consider [security trimming](search-security-trimming-for-azure-search.md) as a potential solution.
<a name="SupportedFormats"></a>
The ADLS Gen2 indexer can extract text from the following document formats:
## Define the data source
-The data source definition specifies the data source type, as well as other properties for authentication and connection to the content to be indexed.
+The data source definition specifies the data source type, content path, and how to connect.
-A Data Lake Storage Gen2 data source definition looks similar to the example below:
+1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
-```http
- POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
- Content-Type: application/json
- api-key: [Search service admin key]
+ ```json
{
- "name" : "adlsgen2-datasource",
+ "name" : "my-adlsgen2-datasource",
"type" : "adlsgen2", "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" }, "container" : { "name" : "my-container", "query" : "<optional-virtual-directory-name>" } }
-```
-
-The `"credentials"` property can be a connection string, as shown in the above example, or one of the alternative approaches described in the next section. The `"container"` property provides the location of content within Azure Storage, and `"query"` is used to specify a subfolder in the container. For more information about data source definitions, see [Create Data Source (REST)](/rest/api/searchservice/create-data-source).
+ ```
-<a name="Credentials"></a>
+1. Set "type" to `"adlsgen2"` (required).
-### Supported credentials and connection strings
+1. Set `"credentials"` to an Azure Storage connection string. The next section describes the supported formats.
-You can provide the credentials for the container in one of these ways:
+1. Set `"container"` to the blob container, and use "query" to specify any subfolders.
-**Managed identity connection string**:
-`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`
+A data source definition can also include [soft deletion policies](search-howto-index-changed-deleted-blobs.md), if you want the indexer to delete a search document when the source document is flagged for deletion.
-This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).
+<a name="Credentials"></a>
-**Full access storage account connection string**:
-`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
+### Supported credentials and connection strings
-You can get the connection string from the Azure portal by navigating to the storage account blade > Settings > Keys (for Classic storage accounts) or Settings > Access keys (for Azure Resource Manager storage accounts).
+Indexers can connect to a blob container using the following connections.
-**Storage account shared access signature** (SAS) connection string:
-`{ "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" }`
+| Managed identity connection string |
+||
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`|
+|This connection string does not require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-storage.md).|
-The SAS should have the list and read permissions on containers and objects (blobs in this case).
+| Full access storage account connection string |
+|--|
+|`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }` |
+| You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
-**Container shared access signature**:
-`{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }`
+| Storage account shared access signature (SAS) connection string |
+|-|
+| `{ "connectionString" : "BlobEndpoint=https://<your account>.blob.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=b&sp=rl;" }` |
+| The SAS should have the list and read permissions on containers and objects (blobs in this case). |
-The SAS should have the list and read permissions on the container. For more information on storage shared access signatures, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md).
+| Container shared access signature |
+|--|
+| `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }` |
+| The SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
> [!NOTE] > If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
-### Step 2 - Create an index
+## Add search fields to an index
-The index specifies the fields in a document, attributes, and other constructs that shape the search experience. All indexers require that you specify a search index definition as the destination. The following example uses the [Create Index (REST API)](/rest/api/searchservice/create-index).
+In a [search index](search-what-is-an-index.md), add fields to accept the content and metadata of your Azure blobs.
-```http
+1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store blob content and metadata:
+
+ ```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
{
- "name" : "my-target-index",
+ "name" : "my-search-index",
"fields": [
- { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false },
+ { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
]
- }
-```
-
-Index definitions require one field in the `"fields"` collection to act as the document key. Index definitions should also include fields for content and metadata.
-
-A **`content`** field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
-
-You could also add fields for any blob metadata that you want in the index. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties. For more information about indexes, see [Create an index](search-what-is-an-index.md).
-
-### Step 3 - Configure and run the indexer
-
-Once the index and data source have been created, you're ready to [create the indexer](/rest/api/searchservice/create-indexer):
-
-```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "adlsgen2-indexer",
- "dataSourceName" : "adlsgen2-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : {
- "interval" : "PT2H"
} }
-```
-
-This indexer runs immediately, and then [on a schedule](search-howto-schedule-indexers.md) every two hours (schedule interval is set to "PT2H"). To run an indexer every 30 minutes, set the interval to "PT30M". The shortest supported interval is 5 minutes. The schedule is optional - if omitted, an indexer runs only once when it's created. However, you can run an indexer on-demand at any time.
-
-<a name="DocumentKeys"></a>
+ ```
-## Defining document keys and field mappings
+1. Create a document key field ("key": true). For blob content, the best candidates are metadata properties. Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
-In a search index, the document key uniquely identifies each document. The field you choose must be of type `Edm.String`. For blob content, the best candidates for a document key are metadata properties on the blob.
+ + **`metadata_storage_path`** (default) full path to the object or file
-+ **`metadata_storage_name`** - this property is a candidate, but only if names are unique across all containers and folders you are indexing. Regardless of blob location, the end result is that the document key (name) must be unique in the search index after all content has been indexed.
+ + **`metadata_storage_name`** usable only if names are unique
- Another potential issue about the storage name is that it might contain characters that are invalid for document keys, such as dashes. You can handle invalid characters by using the `base64Encode` [field mapping function](search-indexer-field-mappings.md#base64EncodeFunction). If you do this, remember to also encode document keys when passing them in API calls such as [Lookup Document (REST)](/rest/api/searchservice/lookup-document). In .NET, you can use the [UrlTokenEncode method](/dotnet/api/system.web.httpserverutility.urltokenencode) to encode characters.
+ + A custom metadata property that you add to blobs. This option requires that your blob upload process adds that metadata property to all blobs. Since the key is a required property, any blobs that are missing a value will fail to be indexed. If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same blob if the key property changes.
-+ **`metadata_storage_path`** - using the full path ensures uniqueness, but the path definitely contains `/` characters that are [invalid in a document key](/rest/api/searchservice/naming-rules). As above, you can use the `base64Encode` [function](search-indexer-field-mappings.md#base64EncodeFunction) to encode characters.
+1. Add a "content" field to store extracted text from each file through the blob's "content" property. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings.
-+ A third option is to add a custom metadata property to the blobs. This option requires that your blob upload process adds that metadata property to all blobs. Since the key is a required property, any blobs that are missing a value will fail to be indexed.
+1. Add fields for standard metadata properties. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties.
-> [!IMPORTANT]
-> If there is no explicit mapping for the key field in the index, Azure Cognitive Search automatically uses `metadata_storage_path` as the key and base-64 encodes key values (the second option above).
->
-> If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same blob if the key property changes.
+## Configure the ADLS Gen2 indexer
-### Example
+Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. The "configuration" section determines what content gets indexed.
-The following example demonstrates `metadata_storage_name` as the document key. Assume the index has a key field named `key` and another field named `fileSize` for storing the document size. [Field mappings](search-indexer-field-mappings.md) in the indexer definition establish field associations, and `metadata_storage_name` has the [`base64Encode` field mapping function](search-indexer-field-mappings.md#base64EncodeFunction) to handle unsupported characters.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
-```http
- PUT https://[service name].search.windows.net/indexers/adlsgen2-indexer?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
+ ```http
+ POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
{
- "dataSourceName" : "adlsgen2-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "fieldMappings" : [
- { "sourceFieldName" : "metadata_storage_name", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
- { "sourceFieldName" : "metadata_storage_size", "targetFieldName" : "fileSize" }
- ]
+ "name" : "my-adlsgen2-indexer,
+ "dataSourceName" : "my-adlsgen2-datasource",
+ "targetIndexName" : "my-search-index",
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration:" {
+ "indexedFileNameExtensions" : ".pdf,.docx",
+ "excludedFileNameExtensions" : ".png,.jpeg",
+ "dataToExtract": "contentAndMetadata",
+ "parsingMode": "default",
+ "imageAction": "none"
+ }
+ },
+ "schedule" : { },
+ "fieldMappings" : [ ]
}
-```
+ ```
-### How to make an encoded field "searchable"
+1. Set "batchSize` if the default (10 documents) is either under utilizing or overwhelming available resources. Default batch sizes are data source specific. Blob indexing sets batch size at 10 documents in recognition of the larger average document size.
-There are times when you need to use an encoded version of a field like `metadata_storage_path` as the key, but also need that field to be searchable (without encoding) in the search index. To support both use cases, you can map `metadata_storage_path` to two fields; one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema. The example below shows two field mappings for `metadata_storage_path`.
+1. Under "configuration", provide any [inclusion or exclusion criteria](#PartsOfBlobToIndex) based on file type or leave unspecified to retrieve all blobs.
-```http
- PUT https://[service name].search.windows.net/indexers/adlsgen2-indexer?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "dataSourceName" : " adlsgen2-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "fieldMappings" : [
- { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
- { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "path" }
- ]
- }
-```
+1. Set "dataToExtract" to control which parts of the blobs are indexed:
-<a name="PartsOfBlobToIndex"></a>
+ + "contentAndMetadata" specifies that all metadata and textual content extracted from the blob are indexed. This is the default value.
-## Index content and metadata
+ + "storageMetadata" specifies that only the [standard blob properties and user-specified metadata](../storage/blobs/storage-blob-container-properties-metadata.md) are indexed.
-Data Lake Storage Gen2 blobs contain content and metadata. You can control which parts of the blobs are indexed using the `dataToExtract` configuration parameter. It can take the following values:
+ + "allMetadata" specifies that standard blob properties and any [metadata for found content types](search-blob-metadata-properties.md) are extracted from the blob content and indexed.
-+ `contentAndMetadata` - specifies that all metadata and textual content extracted from the blob are indexed. This is the default value.
+1. Set "parsingMode" if blobs should be mapped to [multiple search documents](search-howto-index-one-to-many-blobs.md), or if they consist of [plain text](search-howto-index-plaintext-blobs.md), [JSON documents](search-howto-index-json-blobs.md), or [CSV files](search-howto-index-csv-blobs.md).
-+ `storageMetadata` - specifies that only the [standard blob properties and user-specified metadata](../storage/blobs/storage-blob-container-properties-metadata.md) are indexed.
+1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
-+ `allMetadata` - specifies that standard blob properties and any [metadata for found content types](search-blob-metadata-properties.md) are extracted from the blob content and indexed.
+ In blob indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" and metadata properties to similarly named and typed fields in an index. For metadata properties, the indexer will automatically replace hyphens `-` with underscores in the search index. If you do need an explicit mapping, a minimal sketch follows this list.
-For example, to index only the storage metadata, use:
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
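The sketch referenced above maps `metadata_storage_size` to a hypothetical `fileSize` field; that target field must exist in your index, and the indexer, data source, and index names reuse the examples shown earlier:

```http
PUT https://[service name].search.windows.net/indexers/my-adlsgen2-indexer?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "my-adlsgen2-indexer",
  "dataSourceName" : "my-adlsgen2-datasource",
  "targetIndexName" : "my-search-index",
  "fieldMappings" : [
    { "sourceFieldName" : "metadata_storage_size", "targetFieldName" : "fileSize" }
  ]
}
```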
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
+For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "dataToExtract" : "storageMetadata" } }
-}
-```
+## How blobs are indexed
-## Indexing blob content
+By default, most blobs are indexed as a single search document in the index, including blobs with structured content, such as JSON or CSV, which are indexed as a single chunk of text. However, for JSON or CSV documents that have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element:
-By default, blobs with structured content, such as JSON or CSV, are indexed as a single chunk of text. But if the JSON or CSV documents have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element. For more information, see [Indexing JSON blobs](search-howto-index-json-blobs.md) and [Indexing CSV blobs](search-howto-index-csv-blobs.md).
++ [Indexing JSON blobs](search-howto-index-json-blobs.md)
++ [Indexing CSV blobs](search-howto-index-csv-blobs.md)
-A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field.
+A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field. If you have images, consider adding [AI enrichment](cognitive-search-concept-intro.md) to get more search utility from that content.
-The textual content of the document is extracted into a string field named `content`.
+Textual content of a document is extracted into a string field named "content".
> [!NOTE]
- > Azure Cognitive Search limits how much text it extracts depending on the pricing tier. The current [service limits](search-limits-quotas-capacity.md#indexer-limits) are 32,000 characters for Free tier, 64,000 for Basic, 4 million for Standard, 8 million for Standard S2, and 16 million for Standard S3. A warning is included in the indexer status response for truncated documents.
+ > Azure Cognitive Search imposes [indexer limits](search-limits-quotas-capacity.md#indexer-limits) on how much text it extracts depending on the pricing tier. A warning will appear in the indexer status response if documents are truncated.
<a name="indexing-blob-metadata"></a> ### Indexing blob metadata
-Indexers can also index blob metadata. First, any user-specified metadata properties can be extracted verbatim. To receive the values, you must define field in the search index of type `Edm.String`, with same name as the metadata key of the blob. For example, if a blob has a metadata key of `Sensitivity` with value `High`, you should define a field named `Sensitivity` in your search index and it will be populated with the value `High`.
+Blob metadata can also be indexed, and that's helpful if you think any of the standard or custom metadata properties will be useful in filters and queries.
+
+User-specified metadata properties are extracted verbatim. To receive the values, you must define a field in the search index of type `Edm.String`, with the same name as the metadata key of the blob. For example, if a blob has a metadata key of `Sensitivity` with value `High`, you should define a field named `Sensitivity` in your search index and it will be populated with the value `High`.
-Second, standard blob metadata properties can be extracted into the fields listed below. The blob indexer automatically creates internal field mappings for these blob metadata properties. You still have to add the fields you want to use the index definition, but you can omit creating field mappings in the indexer.
+Standard blob metadata properties can be extracted into similarly named and typed fields, as listed below. The blob indexer automatically creates internal field mappings for these blob metadata properties, converting the original hyphenated name ("metadata-storage-name") to an underscored equivalent name ("metadata_storage_name").
- + **metadata_storage_name** (`Edm.String`) - the file name of the blob. For example, if you have a blob /my-container/my-folder/subfolder/resume.pdf, the value of this field is `resume.pdf`.
+You still have to add the underscored fields to the index definition, but you can omit field mappings because the indexer will make the association automatically.
- + **metadata_storage_path** (`Edm.String`) - the full URI of the blob, including the storage account. For example, `https://myaccount.blob.core.windows.net/my-container/my-folder/subfolder/resume.pdf`
++ **metadata_storage_name** (`Edm.String`) - the file name of the blob. For example, if you have a blob /my-container/my-folder/subfolder/resume.pdf, the value of this field is `resume.pdf`.
- + **metadata_storage_content_type** (`Edm.String`) - content type as specified by the code you used to upload the blob. For example, `application/octet-stream`.
++ **metadata_storage_path** (`Edm.String`) - the full URI of the blob, including the storage account. For example, `https://myaccount.blob.core.windows.net/my-container/my-folder/subfolder/resume.pdf`
- + **metadata_storage_last_modified** (`Edm.DateTimeOffset`) - last modified timestamp for the blob. Azure Cognitive Search uses this timestamp to identify changed blobs, to avoid reindexing everything after the initial indexing.
++ **metadata_storage_content_type** (`Edm.String`) - content type as specified by the code you used to upload the blob. For example, `application/octet-stream`.
- + **metadata_storage_size** (`Edm.Int64`) - blob size in bytes.
++ **metadata_storage_last_modified** (`Edm.DateTimeOffset`) - last modified timestamp for the blob. Azure Cognitive Search uses this timestamp to identify changed blobs, to avoid reindexing everything after the initial indexing.
- + **metadata_storage_content_md5** (`Edm.String`) - MD5 hash of the blob content, if available.
++ **metadata_storage_size** (`Edm.Int64`) - blob size in bytes.
- + **metadata_storage_sas_token** (`Edm.String`) - A temporary SAS token that can be used by [custom skills](cognitive-search-custom-skill-interface.md) to get access to the blob. This token should not be stored for later use as it might expire.
++ **metadata_storage_content_md5** (`Edm.String`) - MD5 hash of the blob content, if available.
+
++ **metadata_storage_sas_token** (`Edm.String`) - A temporary SAS token that can be used by [custom skills](cognitive-search-custom-skill-interface.md) to get access to the blob. This token should not be stored for later use as it might expire.

Lastly, any metadata properties specific to the document format of the blobs you are indexing can also be represented in the index schema. For more information about content-specific metadata, see [Content metadata properties](search-blob-metadata-properties.md).

It's important to point out that you don't need to define fields for all of the above properties in your search index - just capture the properties you need for your application.
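As a sketch of what an index that captures a few of these properties might look like, the following definition is illustrative only; `Sensitivity` stands in for a hypothetical custom metadata key like the one mentioned earlier, and you would keep only the fields your application needs.

```http
PUT https://[service name].search.windows.net/indexes/my-search-index?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name" : "my-search-index",
    "fields" : [
        { "name": "metadata_storage_path", "type": "Edm.String", "key": true, "searchable": false },
        { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false },
        { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
        { "name": "metadata_storage_last_modified", "type": "Edm.DateTimeOffset", "filterable": true, "sortable": true },
        { "name": "metadata_storage_size", "type": "Edm.Int64", "filterable": true, "sortable": true },
        { "name": "Sensitivity", "type": "Edm.String", "filterable": true }
    ]
}
```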
-<a name="WhichBlobsAreIndexed"></a>
+<a name="PartsOfBlobToIndex"></a>
## How to control which blobs are indexed
-You can control which blobs are indexed, and which are skipped, by setting role assignments, the blob's file type, or by setting properties on the blob themselves, causing the indexer to skip over them.
-
-### Use access controls and role assignments
-
-Indexers that run under a system or user-assigned managed identity can have membership in either a Reader or Storage Blob Data Reader role that grants read permissions on specific files and folders.
-
-### Include specific file extensions
-
-Use `indexedFileNameExtensions` to provide a comma-separated list of file extensions to index (with a leading dot). For example, to index only the .PDF and .DOCX blobs, do this:
-
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
-
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }
-}
-```
+You can control which blobs are indexed and which are skipped, either by the blob's file type or by setting properties on the blobs themselves that cause the indexer to skip over them.
-### Exclude specific file extensions
-
-Use `excludedFileNameExtensions` to provide a comma-separated list of file extensions to skip (again, with a leading dot). For example, to index all blobs except those with the .PNG and .JPEG extensions, do this:
+Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. If the same extension is in both lists, it will be excluded from indexing.
```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
-
+PUT /indexers/[indexer name]?api-version=2020-06-30
{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "excludedFileNameExtensions" : ".png,.jpeg" } }
+ "parameters" : {
+ "configuration" : {
+ "indexedFileNameExtensions" : ".pdf, .docx",
+ "excludedFileNameExtensions" : ".png, .jpeg"
+ }
+ }
}
```
-If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, the indexer first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. If the same file extension is in both lists, it will be excluded from indexing.
-
### Add "skip" metadata to the blob
-The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed. You can do this by adding the following metadata properties and values to blobs in Blob storage. When the indexer encounters this property, it will skip the blob or its content in the indexing run.
+The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed.
+
+Add the following metadata properties and values to blobs in Blob Storage. When the indexer encounters this property, it will skip the blob or its content in the indexing run.
| Property name | Property value | Explanation |
| - | -- | -- |
| `AzureSearch_Skip` | `"true"` | Instructs the blob indexer to completely skip the blob. Neither metadata nor content extraction is attempted. This is useful when a particular blob fails repeatedly and interrupts the indexing process. |
| `AzureSearch_SkipContent` | `"true"` | This is the equivalent of the `"dataToExtract" : "allMetadata"` setting described [above](#PartsOfBlobToIndex), scoped to a particular blob. |
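For example, to flag a single blob so that the indexer skips it, you can set the property through the Blob service [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) operation. The sketch below assumes a storage account, container, blob name, and SAS token with write permissions; keep in mind that Set Blob Metadata replaces all existing metadata on the blob, so include any other key-value pairs you want to preserve.

```http
PUT https://<your-storage-account>.blob.core.windows.net/my-container/resume.pdf?comp=metadata&<sas-token>
x-ms-version: 2020-10-02
x-ms-meta-AzureSearch_Skip: true
```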
-## Index large datasets
+## How to index large datasets
-Indexing blobs can be a time-consuming process. In cases where you have millions of blobs to index, you can speed up indexing by partitioning your data and using multiple indexers to [process the data in parallel](search-howto-large-index.md#parallel-indexing). Here's how you can set this up:
+Indexing blobs can be a time-consuming process. In cases where you have millions of blobs to index, you can speed up indexing by partitioning your data and using multiple indexers to [process the data in parallel](search-howto-large-index.md#parallel-indexing).
1. Partition your data into multiple blob containers or virtual folders.
-1. Set up several data sources, one per container or folder. To point to a blob folder, use the `query` parameter:
+1. Set up several data sources, one per container or folder. Use the "query" parameter to specify the partition: `"container" : { "name" : "my-container", "query" : "my-folder" }`.
- ```json
- {
- "name" : "blob-datasource",
- "type" : "azureblob",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "my-container", "query" : "my-folder" }
- }
- ```
-
-1. Create a corresponding indexer for each data source. All of the indexers should point to the same target search index.
+1. Create one indexer for each data source. Point them to the same target index.
-One search unit in your service can run one indexer at any given time. Creating multiple indexers as described above is only useful if they actually run in parallel.
-
-To run multiple indexers in parallel, scale out your search service by creating an appropriate number of partitions and replicas. For example, if your search service has 6 search units (for example, 2 partitions x 3 replicas), then 6 indexers can run simultaneously, resulting in a six-fold increase in the indexing throughput. To learn more about scaling and capacity planning, see [Adjust the capacity of an Azure Cognitive Search service](search-capacity-planning.md).
+Make sure you have sufficient capacity. One search unit in your service can run one indexer at any given time. Creating multiple indexers is only useful if they can run in parallel.
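As a sketch of the partitioning approach in the steps above, the following data source scopes indexing to one virtual folder; the names, folder, and connection string are placeholders, and the `azureblob` type is shown here, so substitute the data source type that matches your storage configuration. A second data source that is identical except for its "name" and a "query" of `folder-b`, plus one indexer per data source writing to the same index, completes the setup.

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name" : "blob-datasource-a",
    "type" : "azureblob",
    "credentials" : { "connectionString" : "<your storage connection string>" },
    "container" : { "name" : "my-container", "query" : "folder-a" }
}
```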
<a name="DealingWithErrors"></a>
-## Handling errors
+## Handle errors
Errors that commonly occur during indexing include unsupported content types, missing content, or oversized blobs.
-By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might want to indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
-
-### Respond to errors
-
-There are four indexer properties that control the indexer's response when errors occur. The following examples show how to set these properties in the indexer definition. If an indexer already exists, you can add these properties by editing the definition in the portal.
+By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an audio file). You could use the "excludedFileNameExtensions" parameter to skip certain content types. However, you might want indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
-#### `"maxFailedItems"` and `"maxFailedItemsPerBatch"`
-
-Continue indexing if errors happen at any point of processing, either while parsing blobs or while adding documents to an index. Set these properties to the number of acceptable failures. A value of `-1` allows processing no matter how many errors occur. Otherwise, the value is a positive integer.
+There are five indexer properties that control the indexer's response when errors occur.
```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
-
+PUT /indexers/[indexer name]?api-version=2020-06-30
{
- ... other parts of indexer definition
- "parameters" : { "maxFailedItems" : 10, "maxFailedItemsPerBatch" : 10 }
+ "parameters" : {
+ "maxFailedItems" : 10,
+ "maxFailedItemsPerBatch" : 10,
+ "configuration" : {
+ "failOnUnsupportedContentType" : false,
+ "failOnUnprocessableDocument" : false,
+ "indexStorageMetadataOnlyForOversizedDocuments": false
+        }
+    }
}
```
-#### `"failOnUnsupportedContentType"` and `"failOnUnprocessableDocument"`
-
-For some blobs, Azure Cognitive Search is unable to determine the content type, or unable to process a document of an otherwise supported content type. To ignore these failure conditions, set configuration parameters to `false`:
-
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
-
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "failOnUnsupportedContentType" : false, "failOnUnprocessableDocument" : false } }
-}
-```
-
-### Relax indexer constraints
-
-You can also set [blob configuration properties](/rest/api/searchservice/create-indexer#blob-configuration-parameters) that effectively determine whether an error condition exists. The following property can relax constraints, suppressing errors that would otherwise occur.
+| Parameter | Valid values | Description |
+|--|--|-|
+| "maxFailedItems" | -1, null or 0, positive integer | Continue indexing if errors happen at any point of processing, either while parsing blobs or while adding documents to an index. Set these properties to the number of acceptable failures. A value of `-1` allows processing no matter how many errors occur. Otherwise, the value is a positive integer. |
+| "maxFailedItemsPerBatch" | -1, null or 0, positive integer | Same as above, but used for batch indexing. |
+| "failOnUnsupportedContentType" | true or false | If the indexer is unable to determine the content type, specify whether to continue or fail the job. |
+|"failOnUnprocessableDocument" | true or false | If the indexer is unable to process a document of an otherwise supported content type, specify whether to continue or fail the job. |
+| "indexStorageMetadataOnlyForOversizedDocuments" | true or false | Oversized blobs are treated as errors by default. If you set this parameter to true, the indexer will try to index its metadata even if the content cannot be indexed. For limits on blob size, see [service Limits](search-limits-quotas-capacity.md). |
-+ `"indexStorageMetadataOnlyForOversizedDocuments"` to index storage metadata for blob content that is too large to process. Oversized blobs are treated as errors by default. For limits on blob size, see [service Limits](search-limits-quotas-capacity.md).
+## Next steps
-## See also
+You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
++ [Change detection and deletion detection](search-howto-index-changed-deleted-blobs.md)
++ [Index large data sets](search-howto-large-index.md)
+ [C# Sample: Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [Create an indexer](search-howto-create-indexers.md)
-+ [AI enrichment overview](cognitive-search-concept-intro.md)
-+ [Search over blobs overview](search-blob-storage-integration.md)
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-changed-deleted-blobs.md
Title: Changed and deleted blobs
-description: After an initial search index build that imports from Azure Blob Storage, subsequent indexing can pick up just those blobs that are changed or deleted. This article explains the details.
+description: Indexers that index from Azure Storage can pick up new and changed content automatically. To automate deletion detection, follow the strategies described in this article.
- Previously updated : 01/29/2021+ Last updated : 01/19/2022
-# Change and deletion detection in blob indexing (Azure Cognitive Search)
+# Change and delete detection using indexers for Azure Storage in Azure Cognitive Search
-After an initial search index is created, you might want subsequent indexer jobs to only pick up new and changed documents. For search content that originates from Azure Blob Storage or Azure Data Lake Storage Gen2, change detection occurs automatically when you use a schedule to trigger indexing. By default, the service reindexes only the changed blobs, as determined by the blob's `LastModified` timestamp. In contrast with other data sources supported by search indexers, blobs always have a timestamp, which eliminates the need to set up a change detection policy manually.
+After an initial search index is created, you might want subsequent indexer jobs to only pick up new and changed documents. For indexed content that originates from Azure Storage, change detection occurs automatically because indexers keep track of the last update using the built-in timestamps on objects and files in Azure Storage.
-Although change detection is a given, deletion detection is not. If you want to detect deleted documents, make sure to use a "soft delete" approach. If you delete the blobs outright, corresponding documents will not be removed from the search index.
+Although change detection is a given, deletion detection is not. An indexer doesn't track object deletion in data sources. To avoid having orphan search documents, you can implement a "soft delete" strategy that results in deleting search documents first, with physical deletion in Azure Storage following as a second step.
-There are two ways to implement the soft delete approach:
+There are two ways to implement a soft delete strategy:
-+ Native blob soft delete (preview), described next
++ [Native blob soft delete (preview)](#native-blob-soft-delete-preview), applies to Blob Storage only
+ [Soft delete using custom metadata](#soft-delete-using-custom-metadata)
-> [!NOTE]
-> Azure Data Lake Storage Gen2 allows directories to be renamed. When a directory is renamed the timestamps for the blobs in that directory do not get updated. As a result, the indexer will not reindex those blobs. If you need the blobs in a directory to be reindexed after a directory rename because they now have new URLs, you will need to update the `LastModified` timestamp for all the blobs in the directory so that the indexer knows to reindex them during a future run. The virtual directories in Azure Blob Storage cannot be changed so they do not have this issue.
+## Prerequisites
+
++ Use an Azure Storage indexer for [Blob Storage](search-howto-indexing-azure-blob-storage.md), [Table Storage](search-howto-indexing-azure-tables.md), [File Storage](search-howto-indexing-azure-tables.md), or [Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
+
++ Use consistent document keys and file structure. Changing document keys or directory names and paths (applies to ADLS Gen2) breaks the internal tracking information used by indexers to know which content was indexed, and when it was last indexed.
+
+> [!NOTE]
+> ADLS Gen2 allows directories to be renamed. When a directory is renamed, the timestamps for the blobs in that directory do not get updated. As a result, the indexer will not reindex those blobs. If you need the blobs in a directory to be reindexed after a directory rename because they now have new URLs, you will need to update the `LastModified` timestamp for all the blobs in the directory so that the indexer knows to reindex them during a future run. The virtual directories in Azure Blob Storage cannot be changed, so they do not have this issue.
## Native blob soft delete (preview)

For this deletion detection approach, Cognitive Search depends on the [native blob soft delete](../storage/blobs/soft-delete-blob-overview.md) feature in Azure Blob Storage to determine whether blobs have transitioned to a soft deleted state. When blobs are detected in this state, a search indexer uses this information to remove the corresponding document from the index.

> [!IMPORTANT]
-> Support for native blob soft delete is in preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [REST API version 2020-06-30-Preview](./search-api-preview.md) provides this feature. There is currently no portal or .NET SDK support.
+> Support for native blob soft delete is in preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [REST API version 2020-06-30-Preview](./search-api-preview.md) provides this feature. There is currently no portal or .NET SDK support.
-### Prerequisites
+### Requirements for native soft delete
+ [Enable soft delete for blobs](../storage/blobs/soft-delete-blob-enable.md).
-+ Blobs must be in an Azure Blob Storage container. The Cognitive Search native blob soft delete policy is not supported for blobs from Azure Data Lake Storage Gen2.
++ Blobs must be in an Azure Blob Storage container. The Cognitive Search native blob soft delete policy is not supported for blobs in ADLS Gen2.
+ Document keys for the documents in your index must be mapped to either a blob property or blob metadata.
+ You must use the preview REST API (`api-version=2020-06-30-Preview`) to configure support for soft delete.
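To illustrate, opting in to native soft delete detection is done on the data source, using the preview API version; the data source name, container, and connection string below are placeholders.

```http
PUT https://[service name].search.windows.net/datasources/blob-datasource?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
    "name" : "blob-datasource",
    "type" : "azureblob",
    "credentials" : { "connectionString" : "<your storage connection string>" },
    "container" : { "name" : "my-container", "query" : null },
    "dataDeletionDetectionPolicy" : {
        "@odata.type" : "#Microsoft.Azure.Search.NativeBlobSoftDeleteDeletionDetectionPolicy"
    }
}
```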
If you restore a soft deleted blob in Blob storage, the indexer will not always
To make sure that an undeleted blob is reindexed, you will need to update the blob's `LastModified` timestamp. One way to do this is by resaving the metadata of that blob. You don't need to change the metadata, but resaving the metadata will update the blob's `LastModified` timestamp so that the indexer knows to pick it up.
-## Soft delete using custom metadata
+<a name="soft-delete-using-custom-metadata"></a>
-This method uses a blob's metadata to determine whether a search document should be removed from the index. This method requires two separate actions, deleting the search document from the index, followed by blob deletion in Azure Storage.
+## Custom metadata: Soft delete strategy
-There are steps to follow in both Blob storage and Cognitive Search, but there are no other feature dependencies. This capability is supported in generally available APIs.
+This method uses custom metadata to indicate whether a search document should be removed from the index. It requires two separate actions: deleting the search document from the index, followed by file deletion in Azure Storage.
-1. Add a custom metadata key-value pair to the blob to indicate to Azure Cognitive Search that it is logically deleted.
+There are steps to follow in both Azure Storage and Cognitive Search, but there are no other feature dependencies.
-1. Configure a soft deletion column detection policy on the data source. For example, the following policy considers a blob to be deleted if it has a metadata property `IsDeleted` with the value `true`:
+1. In Azure Storage, add a custom metadata key-value pair to the file to indicate the file is flagged for deletion. For example, you could name the property "IsDeleted", set to false. When you want to delete the file, change it to true.
- ```http
- PUT https://[service name].search.windows.net/datasources/blob-datasource?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
+1. In Azure Cognitive Search, edit the data source definition to include a "dataDeletionDetectionPolicy" property. For example, the following policy considers a file to be deleted if it has a metadata property `IsDeleted` with the value `true`:
+ ```http
+ PUT https://[service name].search.windows.net/datasources/file-datasource?api-version=2020-06-30
{
- "name" : "blob-datasource",
- "type" : "azureblob",
+ "name" : "file-datasource",
+ "type" : "azurefile",
"credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "my-container", "query" : null },
+ "container" : { "name" : "my-share", "query" : null },
"dataDeletionDetectionPolicy" : { "@odata.type" :"#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy", "softDeleteColumnName" : "IsDeleted",
There are steps to follow in both Blob storage and Cognitive Search, but there a
} ```
-1. Once the indexer has processed the blob and deleted the document from the index, you can delete the blob in Azure Blob Storage.
+1. Run the indexer. Once the indexer has processed the file and deleted the document from the search index, you can then delete the physical file in Azure Storage.
+
+## Custom metadata: Re-index undeleted blobs and files
+
+You can reverse a soft-delete if the original source file still physically exists in Azure Storage.
-### Reindexing undeleted blobs (using custom metadata)
+1. On the blob or file in Azure Storage, change the soft-delete metadata property (for example, `IsDeleted`) back to a value that no longer matches the `softDeleteMarkerValue` in the deletion detection policy.
-After an indexer processes a deleted blob and removes the corresponding search document from the index, it won't revisit that blob if you restore it later if the blob's `LastModified` timestamp is older than the last indexer run.
+1. Check the blob or file's `LastModified` timestamp to make sure it is newer than the last indexer run. You can force an update to the current date and time by re-saving the existing metadata.
-If you would like to reindex that document, change the `"softDeleteMarkerValue" : "false"` for that blob and rerun the indexer.
+1. Run the indexer.
## Next steps
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
Previously updated : 01/17/2022 Last updated : 01/19/2022 # Index data from Azure Blob Storage
-In Azure Cognitive Search, blob [indexers](search-indexer-overview.md) are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing.
+Configure a [search indexer](search-indexer-overview.md) to extract content and metadata from Azure Blob Storage and make it searchable in Azure Cognitive Search.
-This article focuses on how to configure a blob indexer for text-based indexing, where just the textual content and metadata are loaded into a search index for full text search scenarios. Inputs are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
+Blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing. This article focuses on indexers for text-based indexing, where just the textual content and metadata are ingested for full text search scenarios.
+
+Inputs to the indexer are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Blob Storage.
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-+ Blob containers storing non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files. Note that blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
++ Blobs containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis.
+
+Note that blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
<a name="SupportedFormats"></a> ## Supported document formats
-The Azure Cognitive Search blob indexer can extract text from the following document formats:
+The blob indexer can extract text from the following document formats:
[!INCLUDE [search-blob-data-sources](../../includes/search-blob-data-sources.md)]

## Define the data source
-A primary difference between a blob indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azureblob"`, a content path, and how to connect
+The data source definition specifies the data source type, content path, and how to connect.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
A primary difference between a blob indexer and other indexers is the data sourc
1. Set "container" to the blob container, and use "query" to specify any subfolders.
-A data source definition can also include additional properties for [soft deletion policies](search-howto-index-changed-deleted-blobs.md) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
+A data source definition can also include [soft deletion policies](search-howto-index-changed-deleted-blobs.md), if you want the indexer to delete a search document when the source document is flagged for deletion.
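Putting those pieces together, a blob data source definition might look like the following sketch; the name, container, folder, and connection string are placeholders, and the next section describes the supported connection formats.

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name" : "my-blob-datasource",
    "type" : "azureblob",
    "credentials" : { "connectionString" : "<your storage connection string>" },
    "container" : { "name" : "my-container", "query" : "<optional-virtual-folder>" }
}
```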
<a name="credentials"></a>
Indexers can connect to a blob container using the following connections.
| Managed identity connection string | || |`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`|
-|This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).|
+|This connection string does not require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-storage.md).|
| Full access storage account connection string | |--|
Indexers can connect to a blob container using the following connections.
| Container shared access signature | |--| | `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }` |
-| The SAS should have the list and read permissions on the container. For more information on Azure Storage shared access signatures, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
+| The SAS should have the list and read permissions on the container. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
> [!NOTE] > If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
In a [search index](search-what-is-an-index.md), add fields to accept the conten
{ "name" : "my-search-index", "fields": [
- { "name": "metadata_storage_path", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
{ "name": "content", "type": "Edm.String", "searchable": true, "filterable": false }, { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }, { "name": "metadata_storage_path", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
In a [search index](search-what-is-an-index.md), add fields to accept the conten
} ```
-1. Designate one string field as the document key that uniquely identifies each document. For blob content, the best candidates for a document key are metadata properties on the blob:
+1. Create a document key field ("key": true). For blob content, the best candidates are metadata properties. Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
- + **`metadata_storage_path`** (default). Using the full path ensures uniqueness, but the path contains `/` characters that are [invalid in a document key](/rest/api/searchservice/naming-rules). Use the [base64Encode function](search-indexer-field-mappings.md#base64EncodeFunction) to encode characters (see the example in the next section). If using the portal to define the indexer, the encoding step is built in.
+ + **`metadata_storage_path`** (default) full path to the object or file
- + **`metadata_storage_name`** is a candidate, but only if names are unique across all containers and folders you are indexing. When considering this option, watch for characters that are invalid for document keys, such as dashes. You can handle invalid characters by using the [base64Encode function](search-indexer-field-mappings.md#base64EncodeFunction) (see the example in the next section). If you do this, remember to also encode document keys when passing them in API calls such as [Lookup Document (REST)](/rest/api/searchservice/lookup-document). In .NET, you can use the [UrlTokenEncode method](/dotnet/api/system.web.httpserverutility.urltokenencode) to encode characters.
+ + **`metadata_storage_name`** usable only if names are unique
+ A custom metadata property that you add to blobs. This option requires that your blob upload process adds that metadata property to all blobs. Since the key is a required property, any blobs that are missing a value will fail to be indexed. If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same blob if the key property changes.
- > [!NOTE]
- > If there is no explicit mapping for the key field in the index, Azure Cognitive Search automatically uses "metadata_storage_path" as the key and base-64 encodes key values.
-
-1. Add a "content" field if you want to import the content, or text, extracted from blobs. You aren't required to name the field "content", but doing lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a "content" field of type `Edm.String` in the index, with no field mappings required.
+1. Add a "content" field to store extracted text from each file through the blob's "content" property. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings.
1. Add more fields for any blob metadata that you want in the index. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties. ## Configure the blob indexer
-Indexer configuration specifies the inputs, parameters, and properties that inform run time behaviors.
-
-Under "configuration", you can control which blobs are indexed, and which are skipped, by the blob's file type or by setting properties on the blob themselves, causing the indexer to skip over them.
+Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. The "configuration" section determines what content gets indexed.
1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
Under "configuration", you can control which blobs are indexed, and which are sk
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
"configuration:" { "indexedFileNameExtensions" : ".pdf,.docx",
- "excludedFileNameExtensions" : ".png,.jpeg"
+ "excludedFileNameExtensions" : ".png,.jpeg",
+ "dataToExtract": "contentAndMetadata",
+ "parsingMode": "default",
+ "imageAction": "none"
} }, "schedule" : { },
Under "configuration", you can control which blobs are indexed, and which are sk
} ```
-1. In the optional "configuration" section, provide any inclusion or exclusion criteria. If left unspecified, all blobs in the container are retrieved.
-
- If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. If the same file extension is present in both lists, it will be excluded from indexing.
-
-1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
-
-### Set field mappings
-
-Field mappings are a section in the indexer definition that maps source fields to destination fields in the search index.
-
-In blob indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" property and [blob metadata properties](#indexing-blob-metadata) to fields in an index, assuming the search field name matches the blob property name, and is of the expected data type.
-
-Reasons for [creating an explicit field mapping](search-indexer-field-mappings.md) might be to handle naming or data type differences, or to add functions such as base64encoding, as illustrated in the following examples.
-
-### Example: Base-encoding metadata_storage_name
-
-The following example demonstrates "metadata_storage_name" as the document key. Assume the index has a key field named "key" and another field named "fileSize" for storing the document size. [Field mappings](search-indexer-field-mappings.md) in the indexer definition establish field associations, and "metadata_storage_name" has the [base64Encode field mapping function](search-indexer-field-mappings.md#base64EncodeFunction) to handle unsupported characters.
-
-```http
-POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
-{
- "name" : "my-blob-indexer",
- "dataSourceName" : "my-blob-datasource ",
- "targetIndexName" : "my-search-index",
- "schedule" : { "interval" : "PT2H" },
- "fieldMappings" : [
- { "sourceFieldName" : "metadata_storage_name", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
- { "sourceFieldName" : "metadata_storage_size", "targetFieldName" : "fileSize" }
- ]
-}
-```
+1. Set `batchSize` if the default (10 documents) is either underutilizing or overwhelming available resources. Default batch sizes are data source specific. Blob indexing sets batch size at 10 documents in recognition of the larger average document size.
-### Example: How to make an encoded field "searchable"
+1. Under "configuration", provide any [inclusion or exclusion criteria](#PartsOfBlobToIndex) based on file type or leave unspecified to retrieve all blobs.
-There are times when you need to use an encoded version of a field like "metadata_storage_path" as the key, but also need that field to be searchable (without encoding) in the search index. To support both use cases, you can map "metadata_storage_path" to two fields; one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema. The example below shows two field mappings for "metadata_storage_path".
+1. Set "dataToExtract" to control which parts of the blobs are indexed:
-```http
-PUT /indexers/blob-indexer?api-version=2020-06-30
-{
- "dataSourceName" : " blob-datasource ",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "fieldMappings" : [
- { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
- { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "path" }
- ]
-}
-```
+ + "contentAndMetadata" specifies that all metadata and textual content extracted from the blob are indexed. This is the default value.
-<a name="PartsOfBlobToIndex"></a>
+ + "storageMetadata" specifies that only the [standard blob properties and user-specified metadata](../storage/blobs/storage-blob-container-properties-metadata.md) are indexed.
-### Set parameters
+ + "allMetadata" specifies that standard blob properties and any [metadata for found content types](search-blob-metadata-properties.md) are extracted from the blob content and indexed.
-Blob indexers include parameters that optimize indexing for specific use cases, such as content types (JSON, CSV, PDF), or to specify which parts of the blob to index.
+1. Set "parsingMode" if blobs should be mapped to [multiple search documents](search-howto-index-one-to-many-blobs.md), or if they consist of [plain text](search-howto-index-plaintext-blobs.md), [JSON documents](search-howto-index-json-blobs.md), or [CSV files](search-howto-index-csv-blobs.md).
-This section describes several of the more frequently used parameters. For the full list, see [Blob configuration parameters reference](/rest/api/searchservice/create-indexer#blob-configuration-parameters).
+1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to set its parameters:
+ In blob indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" and metadata properties to similarly named and typed fields in an index. For metadata properties, the indexer will automatically replace hyphens `-` with underscores in the search index.
- ```json
- "parameters": {
- "batchSize": null,
- "maxFailedItems": 0,
- "maxFailedItemsPerBatch": 0,
- "configuration": {
- "dataToExtract": "contentAndMetadata",
- "parsingMode": "default",
- "imageAction": "none"
- }
- }
- ```
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
-1. Set "dataToExtract" to control which parts of the blobs are indexed:
+For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
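If you do need explicit mappings (for example, to populate a key field with an encoded path, or to keep additional metadata in differently named fields), a sketch might look like the following; the "ID" and "fileSize" target field names are illustrative and must exist in your index.

```http
PUT https://[service name].search.windows.net/indexers/my-blob-indexer?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "dataSourceName" : "my-blob-datasource",
    "targetIndexName" : "my-search-index",
    "fieldMappings" : [
        { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "ID", "mappingFunction" : { "name" : "base64Encode" } },
        { "sourceFieldName" : "metadata_storage_size", "targetFieldName" : "fileSize" }
    ]
}
```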
- + "contentAndMetadata" specifies that all metadata and textual content extracted from the blob are indexed. This is the default value.
+## How blobs are indexed
- + "storageMetadata" specifies that only the [standard blob properties and user-specified metadata](../storage/blobs/storage-blob-container-properties-metadata.md) are indexed.
+By default, most blobs are indexed as a single search document in the index, including blobs with structured content, such as JSON or CSV, which are indexed as a single chunk of text. However, for JSON or CSV documents that have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element:
- + "allMetadata" specifies that standard blob properties and any [metadata for found content types](search-blob-metadata-properties.md) are extracted from the blob content and indexed.
++ [Indexing JSON blobs](search-howto-index-json-blobs.md)
++ [Indexing CSV blobs](search-howto-index-csv-blobs.md)
-1. Set "parsingMode" if blobs should be mapped to [multiple search documents](search-howto-index-one-to-many-blobs.md), or if they consist of [plain text](search-howto-index-plaintext-blobs.md), [JSON documents](search-howto-index-json-blobs.md), or [CSV files](search-howto-index-csv-blobs.md).
+A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field. If you have images, consider adding [AI enrichment](cognitive-search-concept-intro.md) to get more search utility from that content.
-1. Set "batchSize` if the default (10 documents) is either under utilizing or overwhelming available resources. Default batch sizes are data source specific. Blob indexing sets batch size at 10 documents in recognition of the larger average document size.
+Textual content of a document is extracted into a string field named "content".
-1. See [Handling errors](#handling-errors) for guidance on setting "maxFailedItems" or "maxFailedItemsPerBatch".
+ > [!NOTE]
+ > Azure Cognitive Search imposes [indexer limits](search-limits-quotas-capacity.md#indexer-limits) on how much text it extracts depending on the pricing tier. A warning will appear in the indexer status response if documents are truncated.
<a name="indexing-blob-metadata"></a>
-## Indexing blob metadata
+### Indexing blob metadata
+
+Blob metadata can also be indexed, and that's helpful if you think any of the standard or custom metadata properties will be useful in filters and queries.
-Blob metadata can be helpful in search solutions, providing information about blob origin and content types that could be useful for filtering and queries.
+User-specified metadata properties are extracted verbatim. To receive the values, you must define a field in the search index of type `Edm.String`, with the same name as the metadata key of the blob. For example, if a blob has a metadata key of `Sensitivity` with value `High`, you should define a field named `Sensitivity` in your search index and it will be populated with the value `High`.
-First, any user-specified metadata properties can be extracted verbatim. To receive the values, you must define field in the search index of type `Edm.String`, with same name as the metadata key of the blob. For example, if a blob has a metadata key of `Sensitivity` with value `High`, you should define a field named `Sensitivity` in your search index and it will be populated with the value `High`.
+Standard blob metadata properties can be extracted into similarly named and typed fields, as listed below. The blob indexer automatically creates internal field mappings for these blob metadata properties, converting the original hyphenated name ("metadata-storage-name") to an underscored equivalent name ("metadata_storage_name").
-Second, standard blob metadata properties can be extracted into the fields listed below. The blob indexer automatically creates internal field mappings for these blob metadata properties. You still have to add the fields to the index definition, but you can omit creating field mappings in the indexer.
+You still have to add the underscored fields to the index definition, but you can omit field mappings because the indexer will make the association automatically.
+ **metadata_storage_name** (`Edm.String`) - the file name of the blob. For example, if you have a blob /my-container/my-folder/subfolder/resume.pdf, the value of this field is `resume.pdf`.
Lastly, any metadata properties specific to the document format of the blobs you
It's important to point out that you don't need to define fields for all of the above properties in your search index - just capture the properties you need for your application.
+<a name="PartsOfBlobToIndex"></a>
+
## How to control which blobs are indexed

You can control which blobs are indexed and which are skipped, either by the blob's file type or by setting properties on the blobs themselves that cause the indexer to skip over them.
PUT /indexers/[indexer name]?api-version=2020-06-30
} ```
-## How blobs are indexed
-
-By default, most blobs are indexed as a single search document in the index, including blobs with structured content, such as JSON or CSV, which are indexed as a single chunk of text. However, for JSON or CSV documents that have an internal structure (delimiters), you can assign parsing modes to generate individual search documents for each line or element:
-
-+ [Indexing JSON blobs](search-howto-index-json-blobs.md)
-+ [Indexing CSV blobs](search-howto-index-csv-blobs.md).
-
-A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or a .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field within the same search document.
- ### Add "skip" metadata the blob
-The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed. You can do this by adding the following metadata properties and values to blobs in Blob storage. When the indexer encounters this property, it will skip the blob or its content in the indexing run.
+The indexer configuration parameters apply to all blobs in the container or folder. Sometimes, you want to control how *individual blobs* are indexed.
+
+Add the following metadata properties and values to blobs in Blob Storage. When the indexer encounters this property, it will skip the blob or its content in the indexing run.
| Property name | Property value | Explanation |
| - | -- | -- |
The indexer configuration parameters apply to all blobs in the container or fold
## How to index large datasets
-Indexing blobs can be a time-consuming process. In cases where you have millions of blobs to index, you can speed up indexing by partitioning your data and using multiple indexers to [process the data in parallel](search-howto-large-index.md#parallel-indexing). Here's how you can set this up:
+Indexing blobs can be a time-consuming process. In cases where you have millions of blobs to index, you can speed up indexing by partitioning your data and using multiple indexers to [process the data in parallel](search-howto-large-index.md#parallel-indexing).
-+ Partition your data into multiple blob containers or virtual folders
+1. Partition your data into multiple blob containers or virtual folders.
-+ Set up several data sources, one per container or folder. To point to a blob folder, use the `query` parameter:
+1. Set up several data sources, one per container or folder. Use the "query" parameter to specify the partition: `"container" : { "name" : "my-container", "query" : "my-folder" }`.
- ```json
- {
- "name" : "blob-datasource",
- "type" : "azureblob",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "my-container", "query" : "my-folder" }
- }
- ```
-
-+ Create a corresponding indexer for each data source. All of the indexers should point to the same target search index.
-
-+ One search unit in your service can run one indexer at any given time. Creating multiple indexers as described above is only useful if they actually run in parallel.
+1. Create one indexer for each data source. Point them to the same target index.
- To run multiple indexers in parallel, scale out your search service by creating an appropriate number of partitions and replicas. For example, if your search service has 6 search units (for example, 2 partitions x 3 replicas), then 6 indexers can run simultaneously, resulting in a six-fold increase in the indexing throughput. To learn more about scaling and capacity planning, see [Adjust the capacity of an Azure Cognitive Search service](search-capacity-planning.md).
+Make sure you have sufficient capacity. One search unit in your service can run one indexer at any given time. Creating multiple indexers is only useful if they can run in parallel.
<a name="DealingWithErrors"></a>
-## Handling errors
+## Handle errors
Errors that commonly occur during indexing include unsupported content types, missing content, or oversized blobs.
-By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the "excludedFileNameExtensions" parameter to skip certain content types. However, you might want to indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
-
-### Respond to errors
-
-There are four indexer properties that control the indexer's response when errors occur. The following examples show how to set these properties in the indexer definition. If an indexer already exists, you can add these properties by editing the definition in the portal.
+By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an audio file). You could use the "excludedFileNameExtensions" parameter to skip certain content types. However, you might want indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
-#### `"maxFailedItems"` and `"maxFailedItemsPerBatch"`
-
-Continue indexing if errors happen at any point of processing, either while parsing blobs or while adding documents to an index. Set these properties to the number of acceptable failures. A value of `-1` allows processing no matter how many errors occur. Otherwise, the value is a positive integer.
+There are five indexer properties that control the indexer's response when errors occur.
```http
PUT /indexers/[indexer name]?api-version=2020-06-30
{
    "parameters" : {
        "maxFailedItems" : 10,
- "maxFailedItemsPerBatch" : 10
- }
-}
-```
-
-#### `"failOnUnsupportedContentType"` and `"failOnUnprocessableDocument"`
-
-For some blobs, Azure Cognitive Search is unable to determine the content type, or unable to process a document of an otherwise supported content type. To ignore these failure conditions, set configuration parameters to `false`:
-
-```http
-PUT /indexers/[indexer name]?api-version=2020-06-30
-{
- "parameters" : {
+ "maxFailedItemsPerBatch" : 10,
"configuration" : { "failOnUnsupportedContentType" : false,
- "failOnUnprocessableDocument" : false
- }
+ "failOnUnprocessableDocument" : false,
+ "indexStorageMetadataOnlyForOversizedDocuments": false
    }
  }
}
```
-### Relax indexer constraints
-
-You can also set [blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) that effectively determine whether an error condition exists. The following property can relax constraints, suppressing errors that would otherwise occur.
+| Parameter | Valid values | Description |
+|--|--|-|
+| "maxFailedItems" | -1, null or 0, positive integer | Continue indexing if errors happen at any point of processing, either while parsing blobs or while adding documents to an index. Set these properties to the number of acceptable failures. A value of `-1` allows processing no matter how many errors occur. Otherwise, the value is a positive integer. |
+| "maxFailedItemsPerBatch" | -1, null or 0, positive integer | Same as above, but used for batch indexing. |
+| "failOnUnsupportedContentType" | true or false | If the indexer is unable to determine the content type, specify whether to continue or fail the job. |
+|"failOnUnprocessableDocument" | true or false | If the indexer is unable to process a document of an otherwise supported content type, specify whether to continue or fail the job. |
+| "indexStorageMetadataOnlyForOversizedDocuments" | true or false | Oversized blobs are treated as errors by default. If you set this parameter to true, the indexer will try to index its metadata even if the content cannot be indexed. For limits on blob size, see [service Limits](search-limits-quotas-capacity.md). |
-+ "indexStorageMetadataOnlyForOversizedDocuments" to index storage metadata for blob content that is too large to process. Oversized blobs are treated as errors by default. For limits on blob size, see [service Limits](search-limits-quotas-capacity.md).
+## Next steps
-## See also
+You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [Create an indexer](search-howto-create-indexers.md)
-+ [AI enrichment overview](cognitive-search-concept-intro.md)
-+ [Search over blobs overview](search-blob-storage-integration.md)
++ [Change detection and deletion detection](search-howto-index-changed-deleted-blobs.md)
++ [Index large data sets](search-howto-large-index.md)
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-tables.md
Previously updated : 01/17/2022 Last updated : 01/19/2022

# Index data from Azure Table Storage
-Configure a table [indexer](search-indexer-overview.md) in Azure Cognitive Search to retrieve, serialize, and index entity content from a single table in Azure Table Storage.
+Configure a [search indexer](search-indexer-overview.md) to extract content from Azure Table Storage and make it searchable in Azure Cognitive Search.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Azure Table Storage.
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ [Azure Table Storage](../storage/tables/table-storage-overview.md)
-+ Tables with entities containing non-binary textual content for text-based indexing. This indexer also supports [AI enrichment](cognitive-search-concept-intro.md) if you have binary files.
++ Tables containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis.

## Define the data source
-A primary difference between a table indexer and other indexers is the data source assignment. The data source definition specifies "type": `"azuretable"`, a content path, and how to connect.
+The data source definition specifies the data source type, content path, and how to connect.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
A primary difference between a table indexer and other indexers is the data sour
1. Optionally, set "query" to a filter on PartitionKey. Filtering on PartitionKey is a best practice that improves performance. If "query" filters on anything other than PartitionKey, the indexer executes a full table scan, which results in poor performance for large tables.
-A data source definition can also include additional properties for [soft deletion policies](#soft-delete-using-custom-metadata) and [field mappings](search-indexer-field-mappings.md) if field names and types are not the same.
+A data source definition can also include [soft deletion policies](search-howto-index-changed-deleted-blobs.md), if you want the indexer to delete a search document when the source document is flagged for deletion.
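Putting these pieces together, a minimal sketch of a complete table data source follows (the name, connection string, table, and partition filter are placeholders):

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "my-table-datasource",
  "type" : "azuretable",
  "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" },
  "container" : { "name" : "my-table", "query" : "PartitionKey eq '123'" }
}
```

To enable soft deletion, you would also add a `dataDeletionDetectionPolicy` to this definition, as described in the linked article.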
<a name="Credentials"></a>
A data source definition can also include additional properties for [soft deleti
Indexers can connect to a table using the following connections.
-**Full access storage account connection string**:
-`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }`
+| Managed identity connection string |
+||
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;" }`|
+|This connection string does not require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-storage.md).|
-You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key.
+| Full access storage account connection string |
+|--|
+|`{ "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>;" }` |
+| You can get the connection string from the Storage account page in Azure portal by selecting **Access keys** in the left navigation pane. Make sure to select a full connection string and not just a key. |
-+ **Managed identity connection string**: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;`
-This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).
+| Storage account shared access signature (SAS) connection string |
+|-|
+| `{ "connectionString" : "TableEndpoint=https://<your account>.table.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=t&sp=rl;" }` |
+| The SAS should have the list and read permissions on tables and entities. |
-+ **Storage account shared access signature connection string**: `TableEndpoint=https://<your account>.table.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=t&sp=rl` The shared access signature should have the list and read permissions on containers (tables in this case) and objects (table rows).
+| Table shared access signature |
+|--|
+| `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.table.core.windows.net/<table name>?tn=<table name>&sv=2016-05-31&sig=<the signature>&se=<the validity end time>&sp=r;" }` |
+| The SAS should have query (read) permissions on the table. For more information, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
-+ **Table shared access signature**: `ContainerSharedAccessUri=https://<your storage account>.table.core.windows.net/<table name>?tn=<table name>&sv=2016-05-31&sig=<the signature>&se=<the validity end time>&sp=r` The shared access signature should have query (read) permissions on the table.
+> [!NOTE]
+> If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
-For more information on storage shared access signatures, see [Using shared access signatures](../storage/common/storage-sas-overview.md).
+<a name="Performance"></a>
-> [!NOTE]
-> If you use shared access signature credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration or the indexer will fail with a "Credentials provided in the connection string are invalid or have expired" message.
+### Partition for improved performance
+
+By default, Azure Cognitive Search uses the following internal query filter to keep track of which source entities have been updated since the last run: `Timestamp >= HighWaterMarkValue`. Because Azure tables don't have a secondary index on the `Timestamp` field, this type of query requires a full table scan and is therefore slow for large tables.
+
+To avoid a full scan, you can use table partitions to narrow the scope of each indexer job.
++ If your data can naturally be partitioned into several partition ranges, create a data source and a corresponding indexer for each partition range. Each indexer now has to process only a specific partition range, resulting in better query performance. If the data that needs to be indexed has a small number of fixed partitions, even better: each indexer only does a partition scan.
+
+ For example, to create a data source for processing a partition range with keys from `000` to `100`, use a query like this: `"container" : { "name" : "my-table", "query" : "PartitionKey ge '000' and PartitionKey lt '100' " }`
++ If your data is partitioned by time (for example, if you create a new partition every day or week), consider the following approach:
+
+ + In the data source definition, specify a query similar to the following example: `(PartitionKey ge <TimeStamp>) and (other filters)`.
+
+ + Monitor indexer progress by using [Get Indexer Status API](/rest/api/searchservice/get-indexer-status), and periodically update the `<TimeStamp>` condition of the query based on the latest successful high-water-mark value.
+
+ + With this approach, if you need to trigger a complete reindexing, you need to reset the data source query in addition to resetting the indexer.
## Add search fields to an index
In a [search index](search-what-is-an-index.md), add fields to accept the conten
```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
{
- "name" : "my-search-index",
- "fields": [
- { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
- ]
+ "name" : "my-search-index",
+ "fields": [
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
+ ]
}
```
-1. Create a key field, but do not define field mappings to alternative unique strings in the table.
+1. Create a document key field ("key": true), but allow the indexer to populate it automatically. Do not define a field mapping to an alternative unique string field in your table.
+
+   A table indexer populates the key field with concatenated partition and row keys from the table. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the key value is `PK1RK1`. If the partition key is null, just the row key is used.
- A table indexer will populate the key field with concatenated partition and row keys from the table. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the `Key` field's value is `PK1RK1`. If the partition key is null, just the row key is used.
+1. Create additional fields that correspond to entity fields. For example, if an entity looks like the following example, your search index should have fields for HotelName, Description, and Category.
-1. Create additional fields that correspond to entity fields. Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md).
+ :::image type="content" source="media/search-howto-indexing-tables/table.png" alt-text="Screenshot of table content in Storage browser." border="true":::
+
+ Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md).
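If an entity property name doesn't match the index field name you want, you can rename it with a field mapping on the indexer. A minimal sketch, assuming the index defines a `hotel_name` field and using the indexer and data source names from the next section:

```http
PUT https://[service name].search.windows.net/indexers/table-indexer?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "dataSourceName" : "my-table-datasource",
  "targetIndexName" : "my-search-index",
  "fieldMappings" : [
    { "sourceFieldName" : "HotelName", "targetFieldName" : "hotel_name" }
  ]
}
```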
## Configure the table indexer
In a [search index](search-what-is-an-index.md), add fields to accept the conten
"name" : "table-indexer", "dataSourceName" : "my-table-datasource", "targetIndexName" : "my-search-index",
- "schedule" : { "interval" : "PT2H" }
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+    "configuration": { }
+ },
+ "schedule" : { },
+ "fieldMappings" : [ ]
}
```

1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
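After the indexer is created, you can run it on demand instead of waiting for a schedule. A minimal sketch using the indexer name above:

```http
POST https://[service name].search.windows.net/indexers/table-indexer/run?api-version=2020-06-30
api-key: [admin key]
```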
-## Change and deletion detection
-
-After an initial search index is created, you might want subsequent indexer jobs to pick up only new and changed documents. Fortunately, content in Azure Storage is timestamped, which gives indexers sufficient information for determining what's new and changed automatically. For search content that originates from Azure Table Storage, the indexer keeps track of the entity's `Timestamp` timestamp and reindexes only new and changed content.
-
-Although change detection is a given, deletion detection is not. If you want to detect deleted entities, make sure to use a "soft delete" approach. If you delete the files outright in a table, corresponding search documents will not be removed from the search index.
-
-## Soft delete using custom metadata
-
-To indicate that certain documents must be removed from the search index, you can use a soft delete strategy. Instead of deleting an entity, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the data source. For example, the following policy considers that an entity is deleted if it has an `IsDeleted` property set to `"true"`:
-
-```http
-PUT https://[service name].search.windows.net/datasources?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
-
-{
- "name" : "my-table-datasource",
- "type" : "azuretable",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "table name", "query" : "<query>" },
- "dataDeletionDetectionPolicy" : { "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy", "softDeleteColumnName" : "IsDeleted", "softDeleteMarkerValue" : "true" }
-}
-```
-
-<a name="Performance"></a>
-
-## Performance considerations
-
-By default, Azure Cognitive Search uses the following internal query filter to keep track of which source entities have been updated since the last run: `Timestamp >= HighWaterMarkValue`.
-
-Because Azure tables donΓÇÖt have a secondary index on the `Timestamp` field, this type of query requires a full table scan and is therefore slow for large tables.
-
-Here are two possible approaches for improving table indexing performance. Both rely on using table partitions:
-
-+ If your data can naturally be partitioned into several partition ranges, create a data source and a corresponding indexer for each partition range. Each indexer now has to process only a specific partition range, resulting in better query performance. If the data that needs to be indexed has a small number of fixed partitions, even better: each indexer only does a partition scan. For example, to create a data source for processing a partition range with keys from `000` to `100`, use a query like this:
-
- ```json
- "container" : { "name" : "my-table", "query" : "PartitionKey ge '000' and PartitionKey lt '100' " }
- ```
-
-+ If your data is partitioned by time (for example, you create a new partition every day or week), consider the following approach:
-
- + Use a query of the form: `(PartitionKey ge <TimeStamp>) and (other filters)`.
-
- + Monitor indexer progress by using [Get Indexer Status API](/rest/api/searchservice/get-indexer-status), and periodically update the `<TimeStamp>` condition of the query based on the latest successful high-water-mark value.
-
- + With this approach, if you need to trigger a complete reindexing, you need to reset the datasource query in addition to resetting the indexer.
+## Next steps
-## See also
+You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [Create an indexer](search-howto-create-indexers.md)
++ [Change detection and deletion detection](search-howto-index-changed-deleted-blobs.md)
++ [Index large data sets](search-howto-large-index.md)
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-field-mappings.md
Previously updated : 10/19/2021 Last updated : 01/19/2022

# Field mappings and transformations using Azure Cognitive Search indexers

![Indexer Stages](./media/search-indexer-field-mappings/indexer-stages-field-mappings.png "indexer stages")
-When using Azure Cognitive Search indexers, the indexer will automatically map fields in a data source to fields in a target index, assuming field names and types are compatible. In some cases, input data doesn't quite match the schema of your target index. One solution is to use *field mappings* to specifically set the data path during the indexing process.
+When using Azure Cognitive Search indexers, the indexer will automatically map fields in a data source to fields in a target index, assuming field names and types are compatible. When input data doesn't quite match the schema of your target index, you can define *field mappings* to specifically set the data path.
-Field mappings can be used to address the following scenarios:
+Field mappings address the following scenarios:
+ Mismatched field names. Suppose your data source has a field named `_id`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.
A field mapping function transforms the contents of a field before it's stored i
Performs *URL-safe* Base64 encoding of the input string. Assumes that the input is UTF-8 encoded.
-#### Example - document key lookup
+#### Example: Base-encoding a document key
-Only URL-safe characters can appear in an Azure Cognitive Search document key (so that you can address the document using the [Lookup API](/rest/api/searchservice/lookup-document)). If the source field for your key contains URL-unsafe characters, you can use the `base64Encode` function to convert it at indexing time. However, a document key (both before and after conversion) can't be longer than 1,024 characters.
+Only URL-safe characters can appear in an Azure Cognitive Search document key (so that you can address the document using the [Lookup API](/rest/api/searchservice/lookup-document)). If the source field for your key contains URL-unsafe characters, such as `-` and `\`, use the `base64Encode` function to convert it at indexing time.
-When you retrieve the encoded key at search time, use the `base64Decode` function to get the original key value, and use that to retrieve the source document.
+The following example specifies the base64Encode function on "metadata_storage_name" to handle unsupported characters.
-```JSON
-"fieldMappings" : [
- {
- "sourceFieldName" : "SourceKey",
- "targetFieldName" : "IndexKey",
- "mappingFunction" : {
- "name" : "base64Encode",
- "parameters" : { "useHttpServerUtilityUrlTokenEncode" : false }
+```http
+PUT /indexers?api-version=2020-06-30
+{
+ "dataSourceName" : "my-blob-datasource ",
+ "targetIndexName" : "my-search-index",
+ "fieldMappings" : [
+ {
+ "sourceFieldName" : "metadata_storage_name",
+ "targetFieldName" : "key",
+ "mappingFunction" : {
+ "name" : "base64Encode",
+ "parameters" : { "useHttpServerUtilityUrlTokenEncode" : false }
+ }
}
- }]
- ```
+ ]
+}
+```
+
+A document key (both before and after conversion) can't be longer than 1,024 characters. When you retrieve the encoded key at search time, use the `base64Decode` function to get the original key value, and use that to retrieve the source document.
+
+#### Example: Make a base-encoded field "searchable"
+
+There are times when you need to use an encoded version of a field like "metadata_storage_path" as the key, but also need an un-encoded version for full text search. To support both scenarios, you can map "metadata_storage_path" to two fields: one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema.
+
+```http
+PUT /indexers/blob-indexer?api-version=2020-06-30
+{
+ "dataSourceName" : " blob-datasource ",
+ "targetIndexName" : "my-target-index",
+ "schedule" : { "interval" : "PT2H" },
+ "fieldMappings" : [
+ { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } },
+ { "sourceFieldName" : "metadata_storage_path", "targetFieldName" : "path" }
+ ]
+}
+```
#### Example - preserve original values
When you retrieve the encoded key at search time, you can then use the `urlDecod
### Example - decode blob metadata
- Some Azure storage clients automatically url encode blob metadata if it contains non-ASCII characters. However, if you want to make such metadata searchable (as plain text), you can use the `urlDecode` function to turn the encoded data back into regular strings when populating your search index.
+ Some Azure storage clients automatically URL-encode blob metadata if it contains non-ASCII characters. However, if you want to make such metadata searchable (as plain text), you can use the `urlDecode` function to turn the encoded data back into regular strings when populating your search index.
```JSON "fieldMappings" : [
When you retrieve the encoded key at search time, you can then use the `urlDecod
### fixedLengthEncode function
- This function converts a string of any length to a fixed length string.
+ This function converts a string of any length to a fixed-length string.
### Example - map document keys that are too long
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-windows-microsoft-services.md
For additional installation options and further details, see the [**Log Analytic
#### Determine the logs to send
-For the Windows DNS Server and Windows Firewall connectors, select the **Install solution** button. For the legacy Security Events connector, choose the [**event set**](windows-security-event-id-reference.md) you wish to send and select **Update**.
+For the Windows DNS Server and Windows Firewall connectors, select the **Install solution** button. For the legacy Security Events connector, choose the **event set** you wish to send and select **Update**. For more information, see [Windows security event sets that can be sent to Microsoft Sentinel](windows-security-event-id-reference.md).
You can find and query the data for these services using the table names in their respective sections in the [Data connectors reference](data-connectors-reference.md) page.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
If a longer timeout duration is required, consider upgrading to an [App Service
| **Supported by** | Microsoft | | | |
+For more information, see:
-For more information, see [Insecure protocols workbook setup](./get-visibility.md#use-built-in-workbooks).
-
-See also: [**Windows Security Events via AMA**](#windows-security-events-via-ama) connector based on Azure Monitor Agent (AMA)
-
-[Configure the **Security events / Windows Security Events connector** for **anomalous RDP login detection**](#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection).
+- [Windows security event sets that can be sent to Microsoft Sentinel](windows-security-event-id-reference.md)
+- [Insecure protocols workbook setup](./get-visibility.md#use-built-in-workbooks)
+- [**Windows Security Events via AMA**](#windows-security-events-via-ama) connector based on Azure Monitor Agent (AMA)
+- [Configure the **Security events / Windows Security Events connector** for **anomalous RDP login detection**](#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection).
## SentinelOne (Preview)
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/policy-reference.md
Title: Built-in policy definitions for Azure Service Fabric description: Lists Azure Policy built-in policy definitions for Azure Service Fabric. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
spring-cloud How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-elastic-apm-java-agent-monitor.md
+
+ Title: How to monitor Spring Boot apps with Elastic APM Java Agent
+description: How to use Elastic APM Java Agent to monitor Spring Boot applications running in Azure Spring Cloud
++++ Last updated : 12/07/2021+++
+# How to monitor Spring Boot apps with Elastic APM Java Agent
+
+This article explains how to use Elastic APM Agent to monitor Spring Boot applications running in Azure Spring Cloud.
+
+With the Elastic Observability Solution, you can achieve unified observability to:
+
+* Monitor apps by using the Elastic APM Java Agent and custom persistent storage in Azure Spring Cloud.
+* Use diagnostic settings to ship Azure Spring Cloud logs to Elastic. For more information, see [Analyze logs with Elastic (ELK) using diagnostics settings](how-to-elastic-diagnostic-settings.md).
+
+The following video introduces unified observability for Spring Boot applications using Elastic.
+
+<br>
+
+> [!VIDEO https://www.youtube.com/embed/KjmQX1SxZdA]
+
+## Prerequisites
+
+* [Azure CLI](/cli/azure/install-azure-cli)
+* [Deploy Elastic on Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement)
+* [Elastic APM Endpoint and Secret Token from the Elastic Deployment](https://www.elastic.co/guide/en/cloud/current/ec-manage-apm-and-fleet.html)
+
+## Deploy the Spring Petclinic application
+
+This article uses the Spring Petclinic sample to walk through the required steps. Use the following steps to deploy the sample application:
+
+1. Follow the steps in [Deploy Spring Boot apps using Azure Spring Cloud and MySQL](https://github.com/Azure-Samples/spring-petclinic-microservices#readme) until you reach the [Deploy Spring Boot applications and set environment variables](https://github.com/Azure-Samples/spring-petclinic-microservices#deploy-spring-boot-applications-and-set-environment-variables) section.
+
+1. Use the Azure Spring Cloud extension for Azure CLI with the following command to create an application to run in Azure Spring Cloud:
+
+ ```azurecli
+ az spring-cloud app create \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --name <your-app-name> \
+ --is-public true
+ ```
+
+## Enable custom persistent storage for Azure Spring Cloud
+
+Use the following steps to enable custom persistent storage:
+
+1. Follow the steps in [How to enable your own persistent storage in Azure Spring Cloud](how-to-custom-persistent-storage.md).
+
+1. Use the following Azure CLI command to add persistent storage for your Azure Spring Cloud apps.
+
+ ```azurecli
+ az spring-cloud app append-persistent-storage \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --name <your-app-name> \
+ --persistent-storage-type AzureFileVolume \
+ --share-name <your-Azure-file-share-name> \
+ --mount-path <unique-mount-path> \
+ --storage-name <your-mounted-storage-name>
+ ```
+
+## Activate Elastic APM Java Agent
+
+Before proceeding, you'll need your Elastic APM server connectivity information handy, which assumes you've deployed Elastic on Azure. For more information, see [How to deploy and manage Elastic on Microsoft Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement). To get this information, use the following steps:
+
+1. In the Azure portal, go to the **Overview** page of your Elastic deployment, then select **Manage Elastic Cloud Deployment**.
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png" alt-text="Azure portal screenshot of 'Elasticsearch (Elastic Cloud)' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png":::
+
+1. Under your deployment on Elastic Cloud Console, select the **APM & Fleet** section to get Elastic APM Server endpoint and secret token.
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png" alt-text="Elastic screenshot 'APM & Fleet' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png":::
+
+1. Download Elastic APM Java Agent from [Maven Central](https://search.maven.org/search?q=g:co.elastic.apm%20AND%20a:elastic-apm-agent).
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/maven-central-repository-search.png" alt-text="Maven Central screenshot with jar download highlighted." lightbox="media/how-to-elastic-apm-java-agent-monitor/maven-central-repository-search.png":::
+
+1. Upload the Elastic APM Java Agent to the custom persistent storage you enabled earlier. Go to your Azure file share and select **Upload** to add the agent JAR file.
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png" alt-text="Azure portal screenshot showing 'Upload files' pane of 'File share' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png":::
+
+1. After you have the Elastic APM endpoint and secret token, use the following command to activate Elastic APM Java agent when deploying applications. The placeholder *`<agent-location>`* refers to the mounted storage location of the Elastic APM Java Agent.
+
+ ```azurecli
+ az spring-cloud app deploy \
+ --name <your-app-name> \
+ --artifact-path <unique-path-to-your-app-jar-on-custom-storage> \
+ --jvm-options='-javaagent:<agent-location>' \
+ --env ELASTIC_APM_SERVICE_NAME=<your-app-name> \
+ ELASTIC_APM_APPLICATION_PACKAGES='<your-app-package-name>' \
+ ELASTIC_APM_SERVER_URL='<your-Elastic-APM-server-URL>' \
+ ELASTIC_APM_SECRET_TOKEN='<your-Elastic-APM-secret-token>'
+ ```
+
+## Automate provisioning
+
+You can also run a provisioning automation pipeline using Terraform or an Azure Resource Manager template (ARM template). This pipeline can provide a complete hands-off experience to instrument and monitor any new applications that you create and deploy.
+
+### Automate provisioning using Terraform
+
+To configure the environment variables in a Terraform template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Manages an Active Azure Spring Cloud Deployment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_active_deployment).
+
+```terraform
+resource "azurerm_spring_cloud_java_deployment" "example" {
+ ...
+ jvm_options = "-javaagent:<unique-path-to-your-app-jar-on-custom-storage>"
+ ...
+ environment_variables = {
+ "ELASTIC_APM_SERVICE_NAME"="<your-app-name>",
+ "ELASTIC_APM_APPLICATION_PACKAGES"="<your-app-package>",
+ "ELASTIC_APM_SERVER_URL"="<your-Elastic-APM-server-URL>",
+ "ELASTIC_APM_SECRET_TOKEN"="<your-Elastic-APM-secret-token>"
+ }
+}
+```
+
+### Automate provisioning using an ARM template
+
+To configure the environment variables in an ARM template, add the following code to the template, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=json).
+
+```arm
+"deploymentSettings": {
+ "environmentVariables": {
+        "ELASTIC_APM_SERVICE_NAME": "<your-app-name>",
+        "ELASTIC_APM_APPLICATION_PACKAGES": "<your-app-package>",
+        "ELASTIC_APM_SERVER_URL": "<your-Elastic-APM-server-URL>",
+        "ELASTIC_APM_SECRET_TOKEN": "<your-Elastic-APM-secret-token>"
+ },
+ "jvmOptions": "-javaagent:<unique-path-to-your-app-jar-on-custom-storage>",
+ ...
+}
+```
+
+## Upgrade Elastic APM Java Agent
+
+To plan your upgrade, see [Upgrade versions](https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html) for Elastic Cloud on Azure, and [Breaking Changes](https://www.elastic.co/guide/en/apm/server/current/breaking-changes.html) for APM. After you've upgraded APM Server, upload the Elastic APM Java agent JAR file in the custom persistent storage and restart apps with updated JVM options pointing to the upgraded Elastic APM Java agent JAR.
+
+## Monitor applications and metrics with Elastic APM
+
+Use the following steps to monitor applications and metrics:
+
+1. In the Azure portal, go to the **Overview** page of your Elastic deployment, then select the Kibana link.
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png" alt-text="Azure portal screenshot showing Elasticsearch page with 'Deployment URL / Kibana' highlighted." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png":::
+
+1. After Kibana is open, search for *APM* in the search bar, then select **APM**.
+
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png" alt-text="Elastic / Kibana screenshot showing APM search results." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png":::
+
+Kibana APM is the curated application to support application monitoring workflows. Here you can view high-level details such as request/response times, throughput, and the transactions in a service that have the most impact on duration.
++
+You can drill down in a specific transaction to understand the transaction-specific details such as the distributed tracing.
++
+The Elastic APM Java Agent also captures JVM metrics from the Azure Spring Cloud apps, which are available in the Kibana APM app for troubleshooting.
++
+Using the built-in AI engine in the Elastic solution, you can also enable anomaly detection on Azure Spring Cloud services and choose an appropriate action, such as a Teams notification, creation of a JIRA issue, a webhook-based API call, and others.
++
+## Next steps
+
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](./quickstart.md)
+* [Deploy Elastic on Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement)
spring-cloud How To Elastic Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-elastic-diagnostic-settings.md
+
+ Title: Analyze logs with Elastic Cloud from Azure Spring Cloud
+description: Learn how to analyze diagnostics logs in Azure Spring Cloud using Elastic
+++ Last updated : 12/07/2021++++
+# Analyze logs with Elastic (ELK) using diagnostics settings
+
+**This article applies to:** ✔️ Java ✔️ C#
+
+This article explains how to use the diagnostics functionality of Azure Spring Cloud to analyze logs with Elastic (ELK).
+
+The following video introduces unified observability for Spring Boot applications using Elastic.
+
+<br>
+
+> [!VIDEO https://www.youtube.com/embed/KjmQX1SxZdA]
+
+## Configure diagnostics settings
+
+To configure diagnostics settings, use the following steps:
+
+1. In the Azure portal, go to your Azure Spring Cloud instance.
+1. Select the **diagnostics settings** option, then select **Add diagnostics setting**.
+1. Enter a name for the setting, choose **Send to partner solution**, then select **Elastic** and an Elastic deployment where you want to send the logs.
+1. Select **Save**.
++
+> [!NOTE]
+> There might be a gap of up to 15 minutes between when logs are emitted and when they appear in your Elastic deployment.
+> If the Azure Spring Cloud instance is deleted or moved, the operation will not cascade to the diagnostics settings resources. You have to manually delete the diagnostics settings resources before you perform the operation against its parent, the Azure Spring Cloud instance. Otherwise, if you provision a new Azure Spring Cloud instance with the same resource ID as the deleted one, or if you move the Azure Spring Cloud instance back, the previous diagnostics settings resources will continue to extend it.
+
+## Analyze the logs with Elastic
+
+To learn more about deploying Elastic on Azure, see [How to deploy and manage Elastic on Microsoft Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement).
+
+Use the following steps to analyze the logs:
+
+1. From the Elastic deployment overview page in the Azure portal, open **Kibana**.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png" alt-text="Azure portal screenshot showing 'Elasticsearch (Elastic Cloud)' page with Deployment URL / Kibana highlighted." lightbox="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png":::
+
+1. In Kibana, in the **Search** bar at the top, type *Spring Cloud type:dashboard*.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-spring-cloud-dashboard.png" alt-text="Elastic / Kibana screenshot showing 'Spring Cloud type:dashboard' search results." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-spring-cloud-dashboard.png":::
+
+1. Select **[Logs Azure] Azure Spring Cloud logs Overview** from the results.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png" alt-text="Elastic / Kibana screenshot showing Azure Spring Cloud Application Console Logs." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-dashboard-full.png":::
+
+1. Search within the out-of-the-box Azure Spring Cloud dashboards by using queries such as the following:
+
+ ```query
+ azure.springcloudlogs.properties.app_name : "visits-service"
+ ```
+
+## Analyze the logs with Kibana Query Language in Discover
+
+Application logs provide critical information and verbose logs about your application's health, performance, and more. Use the following steps to analyze the logs:
+
+1. In Kibana, in the **Search** bar at the top, type *Discover*, then select the result.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-go-discover.png" alt-text="Elastic / Kibana screenshot showing 'Discover' search results." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-go-discover.png":::
+
+1. In the **Discover** app, select the **logs-** index pattern if it's not already selected.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-index-pattern.png" alt-text="Elastic / Kibana screenshot showing logs in the Discover app." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-index-pattern.png":::
+
+1. Use queries such as the ones in the following sections to help you understand your application's current and past states.
+
+For more information about different queries, see [Guide to Kibana Query Language](https://www.elastic.co/guide/en/kibana/current/kuery-query.html).
+
+### Show all logs from Azure Spring Cloud
+
+To review a list of application logs from Azure Spring Cloud, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
+
+```query
+azure_log_forwarder.resource_type : "Microsoft.AppPlatform/Spring"
+```
++
+### Show specific log types from Azure Spring Cloud
+
+To review a list of application logs of a specific category from Azure Spring Cloud, sorted by time with the most recent logs shown first, run the following query in the **Search** box:
+
+```query
+azure.springcloudlogs.category : "ApplicationConsole"
+```
++
+### Show log entries containing errors or exceptions
+
+To review unsorted log entries that mention an error or exception, run the following query:
+
+```query
+azure_log_forwarder.resource_type : "Microsoft.AppPlatform/Spring" and (log.level : "ERROR" or log.level : "EXCEPTION")
+```
++
+The Kibana Query Language helps you form queries by providing autocomplete and suggestions to help you gain insights from the logs. Use your query to find errors, or modify the query terms to find specific error codes or exceptions.
+
+### Show log entries from a specific service
+
+To review log entries that are generated by a specific service, run the following query:
+
+```query
+azure.springcloudlogs.properties.service_name : "sa-petclinic-service"
+```
++
+### Show Config Server logs containing warnings or errors
+
+To review logs from Config Server, run the following query:
+
+```query
+azure.springcloudlogs.properties.type : "ConfigServer" and (log.level : "ERROR" or log.level : "WARN")
+```
++
+### Show Service Registry logs
+
+To review logs from Service Registry, run the following query:
+
+```query
+azure.springcloudlogs.properties.type : "ServiceRegistry"
+```
++
+## Visualizing logs from Azure Spring Cloud with Elastic
+
+Kibana allows you to visualize data with Dashboards and a rich ecosystem of visualizations. For more information, see [Dashboard and Visualization](https://www.elastic.co/guide/en/kibana/current/dashboard.html).
+
+Use the following steps to show the various log levels in your logs so you can assess the overall health of the services.
+
+1. From the available fields list on the left in **Discover**, search for *log.level* in the search box under the **logs-** index pattern.
+
+1. Select the **log.level** field. From the floating informational panel about **log.level**, select **Visualize**.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-visualize.png" alt-text="Elastic / Kibana screenshot showing Discover app showing log levels." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-asc-visualize.png":::
+
+1. From here, you can choose to add more data from the left pane, or choose from multiple suggestions how you would like to visualize your data.
+
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-kibana-visualize-lens.png" alt-text="Elastic / Kibana screenshot showing Discover app showing visualization options." lightbox="media/how-to-elastic-diagnostic-settings/elastic-kibana-visualize-lens.png":::
+
+## Next steps
+
+* [Quickstart: Deploy your first Spring Boot app in Azure Spring Cloud](quickstart.md)
+* [Deploy Elastic on Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement)
spring-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/policy-reference.md
Title: Built-in policy definitions for Azure Spring Cloud description: Lists Azure Policy built-in policy definitions for Azure Spring Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/redundancy-migration.md
Previously updated : 11/30/2021 Last updated : 01/19/2022
The following table provides an overview of how to switch from each type of repl
|--|--|--|--|--|
| <b>…from LRS</b> | N/A | Use Azure portal, PowerShell, or CLI to change the replication setting<sup>1,2</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>5</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Switch to GRS/RA-GRS first and then request a live migration<sup>3</sup> |
| <b>…from GRS/RA-GRS</b> | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | Perform a manual migration <br /><br /> OR <br /><br /> Switch to LRS first and then request a live migration<sup>3</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>3</sup> |
-| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use PowerShell or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
+| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use Azure portal, PowerShell, or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
| <b>…from GZRS/RA-GZRS</b> | Perform a manual migration | Perform a manual migration | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A |

<sup>1</sup> Incurs a one-time egress charge.<br />
<sup>2</sup> Migrating from LRS to GRS is not supported if the storage account contains blobs in the archive tier.<br />
<sup>3</sup> Live migration is supported for standard general-purpose v2 and premium file share storage accounts. Live migration is not supported for premium block blob or page blob storage accounts.<br />
-<sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
+<sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with Azure portal, PowerShell, or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
<sup>5</sup> Migrating from LRS to ZRS is not supported if the storage account contains Azure Files NFSv4.1 shares. <br />

> [!CAUTION]
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/machine-learning-udf.md
# Integrate Azure Stream Analytics with Azure Machine Learning (Preview)
-You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) allows you to use any popular open-source tool, such as Tensorflow, scikit-learn, or PyTorch, to prep, train, and deploy models.
+You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) allows you to use any popular open-source tool, such as TensorFlow, scikit-learn, or PyTorch, to prep, train, and deploy models.
## Prerequisites
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Consumption models in Synapse SQL enable you to use different database objects.
| **Schemas** | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true), schemas are supported. Use schemas to isolate different tenants and place their tables per schemas. |
| **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | Temporary tables might be used just to store some information from the system views, literals, or other temporary tables. UPDATE/DELETE on temp table is also supported. You can join temporary tables with the system views. You cannot select data from an external table to insert it into temporary table or join a temporary table with an external table - these operations will fail because external data and temporary tables cannot be mixed in the same query. |
| **User defined procedures** | [Yes](/sql/t-sql/statements/create-procedure-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, stored procedures can be placed in any user databases (not `master` database). Procedures can just read external data and use [query language elements](#query-language) that are available in serverless pool. |
-| **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions. Scalar user-defined functions are not supported. |
+| **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions are supported. Scalar user-defined functions are not supported. |
| **Triggers** | No | No, serverless SQL pools do not allow changing data, so the triggers cannot react on data changes. |
| **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | Yes, [external tables](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true) are available and can be used to read data from [Azure Data Lake storage or Dataverse](#data-access). See the supported [data formats](#data-formats). |
| **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, [resultset caching](../sql-data-warehouse/performance-tuning-result-set-caching.md)). In addition, materialized views are supported. | No, only the file statistics are cached. |
Consumption models in Synapse SQL enable you to use different database objects.
| **[Table indexes](../sql-data-warehouse/sql-data-warehouse-tables-index.md?context=/azure/synapse-analytics/context/context)** | Yes | No, indexes are not supported. |
| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | External tables do not support partitioning. You can partition files using Hive-partition folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) on folder partition structure, but the external tables cannot be created on partitioned folders. |
| **[Statistics](develop-tables-statistics.md)** | Yes | Yes, statistics are [created on external files](develop-tables-statistics.md#statistics-in-serverless-sql-pool). |
-| **Workload management, resource classes, and concurrency control** | Yes, see [workload management, resource classes, and concurrency control](../sql-data-warehouse/resource-classes-for-workload-management.md?context=/azure/synapse-analytics/context/context). | No, serverless SQL pool automatically manages the resources. |
-| **Cost control** | Yes, using scale-up and scale-down actions. | Yes, using [the Azure portal or T-SQL procedure](./data-processed.md#cost-control). |
+| **Workload management, resource classes, and concurrency control** | Yes, see [workload management, resource classes, and concurrency control](../sql-data-warehouse/resource-classes-for-workload-management.md?context=/azure/synapse-analytics/context/context). | No, you cannot manage the resources that are assigned to the queries. The serverless SQL pool automatically manages the resources. |
+| **Cost control** | Yes, using scale-up and scale-down actions. | Yes, you can limit daily, weekly, or monthly usage of serverless pool using [the Azure portal or T-SQL procedure](./data-processed.md#cost-control). |
## Query language
Query languages used in Synapse SQL can have different supported features depend
| | Dedicated | Serverless |
| --- | --- | --- |
-| **SELECT statement** | Yes. `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), OFFSET/FETCH are not supported. | Yes, `SELECT` statement is supported, but some Transact-SQL query clauses like [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPNG SETS, and query hints are not supported. |
+| **SELECT statement** | Yes. `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), OFFSET/FETCH are not supported. | Yes, `SELECT` statement is supported, but some Transact-SQL query clauses like [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPING SETS, and the query hints are not supported. |
| **INSERT statement** | Yes | No. Upload new data to Data lake using Spark or other tools. Use Cosmos DB with the analytical storage for highly transactional workloads. You can use [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) to create an external table and insert data. | | **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads. | | **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads.| | **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, merge Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. |
-| **CTAS statement** | Yes | No, [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) statement is not supported in serverless SQL pool. |
+| **CTAS statement** | Yes | No, [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) statement is not supported in the serverless SQL pool. |
| **CETAS statement** | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). CETAS supports Parquet and CSV output formats. | | **[Transactions](develop-transactions.md)** | Yes | Yes, transactions are applicable only on the meta-data objects. |
-| **[Labels](develop-label.md)** | Yes | No, labels are not supported. |
+| **[Labels](develop-label.md)** | Yes | No, labels are not supported in serverless SQL pools. |
| **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you cannot load data into the serverless SQL pool because data is stored on external storage. You can initially load data into an external table using CETAS statement. | | **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. You can export data from external storage (Azure data lake, Dataverse, Cosmos DB) into Azure data lake using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | | **Types** | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL types are supported, except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Table type. See how to [map Parquet column types to SQL types here](develop-openrowset.md#type-mapping-for-parquet). |
-| **Cross-database queries** | No | Yes, 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. The queries can reference the serverless SQL databases or the Lake databases in the same workspace. Cross-workspace queries are not supported. |
+| **Cross-database queries** | No | Yes, cross-database queries and 3-part-name references are supported, including the [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. Queries can reference serverless SQL databases or Lake databases in the same workspace. Cross-workspace queries are not supported. |
| **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, and [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions are supported. | | **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions are supported. | | **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | `HASHBYTES` is the only supported cryptographic function in serverless SQL pools. |
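The CETAS row in the table above is the main way to materialize serverless query results back to the data lake. The following is a minimal, hypothetical sketch rather than part of this article: it assumes the `SqlServer` PowerShell module (`Invoke-Sqlcmd`), a workspace named `contoso` with a SQL login, a database `demo` that already contains an external data source `MyDataLake` and a Parquet file format `ParquetFF`, and made-up source files and column names.

```azurepowershell-interactive
# Hypothetical sketch: run a CETAS statement on a serverless SQL pool with Invoke-Sqlcmd.
# Endpoint, credentials, external data source, file format, storage URL, and column
# names are all placeholders and must already exist in your environment.
$cetas = @"
CREATE EXTERNAL TABLE dbo.SalesByRegion
WITH (
    LOCATION    = 'curated/sales-by-region/',
    DATA_SOURCE = MyDataLake,   -- existing EXTERNAL DATA SOURCE
    FILE_FORMAT = ParquetFF     -- existing EXTERNAL FILE FORMAT (PARQUET)
)
AS
SELECT region, SUM(amount) AS total_amount
FROM OPENROWSET(
        BULK 'https://contosolake.dfs.core.windows.net/raw/sales/*.parquet',
        FORMAT = 'PARQUET') AS source
GROUP BY region;
"@

Invoke-Sqlcmd `
    -ServerInstance "contoso-ondemand.sql.azuresynapse.net" `
    -Database "demo" `
    -Username "sqladminuser" `
    -Password "<password>" `
    -Query $cetas
```

Because serverless pools cannot update or delete the exported files, re-running an export typically means dropping the external table and writing to a new, empty folder.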
Synapse SQL pools enable you to use built-in security features to secure your da
| **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema. | | **Permissions - Object-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users | Yes, you can GRANT, DENY, and REVOKE permissions to users/logins on the system objects that are supported. | | **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes, column-level security is supported in serverless SQL pools. |
-| **Row-level security** | [Yes](/sql/relational-databases/security/row-level-security?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) | No built-in support. Use custom views as a [workaround](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). |
-| **Data masking** | [Yes](../guidance/security-white-paper-access-control.md#dynamic-data-masking) | No, use wrapper SQL views that explicitly mask some columns as a workaround. |
+| **Row-level security** | [Yes](/sql/relational-databases/security/row-level-security?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) | No, there is no built-in support for row-level security. Use custom views as a [workaround](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). |
+| **Data masking** | [Yes](../guidance/security-white-paper-access-control.md#dynamic-data-masking) | No, built-in data masking is not supported in serverless SQL pools. Use wrapper SQL views that explicitly mask some columns as a workaround. |
| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER`, `IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in a variable that can be used in the query). | | **Transparent Data Encryption (TDE)** | [Yes](../../azure-sql/database/transparent-data-encryption-tde-overview.md) | No, Transparent Data Encryption is not supported. | | **Data Discovery & Classification** | [Yes](../../azure-sql/database/data-discovery-and-classification-overview.md) | No, Data Discovery & Classification is not supported. |
You can use various tools to connect to Synapse SQL to query data.
| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts can be used in Synapse Studio. Use SSMS or ADS instead of Synapse Studio if you are returning a large amount of data as a result. | | **Power BI** | Yes | Yes, you can [use Power BI](tutorial-connect-power-bi-desktop.md) to create reports on serverless SQL pool. Import mode is recommended for reporting.| | **Azure Analysis Service** | Yes | Yes, you can load data in Azure Analysis Service using the serverless SQL pool. |
-| **Azure Data Studio (ADS)** | Yes | Yes, you can [use Azure Data Studio](get-started-azure-data-studio.md) (version 1.18.0 or higher) to query serverless SQL pool. SQL scripts and SQL Notebooks are supported. |
-| **SQL Server Management Studio (SSMS)** | Yes | Yes, you can [use SQL Server Management Studio](get-started-ssms.md) (version 18.5 or higher) to query serverless SQL pool. SSMS shows only the objects that are available in the serverless SQL pools. |
+| **Azure Data Studio (ADS)** | Yes | Yes, you can [use Azure Data Studio](get-started-azure-data-studio.md) (version 1.18.0 or higher) to query a serverless SQL pool. SQL scripts and SQL notebooks are supported. |
+| **SQL Server Management Studio (SSMS)** | Yes | Yes, you can [use SQL Server Management Studio](get-started-ssms.md) (version 18.5 or higher) to query a serverless SQL pool. SSMS shows only the objects that are available in the serverless SQL pools. |
> [!NOTE] > You can use SSMS to connect to a serverless SQL pool and query it. SSMS is partially supported starting from version 18.5; you can use it only to connect and query.
Data that is analyzed can be stored on various storage types. The following tabl
| | Dedicated | Serverless | | | | | | **Internal storage** | Yes | No, data is placed in Azure Data Lake or [Cosmos DB analytical storage](query-cosmos-db-analytical-store.md). |
-| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. |
-| **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. |
+| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
+| **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). | | **Dataverse** | No | Yes, you can read Dataverse tables using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). | | **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
The Azure Virtual Desktop agent updates at least once per month.
Here's what's changed in the Azure Virtual Desktop Agent:
+- Version 1.0.3855.1400: This update was released December 2021 and has the following changes:
+ - Fixes an issue that caused an unhandled exception.
+ - This version now supports Azure Stack HCI by retrieving VM metadata from the Azure Arc service.
+ - This version now allows built-in stacks to be automatically updated if their version number is below a certain threshold.
+ - The UrlsAccessibleCheck health check now requests only the URL up to the path delimiter to prevent 404 errors.
- Version 1.0.3719.1700: This update was released November 2021 and has the following changes: - Updated agent error messages. - Fixes an issue with the agent restarting every time the side-by-side stack was updated.
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
The following JSON shows the schema for the Application Health extension. The ex
"settings": { "protocol": "<protocol>", "port": "<port>",
- "requestPath": "</requestPath>"
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": "5.0",
+ "numberOfProbes": "1.0"
} } }
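As a hedged illustration of how these settings might be applied with Azure PowerShell (not a step from this article), the sketch below assumes an existing Linux scale set named `myScaleSet` in `myResourceGroup`, and uses the Application Health extension's publisher and type names, which you should verify for your OS before deploying.

```azurepowershell-interactive
# Hypothetical sketch: add the Application Health extension to a scale set model,
# including the probe interval and probe count settings shown in the schema above.
$publicSettings = @{
    protocol          = "http"       # assumed probe protocol
    port              = 80           # assumed probe port
    requestPath       = "/health"    # assumed probe path
    intervalInSeconds = 5
    numberOfProbes    = 1
}

$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"

Add-AzVmssExtension `
    -VirtualMachineScaleSet $vmss `
    -Name "ApplicationHealth" `
    -Publisher "Microsoft.ManagedServices" `
    -Type "ApplicationHealthLinux" `
    -TypeHandlerVersion "1.0" `
    -AutoUpgradeMinorVersion $true `
    -Setting $publicSettings

# Push the updated model to the scale set.
Update-AzVmss `
    -ResourceGroupName "myResourceGroup" `
    -VMScaleSetName "myScaleSet" `
    -VirtualMachineScaleSet $vmss
```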
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
Virtual machine scale sets in Flexible Orchestration mode manages standard Azure
You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it is recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land. ```azurecli-interactive
-az vm create --vmss "myVMSS" --platform_fault_domain 1
+az vm create --vmss "myVMSS" --platform-fault-domain 1
``` ### Instance naming
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/quick-create-portal.md
Previously updated : 06/25/2020 Last updated : 12/13/2021
If you don't have an Azure subscription, create a [free account](https://azure.m
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com) if you haven't already.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create virtual machine 1. Type **virtual machines** in the search. 1. Under **Services**, select **Virtual machines**.
-1. In the **Virtual machines** page, select **Add**. The **Create a virtual machine** page opens.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+ 1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name. ![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
-1. Under **Instance details**, type *myVM* for the **Virtual machine name**, choose *East US* for your **Region**, and choose *Ubuntu 18.04 LTS* for your **Image**. Leave the other defaults.
+1. Under **Instance details**, type *myVM* for the **Virtual machine name**, and choose *Ubuntu 18.04 LTS - Gen2* for your **Image**. Leave the other defaults. The default size and pricing are shown only as an example. Size availability and pricing depend on your region and subscription.
+
+ :::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image, and size.":::
- ![Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size](./media/quick-create-portal/instance-details.png)
1. Under **Administrator account**, select **SSH public key**.
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/quick-create-powershell.md
Previously updated : 07/31/2020 Last updated : 01/14/2022 -+ # Quickstart: Create a Linux virtual machine in Azure with PowerShell
The Azure Cloud Shell is a free interactive shell that you can use to run the st
To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
-## Create SSH key pair
-
-Use [ssh-keygen](https://www.ssh.com/ssh/keygen/) to create an SSH key pair. If you already have an SSH key pair, you can skip this step.
--
-```azurepowershell-interactive
-ssh-keygen -t rsa -b 4096
-```
-
-You will be prompted to provide a filename for the key pair or you can hit **Enter** to use the default location of `/home/<username>/.ssh/id_rsa`. You will also be able to create a password for the keys, if you like.
-
-For more detailed information on how to create SSH key pairs, see [How to use SSH keys with Windows](ssh-from-windows.md).
-
-If you create your SSH key pair using the Cloud Shell, it will be stored in a [storage account that is automatically created by Cloud Shell](../../cloud-shell/persisting-shell-storage.md). Don't delete the storage account, or the files share in it, until after you have retrieved your keys or you will lose access to the VM.
## Create a resource group
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.
New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS" ```
-## Create virtual network resources
-Create a virtual network, subnet, and a public IP address. These resources are used to provide network connectivity to the VM and connect it to the internet:
+## Create a virtual machine
-```azurepowershell-interactive
-# Create a subnet configuration
-$subnetConfig = New-AzVirtualNetworkSubnetConfig `
- -Name "mySubnet" `
- -AddressPrefix 192.168.1.0/24
-
-# Create a virtual network
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName "myResourceGroup" `
- -Location "EastUS" `
- -Name "myVNET" `
- -AddressPrefix 192.168.0.0/16 `
- -Subnet $subnetConfig
-
-# Create a public IP address and specify a DNS name
-$pip = New-AzPublicIpAddress `
- -ResourceGroupName "myResourceGroup" `
- -Location "EastUS" `
- -AllocationMethod Static `
- -IdleTimeoutInMinutes 4 `
- -Name "mypublicdns$(Get-Random)"
-```
+We will automatically generate an SSH key pair to use for connecting to the VM. The public key that is created using `-GenerateSshKey` will be stored in Azure as a resource, using the name you provide as `SshKeyName`. The SSH key resource can be reused for creating additional VMs; a short reuse sketch appears later in this section. Both the public and private keys will also be downloaded for you. When you create your SSH key pair using the Cloud Shell, the keys are stored in a [storage account that is automatically created by Cloud Shell](../../cloud-shell/persisting-shell-storage.md). Don't delete the storage account, or the file share in it, until after you have retrieved your keys or you will lose access to the VM.
+
+You will be prompted for a user name that will be used when you connect to the VM. You will also be asked for a password, which you can leave blank. Password login for the VM is disabled when using an SSH key.
-Create an Azure Network Security Group and traffic rule. The Network Security Group secures the VM with inbound and outbound rules. In the following example, an inbound rule is created for TCP port 22 that allows SSH connections. To allow incoming web traffic, an inbound rule for TCP port 80 is also created.
+In this example, you create a VM named *myVM*, in *East US*, using the *Standard_B2s* VM size.
```azurepowershell-interactive
-# Create an inbound network security group rule for port 22
-$nsgRuleSSH = New-AzNetworkSecurityRuleConfig `
- -Name "myNetworkSecurityGroupRuleSSH" `
- -Protocol "Tcp" `
- -Direction "Inbound" `
- -Priority 1000 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 22 `
- -Access "Allow"
-
-# Create an inbound network security group rule for port 80
-$nsgRuleWeb = New-AzNetworkSecurityRuleConfig `
- -Name "myNetworkSecurityGroupRuleWWW" `
- -Protocol "Tcp" `
- -Direction "Inbound" `
- -Priority 1001 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 80 `
- -Access "Allow"
-
-# Create a network security group
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName "myResourceGroup" `
- -Location "EastUS" `
- -Name "myNetworkSecurityGroup" `
- -SecurityRules $nsgRuleSSH,$nsgRuleWeb
+New-AzVm `
+ -ResourceGroupName "myResourceGroup" `
+ -Name "myVM" `
+ -Location "East US" `
+ -Image UbuntuLTS `
+ -size Standard_B2s `
+ -PublicIpAddressName myPubIP `
+ -OpenPorts 80,22 `
+ -GenerateSshKey `
+ -SshKeyName mySSHKey
```
-Create a virtual network interface card (NIC) with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface). The virtual NIC connects the VM to a subnet, Network Security Group, and public IP address.
+The output will give you the location of the local copy of the SSH key. For example:
-```azurepowershell-interactive
-# Create a virtual network card and associate with public IP address and NSG
-$nic = New-AzNetworkInterface `
- -Name "myNic" `
- -ResourceGroupName "myResourceGroup" `
- -Location "EastUS" `
- -SubnetId $vnet.Subnets[0].Id `
- -PublicIpAddressId $pip.Id `
- -NetworkSecurityGroupId $nsg.Id
+```output
+Private key is saved to /home/user/.ssh/1234567891
+Public key is saved to /home/user/.ssh/1234567891.pub
```
-## Create a virtual machine
+Make a note of the path to your private key to use later.
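As noted earlier, the SSH key resource can be reused for additional VMs. The following is a hypothetical sketch, not a step in this quickstart: it assumes that passing `-SshKeyName` without `-GenerateSshKey` makes `New-AzVm` use the existing `mySSHKey` resource (verify this against the `New-AzVm` reference); you'll be prompted for a user name and password just like with the first VM.

```azurepowershell-interactive
# Hypothetical sketch: create a second VM that reuses the SSH public key resource
# created above. Omitting -GenerateSshKey is assumed to reuse the existing key.
New-AzVm `
    -ResourceGroupName "myResourceGroup" `
    -Name "myVM2" `
    -Location "East US" `
    -Image UbuntuLTS `
    -Size Standard_B2s `
    -PublicIpAddressName myPubIP2 `
    -OpenPorts 80,22 `
    -SshKeyName mySSHKey
```

You would still connect to the second VM with the same private key file that was downloaded when the key pair was generated.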
-To create a VM in PowerShell, you create a configuration that has settings like the image to use, size, and authentication options. Then the configuration is used to build the VM.
+It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
-Define the SSH credentials, OS information, and VM size. In this example, the SSH key is stored in `~/.ssh/id_rsa.pub`.
-```azurepowershell-interactive
-# Define a credential object
-$securePassword = ConvertTo-SecureString ' ' -AsPlainText -Force
-$cred = New-Object System.Management.Automation.PSCredential ("azureuser", $securePassword)
-
-# Create a virtual machine configuration
-$vmConfig = New-AzVMConfig `
- -VMName "myVM" `
- -VMSize "Standard_D1_v2" | `
-Set-AzVMOperatingSystem `
- -Linux `
- -ComputerName "myVM" `
- -Credential $cred `
- -DisablePasswordAuthentication | `
-Set-AzVMSourceImage `
- -PublisherName "Canonical" `
- -Offer "UbuntuServer" `
- -Skus "18.04-LTS" `
- -Version "latest" | `
-Add-AzVMNetworkInterface `
- -Id $nic.Id
-
-# Configure the SSH key
-$sshPublicKey = cat ~/.ssh/id_rsa.pub
-Add-AzVMSshPublicKey `
- -VM $vmconfig `
- -KeyData $sshPublicKey `
- -Path "/home/azureuser/.ssh/authorized_keys"
-```
+## Connect to the VM
-Now, combine the previous configuration definitions to create with [New-AzVM](/powershell/module/az.compute/new-azvm):
+You need to change the permissions on the SSH key using `chmod`. Replace *~/.ssh/1234567891* in the following example with the private key name and path from the earlier output.
```azurepowershell-interactive
-New-AzVM `
- -ResourceGroupName "myResourceGroup" `
- -Location eastus -VM $vmConfig
+chmod 600 ~/.ssh/1234567891
```
-It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
--
-## Connect to the VM
- Create an SSH connection with the VM using the public IP address. To see the public IP address of the VM, use the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) cmdlet: ```azurepowershell-interactive Get-AzPublicIpAddress -ResourceGroupName "myResourceGroup" | Select "IpAddress" ```
-Using the same shell you used to create your SSH key pair, paste the the following command into the shell to create an SSH session. Replace *10.111.12.123* with the IP address of your VM.
+Using the same shell you used to create your SSH key pair, paste the following command into the shell to create an SSH session. Replace *~/.ssh/1234567891* in the following example with the private key name and path from the earlier output. Replace *10.111.12.123* with the IP address of your VM and *azureuser* with the name you provided when you created the VM.
```bash
-ssh azureuser@10.111.12.123
+ssh -i ~/.ssh/1234567891 azureuser@10.111.12.123
```
-When prompted, the login user name is *azureuser*. If a passphrase is used with your SSH keys, you need to enter that when prompted.
-- ## Install NGINX To see your VM in action, install the NGINX web server. From your SSH session, update your package sources and then install the latest NGINX package.
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/quick-create-portal.md
Sign in to the Azure portal at https://portal.azure.com.
1. Type **virtual machines** in the search. 1. Under **Services**, select **Virtual machines**.
-1. In the **Virtual machines** page, select **Create** then **Virtual machine**.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+ 1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name. ![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
virtual-network Public Ip Upgrade Classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-upgrade-classic.md
In this section, you'll use the Azure PowerShell Service Management module to mi
```azurepowershell-interactive $validate = Move-AzureReservedIP -ReservedIPName 'myReservedIP' -Validate
-$validate.ValidationMessages
+$validate
```
-The previous command displays any warnings and errors that block migration. If validation is successful, you can continue with the following steps to **Prepare** and **Commit** the migration:
+The previous command displays the result of the operation or any warnings and errors that block migration. If validation is successful, you can continue with the following steps to **Prepare** and **Commit** the migration:
```azurepowershell-interactive Move-AzureReservedIP -ReservedIPName 'myReservedIP' -Prepare
A new resource group in Azure Resource Manager is created using the name of the
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP - Azure portal](./create-public-ip-portal.md)
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 12/15/2021 Last updated : 01/18/2022