Updates from: 07/26/2022 01:09:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Previously updated : 06/23/2021 Last updated : 07/20/2022
# Application and service principal objects in Azure Active Directory
-This article describes application registration, application objects, and service principals in Azure Active Directory (Azure AD): what they are, how they're used, and how they are related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
+This article describes application registration, application objects, and service principals in Azure Active Directory (Azure AD): what they are, how they're used, and how they're related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
## Application registration
-To delegate Identity and Access Management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you're creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app in the Azure portal, you choose whether it's a single tenant (only accessible in your tenant) or multi-tenant (accessible in other tenants) and can optionally set a redirect URI (where the access token is sent to). For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
+To delegate identity and access management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you're creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app in the Azure portal, you choose whether it's a [single tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), or [multi-tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), and can optionally set a [redirect URI](reply-url.md). For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
-When you've completed the app registration, you have a globally unique instance of the app (the application object) which lives within your home tenant or directory. You also have a globally unique ID for your app (the app or client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
+When you've completed the app registration, you have a globally unique instance of the app (the application object) which lives within your home tenant or directory. You also have a globally unique ID for your app (the app or client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
-If you register an application in the portal, an application object as well as a service principal object are automatically created in your home tenant. If you register/create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
+If you register an application in the portal, an application object and a service principal object are automatically created in your home tenant. If you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
## Application object
-An Azure AD application is defined by its one and only application object, which resides in the Azure AD tenant where the application was registered (known as the application's "home" tenant). An application object is used as a template or blueprint to create one or more service principal objects. A service principal is created in every tenant where the application is used. Similar to a class in object-oriented programming, the application object has some static properties that are applied to all the created service principals (or application instances).
+An Azure AD application is defined by its one and only application object, which resides in the Azure AD tenant where the application was registered (known as the application's "home" tenant). An application object is used as a template or blueprint to create one or more service principal objects. A service principal is created in every tenant where the application is used. Similar to a class in object-oriented programming, the application object has some static properties that are applied to all the created service principals (or application instances).
-The application object describes three aspects of an application: how the service can issue tokens in order to access the application, resources that the application might need to access, and the actions that the application can take.
+The application object describes three aspects of an application:
-You can use the **App registrations** blade in the [Azure portal][AZURE-Portal] to list and manage the application objects in your home tenant.
+- How the service can issue tokens in order to access the application
+- The resources that the application might need to access
+- The actions that the application can take
+
+You can use the **App registrations** page in the [Azure portal][azure-portal] to list and manage the application objects in your home tenant.
![App registrations blade](./media/app-objects-and-service-principals/app-registrations-blade.png)
-The Microsoft Graph [Application entity][MS-Graph-App-Entity] defines the schema for an application object's properties.
+The Microsoft Graph [Application entity][ms-graph-app-entity] defines the schema for an application object's properties.
## Service principal object
To access resources that are secured by an Azure AD tenant, the entity that requ
There are three types of service principal:

-- **Application** - The type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. In this case, a service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+- **Application** - The type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. In this case, a service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
- When an application is given permission to access resources in a tenant (upon registration or consent), a service principal object is created. When you register an application using the Azure portal, a service principal is created automatically. You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.
+ When an application is given permission to access resources in a tenant (upon registration or consent), a service principal object is created. When you register an application using the Azure portal, a service principal is created automatically. You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.
-- **Managed identity** - This type of service principal is used to represent a [managed identity](../managed-identities-azure-resources/overview.md). Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. When a managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but cannot be updated or modified directly.
+- **Managed identity** - This type of service principal is used to represent a [managed identity](../managed-identities-azure-resources/overview.md). Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. When a managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but can't be updated or modified directly.
-- **Legacy** - This type of service principal represents a legacy app, which is an app created before app registrations were introduced or an app created through legacy experiences. A legacy service principal can have credentials, service principal names, reply URLs, and other properties that an authorized user can edit, but does not have an associated app registration. The service principal can only be used in the tenant where it was created.
+- **Legacy** - This type of service principal represents a legacy app, which is an app created before app registrations were introduced or an app created through legacy experiences. A legacy service principal can have credentials, service principal names, reply URLs, and other properties that an authorized user can edit, but doesn't have an associated app registration. The service principal can only be used in the tenant where it was created.
-The Microsoft Graph [ServicePrincipal entity][MS-Graph-Sp-Entity] defines the schema for a service principal object's properties.
+The Microsoft Graph [ServicePrincipal entity][ms-graph-sp-entity] defines the schema for a service principal object's properties.
-You can use the **Enterprise applications** blade in the Azure portal to list and manage the service principals in a tenant. You can see the service principal's permissions, user consented permissions, which users have done that consent, sign in information, and more.
+You can use the **Enterprise applications** page in the Azure portal to list and manage the service principals in a tenant. You can see the service principal's permissions, the permissions users have consented to, which users have granted that consent, sign-in information, and more.
![Enterprise apps blade](./media/app-objects-and-service-principals/enterprise-apps-blade.png)

## Relationship between application objects and service principals
-The application object is the *global* representation of your application for use across all tenants, and the service principal is the *local* representation for use in a specific tenant. The application object serves as the template from which common and default properties are *derived* for use in creating corresponding service principal objects.
+The application object is the _global_ representation of your application for use across all tenants, and the service principal is the _local_ representation for use in a specific tenant. The application object serves as the template from which common and default properties are _derived_ for use in creating corresponding service principal objects.
An application object has:

-- A 1:1 relationship with the software application, and
-- A 1:many relationship with its corresponding service principal object(s).
+- A one-to-one relationship with the software application, and
+- A one-to-many relationship with its corresponding service principal object(s)
A service principal must be created in each tenant where the application is used, enabling it to establish an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application registration. A multi-tenant application also has a service principal created in each tenant where a user from that tenant has consented to its use.
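As a minimal sketch of this relationship, a service principal can be created in a tenant from an existing application object using the Azure CLI. The GUID below is a placeholder, not a real application ID; substitute the app (client) ID of your own application object:

```azurecli
# Placeholder app (client) ID - replace with your application object's client ID
az ad sp create --id 00000000-0000-0000-0000-000000000000
```

Running this in a consuming tenant creates the local service principal that references the globally unique application object in the home tenant.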
The following diagram illustrates the relationship between an application's appl
In this example scenario:
-| Step | Description |
-||-|
-| 1 | Is the process of creating the application and service principal objects in the application's home tenant. |
+| Step | Description |
+| - | -- |
+| 1 | The process of creating the application and service principal objects in the application's home tenant. |
| 2 | When Contoso and Fabrikam administrators complete consent, a service principal object is created in their company's Azure AD tenant and assigned the permissions that the administrator granted. Also note that the HR app could be configured/designed to allow consent by users for individual use. |
-| 3 | The consumer tenants of the HR application (Contoso and Fabrikam) each have their own service principal object. Each represents their use of an instance of the application at runtime, governed by the permissions consented by the respective administrator. |
+| 3 | The consumer tenants of the HR application (Contoso and Fabrikam) each have their own service principal object. Each represents their use of an instance of the application at runtime, governed by the permissions consented by the respective administrator. |
## Next steps
Learn how to create a service principal:
- [Using Microsoft Graph](/graph/api/serviceprincipal-post-serviceprincipals) and then use [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to query both the application and service principal objects.

<!--Reference style links -->
-[MS-Graph-App-Entity]: /graph/api/resources/application
-[MS-Graph-Sp-Entity]: /graph/api/resources/serviceprincipal
-[AZURE-Portal]: https://portal.azure.com
+
+[ms-graph-app-entity]: /graph/api/resources/application
+[ms-graph-sp-entity]: /graph/api/resources/serviceprincipal
+[azure-portal]: https://portal.azure.com
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
na
Last updated 05/16/2021-+
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
na
Last updated 01/05/2022-+
In Azure AD entitlement management, you can see who has been assigned to access
To use Azure AD entitlement management and assign users to access packages, you must have one of the following licenses:
+
+- Azure AD Premium P2
+- Enterprise Mobility + Security (EMS) E5 license
To use Azure AD entitlement management and assign users to access packages, you
## View assignments programmatically ### View assignments with Microsoft Graph
-You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if user is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API.
+You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/entitlementmanagement-list-accesspackageassignments?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if the user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API to retrieve assignments across all catalogs.
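The filter above is combined with the beta endpoint into a request URL. The sketch below only composes that URL (the GUID is the example value from the text; in a real request the query string must be URL-encoded and the call authorized with a token carrying one of the permissions above):

```shell
# Compose the Graph request URL for listing assignments in one access package.
# The access package ID is the illustrative GUID from the text above.
base="https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments"
filter="\$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'"
url="${base}?${filter}"
echo "$url"
```

Callers holding only catalog-specific roles must include such a filter; tenant-wide roles and application permissions may omit it.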
### View assignments with PowerShell
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
na
Last updated 07/11/2022-+
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
na
Last updated 12/15/2021-+
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
na
Last updated 03/24/2022-+
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
na
Last updated 07/01/2021-+
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
na
Last updated 9/20/2021-+
In Azure AD entitlement management, you can see who has requested access package
If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you can retry those requests by using the [reprocess functionality](entitlement-management-reprocess-access-package-requests.md).
-### View assignments with Microsoft Graph
-You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/entitlementmanagement-list-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). You can supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the application permission `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can also use this API.
+### View requests with Microsoft Graph
+You can also retrieve requests for an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignmentRequests](/graph/api/entitlementmanagement-list-accesspackageassignmentrequests?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access package requests from multiple catalogs, if the user or application service principal is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$expand=accessPackage&$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API to retrieve requests across all catalogs.
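As with assignments, the `$expand` and `$filter` options combine with the beta endpoint into a request URL. This sketch only composes the URL (the GUID is the example from the text; encode the query string before sending):

```shell
# Compose the Graph request URL for listing requests for one access package,
# expanding the accessPackage relationship. GUID is the illustrative example above.
base="https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests"
query="\$expand=accessPackage&\$filter=accessPackage/id eq '9bbe5f7d-f1e7-4eb1-a586-38cdf6f8b1ea'"
url="${base}?${query}"
echo "$url"
```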
## Remove request (Preview)
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
na
Last updated 12/14/2020-+
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
na
Last updated 10/26/2021-+
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
na
Last updated 8/31/2021-+
This article shows you how to create and manage a catalog of resources and acces
## Create a catalog
-A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. A user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add more catalog owners.
+A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. A user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add more users, groups of users, or application service principals as catalog owners.
**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
na
Last updated 07/6/2021-+
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
na
Last updated 7/6/2021-+
After delegation, the marketing department might have roles similar to the follo
## Entitlement management roles
-Entitlement management has the following roles that are specific to entitlement management.
+Entitlement management has the following roles that apply across all catalogs.
| Entitlement management role | Role definition ID | Description |
| --- | --- | --- |
| Catalog creator | `ba92d953-d8e0-4e39-a797-0cbedb0a89e8` | Create and manage catalogs. Typically an IT administrator who isn't a Global administrator, or a resource owner for a collection of resources. The person that creates a catalog automatically becomes the catalog's first catalog owner, and can add more catalog owners. A catalog creator can't manage or see catalogs that they don't own and can't add resources they don't own to a catalog. If the catalog creator needs to manage another catalog or add resources they don't own, they can request to be a co-owner of that catalog or resource. |
-| Catalog owner | `ae79f266-94d4-4dab-b730-feca7e132178` | Edit and manage existing catalogs. Typically an IT administrator or resource owners, or a user who the catalog owner has chosen. |
+
+Entitlement management has the following roles that are defined for each particular catalog. An administrator or a catalog owner can add users, groups of users, or service principals to these roles.
+
+| Entitlement management role | Role definition ID | Description |
+| | | -- |
+| Catalog owner | `ae79f266-94d4-4dab-b730-feca7e132178` | Edit and manage access packages and other resources in a catalog. Typically an IT administrator, a resource owner, or a user whom the catalog owner has chosen. |
| Catalog reader | `44272f93-9762-48e8-af59-1b5351b1d6b3` | View existing access packages within a catalog. |
| Access package manager | `7f480852-ebdc-47d4-87de-0d8498384a83` | Edit and manage all existing access packages within a catalog. |
| Access package assignment manager | `e2182095-804a-4656-ae11-64734e9b7ae5` | Edit and manage all existing access packages' assignments. |
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
na
Last updated 12/23/2020-+
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
na
Last updated 11/02/2020-+
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
na
Last updated 5/19/2021-+
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
na
Last updated 12/11/2020-+
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
na
Last updated 11/23/2020-+
active-directory Entitlement Management Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-process.md
na
Last updated 5/17/2021-+
active-directory Entitlement Management Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md
na
Last updated 12/23/2020-+
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
na
Last updated 06/25/2021-+
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
na
Last updated 06/25/2021-+
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
na
Last updated 3/30/2022-+
active-directory Entitlement Management Request Approve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md
na
Last updated 06/18/2020-+
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
na
Last updated 06/18/2020-+
There are several ways that you can configure entitlement management for your or
## Programmatic administration
-You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with the those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages.
+You can also manage access packages, catalogs, policies, requests, and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of those API functions, with the exception of managing resources in catalogs and access packages. An application that only needs to operate within specific catalogs can be added to the **Catalog owner** or **Catalog reader** role of a catalog, which authorizes it to update or read within that catalog.
## Next steps
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md
na
Last updated 12/23/2020-+
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
Next, you will create an app registration in Azure AD, so that Azure AD will rec
1. Select each of the permissions that your Azure Automation account will require, then select **Add permissions**.
+ * If your runbook is only performing queries or updates within a single catalog, then you do not need to assign it tenant-wide application permissions; instead you can assign the service principal to the catalog's **Catalog owner** or **Catalog reader** role.
* If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
- * If your runbook is making changes to entitlement management, for example to create assignments, then use the **EntitlementManagement.ReadWrite.All** permission.
+ * If your runbook is making changes to entitlement management, for example to create assignments across multiple catalogs, then use the **EntitlementManagement.ReadWrite.All** permission.
 * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.

10. Select **Grant admin permissions** to give your app those permissions.
active-directory How To Assign App Role Managed Identity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md
Title: Assign a managed identity to an application role using Azure CLI - Azure
description: Step-by-step instructions for assigning a managed identity access to another application's role, using Azure CLI. documentationcenter: -+ editor:
In this article, you learn how to assign a managed identity to an application ro
```azurecli appName="{name for your application}"
- serverSPOID=$(az ad sp list --filter "displayName eq 'My App'" --query '[0].objectId' -o tsv | tr -d '[:space:]')
+ serverSPOID=$(az ad sp list --filter "displayName eq '$appName'" --query '[0].id' -o tsv | tr -d '[:space:]')
echo "object id for server service principal is: $serverSPOID" ```
In this article, you learn how to assign a managed identity to an application ro
```azurecli appID="{application id for your application}"
- serverSPOID=$(az ad sp list --filter "appId eq '$appID'" --query '[0].objectId' -o tsv | tr -d '[:space:]')
+ serverSPOID=$(az ad sp list --filter "appId eq '$appID'" --query '[0].id' -o tsv | tr -d '[:space:]')
echo "object id for server service principal is: $serverSPOID" ```
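In both snippets, `--query '[0].id' -o tsv` uses a JMESPath expression to pull the object `id` of the first service principal in the JSON array that the CLI returns. As a rough illustration only (the sample records below are invented, not real `az ad sp list` output), the selection amounts to:

```python
# Rough Python equivalent of applying the JMESPath query '[0].id' to the
# JSON array returned by `az ad sp list`. The records are invented sample
# data for illustration; real output contains many more fields.
sp_list = [
    {"appId": "11111111-2222-3333-4444-555555555555", "id": "aaaa-obj-1"},
    {"appId": "66666666-7777-8888-9999-000000000000", "id": "bbbb-obj-2"},
]

def first_id(records):
    """Return the `id` of the first record, or None if the list is empty."""
    return records[0]["id"] if records else None

print(first_id(sp_list))  # the object id that ends up in $serverSPOID
```

Because `[0]` is taken from whatever the filter matched, an empty result (for example, a misspelled display name) yields no object id, which is why checking the echoed value before using it is worthwhile.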
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
To specify a custom rotation interval, use the `rotation-poll-interval` flag:
az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 5m ```
-To disable autorotation, use the flag `disable-secret-rotation`:
-
-```azurecli-interactive
-az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --disable-secret-rotation
-```
+To disable autorotation, first disable the addon. Then, re-enable the addon without the `enable-secret-rotation` flag.
### Sync mounted content with a Kubernetes secret
app-service Tutorial Networking Isolate Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md
The tutorial continues to use the following environment variables from the previ
1. Create a subnet for the App Service virtual network integration. ```azurecli-interactive
- az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name vnet-integration-subnet --address-prefixes 10.0.0.0/24 --delegations Microsoft.Web/serverfarms
+ az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name vnet-integration-subnet --address-prefixes 10.0.0.0/24 --delegations Microsoft.Web/serverfarms --disable-private-endpoint-network-policies false
``` For App Service, the virtual network integration subnet is recommended to have a CIDR block of `/26` at a minimum (see [Virtual network integration subnet requirements](overview-vnet-integration.md#subnet-requirements)). `/24` is more than sufficient. `--delegations Microsoft.Web/serverfarms` specifies that the subnet is [delegated for App Service virtual network integration](../virtual-network/subnet-delegation-overview.md).
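The `/26` minimum versus the `/24` used in the command can be checked with Python's standard `ipaddress` module. This is just a sketch of the address arithmetic; note that Azure reserves a few addresses in every subnet, so usable counts are somewhat lower than the raw totals:

```python
import ipaddress

# Compare the /24 used in the tutorial with the recommended /26 minimum.
# (Azure reserves a handful of addresses per subnet, so usable counts are
# slightly lower than these raw totals.)
for cidr in ("10.0.0.0/24", "10.0.0.0/26"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net.num_addresses} addresses")
```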
The tutorial continues to use the following environment variables from the previ
1. Create another subnet for the private endpoints. ```azurecli-interactive
- az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name private-endpoint-subnet --address-prefixes 10.0.1.0/24 --disable-private-endpoint-network-policies
+ az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name private-endpoint-subnet --address-prefixes 10.0.1.0/24 --disable-private-endpoint-network-policies true
``` For private endpoint subnets, you must [disable private endpoint network policies](../private-link/disable-private-endpoint-network-policy.md).
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
This is just a subset of the metrics available for Application Gateway. For more
## Azure Monitor Network Insights
-<!-- OPTIONAL SECTION. Only include if your service has an "insight" associated with it. Examples of insights include
- - CosmosDB https://docs.microsoft.com/azure/azure-monitor/insights/cosmosdb-insights-overview
- - If you still aren't sure, contact azmondocs@microsoft.com.>
>- Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
If you are onboarding machines to Azure Arc-enabled servers, copy the following
```yaml - name: Onboard Linux and Windows Servers to Azure Arc-enabled servers with public endpoint connectivity
- hosts: <INSERT-HOSTS>
- vars:
- azure:
- service_principal_id: 'INSERT-SERVICE-PRINCIPAL-CLIENT-ID'
- service_principal_secret: 'INSERT-SERVICE-PRINCIPAL-SECRET'
- resource_group: 'INSERT-RESOURCE-GROUP'
- tenant_id: 'INSERT-TENANT-ID'
- subscription_id: 'INSERT-SUBSCRIPTION-ID'
- location: 'INSERT-LOCATION'
+ hosts: all
+ # vars:
+ # azure:
+ # service_principal_id: 'INSERT-SERVICE-PRINCIPAL-CLIENT-ID'
+ # service_principal_secret: 'INSERT-SERVICE-PRINCIPAL-SECRET'
+ # resource_group: 'INSERT-RESOURCE-GROUP'
+ # tenant_id: 'INSERT-TENANT-ID'
+ # subscription_id: 'INSERT-SUBSCRIPTION-ID'
+ # location: 'INSERT-LOCATION'
tasks:
- - name: Check if the Connected Machine Agent has already been downloaded on Linux servers
- stat:
- path: /usr/bin/azcmagent
- get_attributes: False
- get_checksum: False
- get_mine: azcmagent_downloaded
- register: azcmagent_downloaded
- when: ansible_system == 'Linux'
- - name: Check if the Connected Machine Agent has already been downloaded on Windows servers
- stat:
- path: C:\Program Files\AzureConnectedMachineAgent
- get_attributes: False
- get_checksum: False
- get_mine: azcmagent_downloaded
- register: azcmagent_downloaded
- when: ansible_system == 'Windows'
- - name: Download the Connected Machine Agent on Linux servers
- become: yes
- get_url:
- url: https://aka.ms/azcmagent
- dest: ~/install_linux_azcmagent.sh
- mode: '700'
- when: (ansible_system == 'Linux') and (not azcmagent_downloaded.stat.exists)
- - name: Download the Connected Machine Agent on Windows servers
- win_get_url:
- url: https://aka.ms/AzureConnectedMachineAgent
- dest: C:\AzureConnectedMachineAgent.msi
- when: (ansible_os_family == 'Windows') and (not azcmagent_downloaded.stat.exists)
- - name: Install the Connected Machine Agent on Linux servers
- become: yes
- command:
- cmd: bash ~/install_linux_azcmagent.sh
- when: (ansible_system == 'Linux') and (not azcmagent_downloaded.stat.exists)
- - name: Install the Connected Machine Agent on Windows servers
- win_package:
- path: C:\AzureConnectedMachineAgent.msi
- when: (ansible_os_family == 'Windows') and (not azcmagent_downloaded.stat.exists)
- - name: Check if the Connected Machine Agent has already been connected
- become: true
- command:
- cmd: azcmagent show
- register: azcmagent_connected
- - name: Connect the Connected Machine Agent on Linux servers to Azure Arc
- become: yes
- command:
- cmd: azcmagent connect --service-principal-id {{ azure.service_principal_id }} --service-principal-secret {{ azure.service_principal_secret }} --resource-group {{ azure.resource_group }} --tenant-id {{ azure.tenant_id }} --location {{ azure.location }} --subscription-id {{ azure.subscription_id }}
- when: (azcmagent_connected.rc == 0) and (ansible_system == 'Linux')
- - name: Connect the Connected Machine Agent on Windows servers to Azure
- win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"'
- when: (azcmagent_connected.rc == 0) and (ansible_os_family == 'Windows')
+ - name: Check if the Connected Machine Agent has already been downloaded on Linux servers
+ stat:
+ path: /usr/bin/azcmagent
+ get_attributes: False
+ get_checksum: False
+ register: azcmagent_lnx_downloaded
+ when: ansible_system == 'Linux'
+
+ - name: Download the Connected Machine Agent on Linux servers
+ become: yes
+ get_url:
+ url: https://aka.ms/azcmagent
+ dest: ~/install_linux_azcmagent.sh
+ mode: '700'
+ when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists)
+
+ - name: Install the Connected Machine Agent on Linux servers
+ become: yes
+ shell: bash ~/install_linux_azcmagent.sh
+ when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists)
+
+ - name: Check if the Connected Machine Agent has already been downloaded on Windows servers
+ win_stat:
+ path: C:\Program Files\AzureConnectedMachineAgent
+ register: azcmagent_win_downloaded
+ when: ansible_os_family == 'Windows'
+
+ - name: Download the Connected Machine Agent on Windows servers
+ win_get_url:
+ url: https://aka.ms/AzureConnectedMachineAgent
+ dest: C:\AzureConnectedMachineAgent.msi
+ when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists)
+
+ - name: Install the Connected Machine Agent on Windows servers
+ win_package:
+ path: C:\AzureConnectedMachineAgent.msi
+ when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists)
+
+ - name: Check if the Connected Machine Agent has already been connected
+ become: true
+ command:
+ cmd: azcmagent check
+ register: azcmagent_lnx_connected
+ ignore_errors: yes
+ when: ansible_system == 'Linux'
+ failed_when: (azcmagent_lnx_connected.rc not in [ 0, 16 ])
+ changed_when: False
+
+ - name: Check if the Connected Machine Agent has already been connected on windows
+ win_command: azcmagent check
+ register: azcmagent_win_connected
+ when: ansible_os_family == 'Windows'
+ ignore_errors: yes
+ failed_when: (azcmagent_win_connected.rc not in [ 0, 16 ])
+ changed_when: False
+
+ - name: Connect the Connected Machine Agent on Linux servers to Azure Arc
+ become: yes
+ shell: azcmagent connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"
+ when: (ansible_system == 'Linux') and (azcmagent_lnx_connected.rc is defined and azcmagent_lnx_connected.rc != 0)
+
+ - name: Connect the Connected Machine Agent on Windows servers to Azure
+ win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"'
+ when: (ansible_os_family == 'Windows') and (azcmagent_win_connected.rc is defined and azcmagent_win_connected.rc != 0)
``` ## Modify the Ansible playbook
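The connect tasks in the playbook above key off the exit code of `azcmagent check`: codes `0` and `16` are treated as expected (`failed_when`), and `azcmagent connect` runs only when the code is non-zero. A minimal Python sketch of that decision logic, with the exit-code meanings inferred from the playbook's own expressions rather than from agent documentation:

```python
def plan_from_check(rc):
    """Mirror how the playbook handles `azcmagent check` exit codes.

    rc == 0  -> the playbook skips `azcmagent connect`
    rc == 16 -> an expected code; the playbook runs `azcmagent connect`
    other    -> unexpected; `failed_when` fails the play
    """
    if rc not in (0, 16):
        return "fail"
    return "skip-connect" if rc == 0 else "connect"

for code in (0, 16, 1):
    print(code, "->", plan_from_check(code))
```

The `ignore_errors: yes` plus `failed_when` combination is what lets a non-zero code reach the connect task instead of aborting the play.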
azure-cache-for-redis Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/redis-cache-insights-overview.md
To view the utilization and performance of your storage accounts across all of y
1. Search for **Monitor**, and select **Monitor**.
- ![Search box with the word "Monitor" and the Services search result that shows "Monitor" with a speedometer symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/search-monitor.png)
+ ![Search box with the word "Monitor" and the Services search result that shows "Monitor" with a speedometer symbol](../cosmos-db/media/cosmosdb-insights-overview/search-monitor.png)
1. Select **Azure Cache for Redis**. If this option isn't present, select **More** > **Azure Cache for Redis**.
Selecting any of the other tabs for **Performance** or **Operations** opens the
To pin any metric section to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md), select the pushpin symbol in the section's upper right.
-![A metric section with the pushpin symbol highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/pin.png)
+![A metric section with the pushpin symbol highlighted](../cosmos-db/media/cosmosdb-insights-overview/pin.png)
To export your data into an Excel format, select the down arrow symbol to the left of the pushpin symbol.
-![A highlighted export-workbook symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/export.png)
+![A highlighted export-workbook symbol](../cosmos-db/media/cosmosdb-insights-overview/export.png)
To expand or collapse all views in a workbook, select the expand symbol to the left of the export symbol.
-![A highlighted expand-workbook symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/expand.png)
+![A highlighted expand-workbook symbol](../cosmos-db/media/cosmosdb-insights-overview/expand.png)
## Customize Azure Monitor for Azure Cache for Redis Because this experience is built atop Azure Monitor workbook templates, you can select **Customize** > **Edit** > **Save** to save a copy of your modified version into a custom workbook.
-![A command bar with Customize highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/customize.png)
+![A command bar with Customize highlighted](../cosmos-db/media/cosmosdb-insights-overview/customize.png)
Workbooks are saved within a resource group in either the **My Reports** section or the **Shared Reports** section. **My Reports** is available only to you. **Shared Reports** is available to everyone with access to the resource group. After you save a custom workbook, go to the workbook gallery to open it.
-![A command bar with Gallery highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/gallery.png)
+![A command bar with Gallery highlighted](../cosmos-db/media/cosmosdb-insights-overview/gallery.png)
## Troubleshooting
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
About this code:
- * A `ready` event is added to the map, which fires when the map resources finnish loading and the map is ready to be accessed.
+ * A `ready` event is added to the map, which fires when the map resources finish loading and the map is ready to be accessed.
* In the map `ready` event handler, a data source is created to store result data. * A symbol layer is created and attached to the data source. This layer specifies how the result data in the data source should be rendered. In this case, the result is rendered with a dark blue round pin icon, centered over the results coordinate, that allows other icons to overlap. * The result layer is added to the map layers.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|::
-| AlmaLinux | X | X | |
+| AlmaLinux 8.* | X | X | |
| Amazon Linux 2017.09 | | X | | | Amazon Linux 2 | | X | | | CentOS Linux 8 | X <sup>3</sup> | X | |
The following tables list the operating systems that are supported by the Azure
| Red Hat Enterprise Linux Server 7 | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | | | Red Hat Enterprise Linux Server 6.7+ | | X | X |
-| Rocky Linux | X | X | |
+| Rocky Linux 8.* | X | X | |
| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | | SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | | SUSE Linux Enterprise Server 15 SP1 | X | X | |
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
The telemetry is available in the `customEvents` table on the [Application Insig
If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackEvent()`, the sampling process transmitted only one of them. To get a correct count of custom events, use code such as `customEvents | summarize sum(itemCount)`.
+> [!NOTE]
+> `itemCount` has a minimum value of one; the record itself represents an entry.
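As a concrete sketch of the `sum(itemCount)` arithmetic described above (the stored records below are invented for illustration):

```python
# Each stored record stands for `itemCount` original events: sampling kept
# one record out of every `itemCount` calls. Records are invented sample data.
custom_events = [
    {"name": "Checkout", "itemCount": 10},
    {"name": "Checkout", "itemCount": 10},
    {"name": "Checkout", "itemCount": 1},   # not sampled; minimum is 1
]

# Python equivalent of the Kusto query: customEvents | summarize sum(itemCount)
estimated_total = sum(e["itemCount"] for e in custom_events)
raw_row_count = len(custom_events)

print(raw_row_count, estimated_total)  # 3 stored rows represent ~21 events
```

Counting rows instead of summing `itemCount` would undercount by roughly the sampling ratio.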
+ ## GetMetric To learn how to effectively use the `GetMetric()` call to capture locally pre-aggregated metrics for .NET and .NET Core applications, see [Custom metric collection in .NET and .NET Core](./get-metric.md).
The telemetry is available in the `customMetrics` table in [Application Insights
* `valueSum`: The sum of the measurements. To get the mean value, divide by `valueCount`. * `valueCount`: The number of measurements that were aggregated into this `trackMetric(..)` call.
+> [!NOTE]
+> `valueCount` has a minimum value of one; the record itself represents an entry.
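The same reconstruction works for `trackMetric` aggregates: the mean is `valueSum / valueCount`. A minimal sketch with invented measurement values:

```python
# trackMetric-style local aggregation: keep a running sum and count, then
# recover the mean by dividing. Measurements are invented sample data.
measurements = [12.0, 15.0, 9.0, 24.0]

value_count = len(measurements)     # valueCount
value_sum = sum(measurements)       # valueSum
mean = value_sum / value_count      # valueSum / valueCount

print(value_count, value_sum, mean)
```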
+ ## Page views In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or blades, you might want to track a page whenever the user opens a new blade.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Download the [applicationinsights-agent-3.3.1.jar](https://github.com/microsoft/
> [!WARNING] >
-> If you're upgrading from 3.2.x to 3.3.1:
+> If you're upgrading from 3.2.x:
>
-> - Starting from 3.3.1, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
> - Exception records are no longer recorded for failed dependencies, they are only recorded for failed requests. > > If you're upgrading from 3.1.x:
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Instrumentation key overrides allow you to override the [default instrumentation
## Cloud role name overrides (preview)
-This feature is in preview, starting from 3.3.1.
+This feature is in preview, starting from 3.3.0.
Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name), for example: * Set one cloud role name for one http path prefix `/myapp1`.
These are the valid `level` values that you can specify in the `applicationinsig
### LoggingLevel
-Starting from version 3.3.1, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is aleady captured in the `SeverityLevel` field.
+Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
If needed, you can re-enable the previous behavior:
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from version 3.3.1, you can capture request and response headers on your server (request) telemetry:
+Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.3.1, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.3.0, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.3.1
+> Vertx HTTP Library instrumentation is available starting from version 3.3.0
## Metric interval
When sending telemetry to the Application Insights service fails, Application In
to disk and continue retrying from disk. The default limit for disk persistence is 50 Mb. If you have high telemetry volume, or need to be able to recover from
-longer network or ingestion service outages, you can increase this limit starting from version 3.3.1:
+longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
```json {
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
npm install @microsoft/applicationinsights-angularplugin-js
Set up an instance of Application Insights in the entry component in your app: + ```js import { Component } from '@angular/core'; import { ApplicationInsights } from '@microsoft/applicationinsights-web';
export class AppComponent {
){ var angularPlugin = new AngularPlugin(); const appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
extensions: [angularPlugin], extensionConfig: { [angularPlugin.identifier]: { router: this.router }
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
react-native link react-native-device-info
To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance. + ```typescript import { ApplicationInsights } from '@microsoft/applicationinsights-web'; import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
var RNPlugin = new ReactNativePlugin(); var appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
extensions: [RNPlugin] } });
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
npm install @microsoft/applicationinsights-react-js @microsoft/applicationinsigh
Initialize a connection to Application Insights: + ```javascript import React from 'react'; import { ApplicationInsights } from '@microsoft/applicationinsights-web';
const browserHistory = createBrowserHistory({ basename: '' });
var reactPlugin = new ReactPlugin(); var appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
extensions: [reactPlugin], extensionConfig: { [reactPlugin.identifier]: { history: browserHistory }
For `react-router v6` or other scenarios where router history is not exposed, ap
var reactPlugin = new ReactPlugin(); var appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
enableAutoRouteTracking: true, extensions: [reactPlugin] }
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
To address the problems introduced by sampling pre-aggregated metrics are used i
## Frequently asked questions
+*Does sampling affect alerting accuracy?*
+* Yes. Alerts can trigger only on sampled data. Aggressive filtering may result in alerts not firing as expected.
+
+> [!NOTE]
+> Sampling is not applied to Metrics, but Metrics can be derived from sampled data. In this way sampling may indirectly affect alerting accuracy.
+ *What is the default sampling behavior in the ASP.NET and ASP.NET Core SDKs?* * If you are using one of the latest versions of the above SDK, Adaptive Sampling is enabled by default with five telemetry items per second.
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
If you want to disable telemetry conditionally and dynamically, you may resolve
## Frequently asked questions
+### Which package should I use?
+
+| .NET Core app scenario | Package |
+|||
+| Without HostedServices | AspNetCore |
+| With HostedServices | AspNetCore (not WorkerService) |
+| With HostedServices, monitoring only HostedServices | WorkerService (rare scenario) |
+
+### Can HostedServices inside a .NET Core app using the AspNetCore package have TelemetryClient injected into them?
+
+* Yes. The config will be shared with the rest of the web application.
+ ### How can I track telemetry that's not automatically collected? Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` instances. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
For the latest updates and bug fixes, [consult the release notes](./release-note
## Next steps * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
-* [Track additional dependencies not automatically tracked](./auto-collect-dependencies.md).
+* [Track more dependencies not automatically tracked](./auto-collect-dependencies.md).
* [Enrich or Filter auto collected telemetry](./api-filtering-sampling.md). * [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
While you should avoid sending duplicate data to multiple workspaces because of
### Data access control When you grant a user [access to a workspace](manage-access.md#azure-rbac), they have access to all data in that workspace. This is appropriate for a member of a central administration or security team who must access data for all resources. Access to the workspace is also determined by resource-context RBAC and table-level RBAC.
-Resource-context RBAC](manage-access.md#access-mode)
+[Resource-context RBAC](manage-access.md#access-mode)
By default, if a user has read access to an Azure resource, they inherit permissions to any of that resource's monitoring data sent to the workspace. This allows users to access information about resources they manage without being granted explicit access to the workspace. If you need to block this access, you can change the [access control mode](manage-access.md#access-control-mode) to require explicit workspace permissions. - **If you want users to be able to access data for their resources**, keep the default access control mode of *Use resource or workspace permissions*.
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | GA (General availability) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, to troubleshoot sign-in failures, and to identify legacy authentications. | | [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health |
-| [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
+| [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. | | [Azure Data Explorer insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status|
The following table lists Azure services and the data they collect into Azure Mo
| [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/IotHubs | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesiothubs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesiothubs) | | | | [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml) | Microsoft.Devices/ProvisioningServices | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesprovisioningservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices) | | | | [Azure Digital Twins](../digital-twins/overview.md) | Microsoft.DigitalTwins/digitalTwinsInstances | [**Yes**](./essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdigitaltwinsdigitaltwinsinstances) | | |
- | [Azure Cosmos DB](../cosmos-db/index.yml) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | |
+ | [Azure Cosmos DB](../cosmos-db/index.yml) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | |
| [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/domains | [**Yes**](./essentials/metrics-supported.md#microsofteventgriddomains) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgriddomains) | | |
| [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/eventSubscriptions | [**Yes**](./essentials/metrics-supported.md#microsofteventgrideventsubscriptions) | No | | |
| [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/extensionTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridextensiontopics) | No | | |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 07/11/2022 Last updated : 07/25/2022
# Solution architectures using Azure NetApp Files
This section provides solutions for Azure platform services.
* [Magento e-commerce platform in Azure Kubernetes Service (AKS)](/azure/architecture/example-scenario/magento/magento-azure)
* [Protecting Magento e-commerce platform in AKS against disasters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-magento-e-commerce-platform-in-aks-against-disasters/ba-p/3285525)
* [Protecting applications on private Azure Kubernetes Service clusters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-applications-on-private-azure-kubernetes-service/ba-p/3289422)
+* [Providing Disaster Recovery to CloudBees-Jenkins in AKS with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/providing-disaster-recovery-to-cloudbees-jenkins-in-aks-with/ba-p/3553412)
### Azure Red Hat OpenShift
azure-sql-edge Tutorial Deploy Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
Deploy the Azure resources required by this Azure SQL Edge tutorial. These can b
14. Get the device primary connection string. This will be needed later for the VM. The following command uses Azure CLI for deployments.

    ```powershell
- $deviceConnectionString = az iot hub device-identity show-connection-string --device-id $EdgeDeviceId --hub-name $IoTHubName --resource-group $ResourceGroup --subscription $SubscriptionName
+ $deviceConnectionString = az iot hub device-identity connection-string show --device-id $EdgeDeviceId --hub-name $IoTHubName --resource-group $ResourceGroup --subscription $SubscriptionName
    $connString = $deviceConnectionString[1].Substring(23,$deviceConnectionString[1].Length-24)
    $connString
    ```
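The substring arithmetic above strips the surrounding JSON syntax from the second line of the az CLI output to recover the bare connection string. Rather than relying on fixed character offsets, the JSON output can be parsed directly. The following Python sketch illustrates this; the output shape and values are illustrative placeholders based on the tutorial's offset arithmetic:

```python
import json

# Example output shape of the az CLI connection-string command
# (the hub, device, and key values below are illustrative placeholders).
az_output = """{
  "connectionString": "HostName=example-hub.azure-devices.net;DeviceId=example-device;SharedAccessKey=abc123="
}"""

# Parse the JSON instead of slicing by character position.
conn_string = json.loads(az_output)["connectionString"]
print(conn_string)
```

Parsing the JSON keeps the extraction robust if the CLI ever changes its output indentation or key ordering.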
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
This article describes the monitoring data generated by Azure Video Indexer. Azu
<!--The **Overview** page in the Azure portal for each *Azure Video Indexer account* includes *[provide a description of the data in the Overview page.]*.
-## *Azure Video Indexer* insights
-
-<!-- OPTIONAL SECTION. Only include if your service has an "insight" associated with it. Examples of insights include
- - CosmosDB https://docs.microsoft.com/azure/azure-monitor/insights/cosmosdb-insights-overview
- - If you still aren't sure, contact azmondocs@microsoft.com.>
>
+## *Azure Video Indexer* insights -->
Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)
description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 05/12/2022 Last updated : 07/21/2022
# Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)
There are three options for configuring your reserved Public IP down to the NSX Edge.
A Source Network Address Translation (SNAT) service with Port Address Translation (PAT) is used to allow many VMs to share one SNAT service. This means you can provide internet connectivity for many VMs.
+>[!IMPORTANT]
+> To enable SNAT for your specified address ranges, you must [configure a gateway firewall rule](#gateway-firewall-used-to-filter-traffic-to-vms-at-t1-gateways) and SNAT for the specific address ranges you desire. If you don't want SNAT enabled for specific address ranges, you must create a [No-NAT rule](#no-nat-rule-for-specific-address-ranges) for the address ranges to exclude. For this functionality to work as expected, make the No-NAT rule a higher priority than the SNAT rule.
+ **Add rule**
1. From your Azure VMware Solution private cloud, select **vCenter Credentials**.
2. Locate your NSX-T URL and credentials.
3. Log in to **VMware NSX-T**.
4. Navigate to **NAT Rules**.
5. Select the T1 Router.
-1. select **ADD NAT RULE**.
+1. Select **ADD NAT RULE**.
**Configure rule**
1. Enter a name.
1. Select **SNAT**.
-1. Optionally enter a source such as a subnet to SNAT or destination.
+1. Optionally, enter a source such as a subnet to SNAT or destination.
1. Enter the translated IP. This IP is from the range of Public IPs you reserved from the Azure VMware Solution Portal.
-1. Optionally give the rule a higher priority number. This prioritization will move the rule further down the rule list to ensure more specific rules are matched first.
+1. Optionally, give the rule a higher priority number. This prioritization will move the rule further down the rule list to ensure more specific rules are matched first.
1. Click **SAVE**. Logging can be enabled by way of the logging slider.

For more information on NSX-T NAT configuration and options, see the [NSX-T NAT Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-7AD2C384-4303-4D6C-A44A-DEF45AA18A92.html)
+## No NAT rule for specific address ranges
+
+A No NAT rule excludes matching traffic from Network Address Translation. You can use this policy to allow private IP traffic to bypass the NAT rule.
+
+1. From your Azure VMware Solution private cloud, select **vCenter Credentials**.
+2. Locate your NSX-T URL and credentials.
+3. Log in to **VMware NSX-T** and then select **NAT Rules**.
+1. Select the T1 Router and then select **ADD NAT RULE**.
+1. Select **SAVE**.
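The priority interplay described in the note above can also be expressed through the NSX-T Policy API. The following JSON fragment is an illustrative sketch only (rule IDs, networks, and sequence numbers are hypothetical): the No NAT rule is given a lower `sequence_number`, and therefore a higher priority, than the SNAT rule, so matching private traffic bypasses SNAT:

```json
{
  "nat_rules": [
    {
      "id": "no-nat-private",
      "action": "NO_SNAT",
      "source_network": "10.0.1.0/24",
      "destination_network": "10.0.2.0/24",
      "sequence_number": 10
    },
    {
      "id": "snat-internet",
      "action": "SNAT",
      "source_network": "10.0.1.0/24",
      "translated_network": "20.51.0.10",
      "sequence_number": 100
    }
  ]
}
```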
### Inbound Internet Access for VMs

A Destination Network Address Translation (DNAT) service is used to expose a VM on a specific public IP address and/or a specific port. This service provides inbound internet access to your workload VMs.
The VM is now exposed to the internet on the specific Public IP and/or specific ports.
### Gateway Firewall used to filter traffic to VMs at T1 Gateways
-You can provide security protection for your network traffic in and out of the public Internet through your Gateway Firewall.
-1. From your Azure VMware Solution Private Cloud, select **VMware credentials**
+You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall.
+1. From your Azure VMware Solution Private Cloud, select **VMware credentials**.
2. Locate your NSX-T URL and credentials.
3. Log in to **VMware NSX-T**.
4. From the NSX-T home screen, select **Gateway Policies**.
cognitive-services Create Publish Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/create-publish-knowledge-base.md
When you are done creating the resource in the Azure portal, return to the QnA M
|--|--|
|**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
|**Multi-turn default text**| Select an option|
- |**+ Add URL**|`https://www.microsoft.com/en-us/software-download/faq`|
+ |**+ Add URL**|`https://www.microsoft.com/download/faq.aspx`|
|**Chit-chat**|Select **Professional**|

7. In **Step 5**, select **Create your KB**.
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
When preparing your text file, make sure it:
* For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces into different paragraphs. See [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt).

> [!NOTE]
-> When using SSML text, be sure to use the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements) except the `audio` and `mstts:backgroundaudio` elements. The `audio` and `mstts:backgroundaudio` elements are not supported by Long Audio API. The `audio` element will be ignored without any error message. The `mstts:backgroundaudio` element will cause the systhesis task failure. If your synthesis task fails, download the audio result (.zip file) and check the error report with suffix name "err.txt" within the zip file for details.
+> When using SSML text, be sure to use the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements), except the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements are not supported by the Long Audio API. The `audio` and `lexicon` elements will be ignored without any error message. The `mstts:backgroundaudio` element will cause the synthesis task to fail. If your synthesis task fails, download the audio result (.zip file) and check the error report with the suffix "err.txt" within the zip file for details.
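As a small illustration of that failure-inspection step, the Python sketch below builds an in-memory stand-in for the downloaded result and scans it for error reports ending in "err.txt". The archive contents and file name are invented for the example:

```python
import io
import zipfile

# Build an in-memory stand-in for the downloaded audio result (.zip file);
# the report name and message below are illustrative.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("task-123.err.txt", "SSML element 'mstts:backgroundaudio' is not supported.")

# Inspect the archive for error reports with the "err.txt" suffix.
with zipfile.ZipFile(buf) as zf:
    reports = [name for name in zf.namelist() if name.endswith("err.txt")]
    for name in reports:
        print(name, "->", zf.read(name).decode("utf-8"))
```

The same loop works unchanged on a real result file by replacing `buf` with the downloaded zip's path.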
## Sample code
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The custom lexicon currently supports UTF-8 encoding.
> [!NOTE]
> The custom lexicon feature may not work for some new locales.
+>
+> The `lexicon` element is not supported by the [Long Audio API](long-audio-api.md).
**Syntax**
Any audio included in the SSML document must meet these requirements:
* The audio must not contain any customer-specific or other sensitive information.

> [!NOTE]
-> The 'audio' element is not supported by the Long Audio API.
+> The 'audio' element is not supported by the [Long Audio API](long-audio-api.md).
**Syntax**
If the background audio provided is shorter than the text-to-speech or the fade
Only one background audio file is allowed per SSML document. You can intersperse `audio` tags within the `voice` element to add more audio to your SSML document.
-> [!NOTE]
-> The `mstts:backgroundaudio` element is not supported by the Long Audio API.
- > [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements, i.e., the first child of the `speak` element.
+>
+> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](long-audio-api.md).
**Syntax**
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
The multi-service resource is named **Cognitive Services** in the portal. The mu
* **Speech** - Speech
* **Vision** - Computer Vision, Custom Vision, Face
-1. You can select this link to create an Azure Cognitive multi-service resource: [Create a Cognitive Services resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne).
+1. You can select this link to create an Azure Cognitive multi-service resource: [Create a Cognitive Services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne).
1. On the **Create** page, provide the following information:
### [Decision](#tab/decision)

1. You can select one of these links to create a Decision resource:
- - [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector)
- - [Content Moderator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator)
- - [Metrics Advisor](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesMetricsAdvisor)
- - [Personalizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer)
+ - [Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector)
+ - [Content Moderator](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator)
+ - [Metrics Advisor](https://portal.azure.com/#create/Microsoft.CognitiveServicesMetricsAdvisor)
+ - [Personalizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer)
1. On the **Create** page, provide the following information:
### [Language](#tab/language)

1. You can select one of these links to create a Language resource:
- - [Immersive reader](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader)
- - [Language Understanding (LUIS)](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne)
- - [Language service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics)
- - [Translator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation)
- - [QnA Maker](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker)
+ - [Immersive reader](https://portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader)
+ - [Language Understanding (LUIS)](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne)
+ - [Language service](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics)
+ - [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation)
+ - [QnA Maker](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker)
1. On the **Create** page, provide the following information:
### [Speech](#tab/speech)
-1. You can select this link to create a Speech resource: [Speech Services](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
+1. You can select this link to create a Speech resource: [Speech Services](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
1. On the **Create** page, provide the following information:
### [Vision](#tab/vision)

1. You can select one of these links to create a Vision resource:
- - [Computer vision](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
- - [Custom vision service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision)
- - [Face](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ - [Computer vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Custom vision service](https://portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
1. On the **Create** page, provide the following information:
1. From the quickstart pane that opens, you can access the resource endpoint and manage keys. <!--
-1. If you missed the previous steps or need to find your resource later, go to the [Azure services](https://ms.portal.azure.com/#home) home page. From here you can view recent resources, select **My resources**, or use the search box to find your resource by name.
+1. If you missed the previous steps or need to find your resource later, go to the [Azure services](https://portal.azure.com/#home) home page. From here you can view recent resources, select **My resources**, or use the search box to find your resource by name.
:::image type="content" source="media/cognitive-services-apis-create-account/home-my-resources.png" alt-text="Find resource keys from home screen"::: -->
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
The following limits are observed for the custom named entity recognition.
|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
|Count of entity types | 1 | 200 |
|Entity length in characters | 1 | 500 |
-|Count of trained models per project| 0 | 50 |
+|Count of trained models per project| 0 | 10 |
|Count of deployments per project| 0 | 10 |

## Naming limits
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
The following limits are observed for the custom text classification.
|Documents count | 10 | 100,000 |
|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
|Count of classes | 1 | 200 |
-|Count of trained models per project| 0 | 50 |
+|Count of trained models per project| 0 | 10 |
|Count of deployments per project| 0 | 10 |

## Naming limits
cognitive-services Use Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/use-kubernetes-service.md
This procedure requires several tools that must be installed and run locally. Do
app: keyphrase-app ```
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
+ 1. Save the file, and close the text editor. 1. Run the Kubernetes `apply` command with the *keyphrase.yaml* file as its target:
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
For large data files, we recommend you import from Azure Blob. Large files can b
The following python code will create a sample dataset and show how to upload a file and print the returned ID. Make sure to save the IDs returned as you'll need them for the fine-tuning training job creation.
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
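One minimal way to follow that advice, sketched here with an illustrative environment variable name, is to load the key from the environment at startup rather than embedding it in source:

```python
import os

def get_api_key(env_var: str = "AZURE_OPENAI_KEY") -> str:
    """Read the API key from the environment instead of hardcoding it.

    The variable name is illustrative; use whatever your deployment defines.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running.")
    return key
```

In production, the environment variable itself would typically be populated from a secret store such as Azure Key Vault rather than committed configuration.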
+ ```python import openai from openai import cli
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
The Azure Communication Services SMS SDK uses the following error codes to help
| 4006 | The Destination/To number isn't reachable| Try resending the message at a later time |
| 4007 | The Destination/To number has opted out of receiving messages from you| Mark the Destination/To number as opted out so that no further message attempts are made to the number|
| 4008 | You've exceeded the maximum number of messages allowed for your profile| Ensure you aren't exceeding the maximum number of messages allowed for your number or use queues to batch the messages |
+| 4009 | Message rejected by the Microsoft Entitlement System| This most often happens when fraudulent activity is detected. Contact support for more details |
| 5000 | Message failed to deliver. Please reach out to the Microsoft support team for more details| File a support request through the Azure portal |
| 5001 | Message failed to deliver due to temporary unavailability of application/system| |
| 5002 | Message Delivery Timeout| Try resending the message |
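A hedged sketch of how a client might branch on these delivery error codes follows; the mapping restates the table above, and the function name and action labels are illustrative, not part of the SMS SDK:

```python
# Illustrative retry policy derived from the SMS error-code table above.
RETRYABLE_CODES = {4006, 5002}        # transient: resend later
OPT_OUT_CODES = {4007}                # mark recipient as opted out
CONTACT_SUPPORT_CODES = {4009, 5000}  # file a support request

def next_action(error_code: int) -> str:
    """Map a delivery error code to a suggested follow-up action."""
    if error_code in RETRYABLE_CODES:
        return "retry"
    if error_code in OPT_OUT_CODES:
        return "mark_opted_out"
    if error_code in CONTACT_SUPPORT_CODES:
        return "contact_support"
    return "inspect"

print(next_action(4006))  # retry
```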
confidential-computing Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/application-development.md
With Azure confidential computing, you can create application enclaves for virtu
Application enclaves are isolated environments that protect specific code and data. When creating enclaves, you must determine what part of the application runs within the enclave. When you create or manage enclaves, be sure to use compatible SDKs and frameworks for the chosen deployment stack.
-Confidential computing currently offers application enclaves. Specifically, you can deploy and develop with application enclaves using [confidential VMs with Intel SGX enabled](virtual-machine-solutions-sgx.md).
-
-## Intel SGX
-
-With Intel SGX technology, you can encrypt application enclaves, or Trusted Execution Environments, with an inaccessible key stored within the CPU. Decryption of the code and data inside the enclave happens inside the processor. Only the CPU has access. This level of isolation protects data-in-use and protects against both hardware and software attacks. For more information, see the [Intel SGX website](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html).
-
-Azure offers Intel SGX in a virtualization environment through various VM sizes in the DC series. Multiple VM sizes allow for various Enclave Page Cache (EPC) sizes. EPC is the maximum amount of memory area for an enclave on that VM. Currently, Intel SGX VMs are available on [DCsv2-Series](../virtual-machines/dcv2-series.md) VMs and [DCsv3/DCdsv3-series](../virtual-machines/dcv3-series.md) VMs.
-
+You can develop and deploy application enclaves using [confidential VMs with Intel SGX enabled](virtual-machine-solutions-sgx.md).
### Developing applications
As you design an application, identify and determine what part of the application needs to run in enclaves.
## Next steps -- [Deploy a confidential computing Intel SGX VM](quick-create-portal.md) - [Start developing applications with open-source software](enclave-development-oss.md)
confidential-computing Choose Confidential Containers Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md
Last updated 11/01/2021-+
-# Choosing confidential container offerings
+# Choosing container compute offerings for confidential computing
-Azure confidential computing offers multiple types of confidential containers. You can use these containers to support data integrity and confidentiality, and code integrity.
+Azure confidential computing offers multiple types of containers with varying tiers of confidentiality. You can use these containers to support data integrity and confidentiality, and code integrity.
Confidential containers also help with code protection through encryption. You can create hardware-based assurances and hardware root of trust. You can also lower your attack surface area with confidential containers.
-## Enclaves confidential containers
+The following diagram will guide you through the different offerings in this portfolio.
+++
+## Links to container compute offerings
+
+**Azure Container Instances with Confidential containers (AMD SEV-SNP)** is the first serverless offering that helps protect your container deployments with confidential computing through AMD SEV-SNP technology. Read more on the product [here](https://aka.ms/ccacipreview).
-You can deploy confidential containers with enclaves. This method of container deployments has the strongest security and compute isolation, with a lower Trusted Computing Base (TCB). Confidential containers based on Intel Software Guard Extensions (SGX) that run in the hardware-based Trusted Execution Environment (TEE) are available. These containers support lifting and shifting your existing container apps. Another option is to allow building custom apps with enclave awareness.
There are two programming and deployment models on Azure Kubernetes Service (AKS).
+<!-- You can deploy containers with confidential application enclaves. This method of container deployments has the strongest security and compute isolation, with a lower Trusted Computing Base (TCB). Confidential containers based on Intel Software Guard Extensions (SGX) that run in the hardware-based Trusted Execution Environment (TEE) are available. These containers support lifting and shifting your existing container apps. Another option is to allow building custom apps with enclave awareness. -->
**Unmodified containers** support higher programming languages on Intel SGX through the Azure Partner ecosystem of OSS projects. For more information, see the [unmodified containers deployment flow and samples](./confidential-containers.md). **Enclave-aware containers** use a custom Intel SGX programming model. For more information, see the [the enclave-aware containers deployment flow and samples](./enclave-aware-containers.md).
-![Diagram of enclave confidential containers with Intel SGX, showing isolation and security boundaries.](./media/confidential-containers/confidential-container-intel-sgx.png)
+<!-- ![Diagram of enclave confidential containers with Intel SGX, showing isolation and security boundaries.](./media/confidential-containers/confidential-container-intel-sgx.png) -->
## Learn more
confidential-computing Confidential Computing Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-enclaves.md
-# Build with SGX enclaves
+# SGX enclaves
-Azure confidential computing offers [DCsv2-series](../virtual-machines/dcv2-series.md) and [DCsv3/DCdsv3-series](../virtual-machines/dcv3-series.md)* virtual machines (VMs). These VMs have [Intel® Software Guard Extensions (SGX)](https://intel.com/sgx).
-
-Intel SGX technology allows customers to create enclaves that protect data, and keep data encrypted while the CPU processes the data. The operating system (OS) and hypervisor can't access the data. Data center administrators with physical access also can't access the data.
-
-## Enclaves concept
+Intel SGX technology allows customers to create enclaves that protect data, and keep data encrypted while the CPU processes the data.
Enclaves are secured portions of the hardware's processor and memory. You can't view data or code inside the enclave, even with a debugger. If untrusted code tries to change content in enclave memory, SGX disables the environment and denies the operations. These unique capabilities help you protect your secrets from being accessible in the clear.
Enclaves are secured portions of the hardware's processor and memory. You can't
Think of an enclave as a secured lockbox. You put encrypted code and data inside the lockbox. From the outside, you can't see anything. You give the enclave a key to decrypt the data. The enclave processes and re-encrypts the data, before sending the data back out.
-Each enclave has an encrypted page cache (EPC) with a set size. The EPC determines the amount of memory that an enclave can hold. [DCsv2-series](../virtual-machines/dcv2-series.md) VMs hold up to 168 MiB. [DCsv3/DCdsv3-series](../virtual-machines/dcv3-series.md)* VMs hold up to 256 GB for more memory-intensive workloads.
-> [!NOTE]
-> *DCsv3 and DCdsv3 are in **public preview** as of November 1, 2021.
+Azure confidential computing offers [DCsv2-series](../virtual-machines/dcv2-series.md) and [DCsv3/DCdsv3-series](../virtual-machines/dcv3-series.md) virtual machines (VMs). These VMs have support for [Intel® Software Guard Extensions (SGX)](https://intel.com/sgx).
+
+Each enclave has an encrypted page cache (EPC) with a set size. The EPC determines the amount of memory that an enclave can hold. [DCsv2-series](../virtual-machines/dcv2-series.md) VMs hold up to 168 MiB. [DCsv3/DCdsv3-series](../virtual-machines/dcv3-series.md) VMs hold up to 256 GB for more memory-intensive workloads.
-For more information, see [how to deploy Intel SGX VMs with hardware-based trusted enclaves](virtual-machine-solutions-sgx.md).
## Developing for enclaves You can use various [software tools for developing applications that run in enclaves](application-development.md). These tools help you shield portions of your code and data inside the enclave. Make sure nobody outside your trusted environment can view or modify your data with these tools. ## Next Steps-- [Deploy a DCsv2 or DCsv3/DCdsv3-series virtual machine](quick-create-portal.md)-- [Develop an enclave-aware application](application-development.md) using the OE SDK
+- [Develop an enclave-aware application](application-development.md)
+- [Deploy an Intel SGX VM](quick-create-portal.md)
confidential-computing Confidential Containers Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-enclaves.md
+
+ Title: Confidential containers with Intel SGX enclaves on Azure
+description: Learn about unmodified container support with confidential containers on Intel SGX through OSS and partner solutions
+++ Last updated : 7/15/2022+++++
+# Confidential containers on Azure Kubernetes Service (AKS) with Intel SGX enclaves
+
+[Confidential containers](confidential-containers.md) help you run existing, unmodified container applications in most **common programming language** runtimes (Python, Node, Java, etc.) inside the Intel SGX-based Trusted Execution Environment (TEE).
+This packaging model typically doesn't need any source-code modifications or recompilation and is the fastest method to run in Intel SGX enclaves. The typical deployment process for running your standard Docker containers requires an open-source SGX wrapper or an Azure partner solution.
+In this packaging and execution model, each container application is loaded into the trusted boundary (enclave) with hardware-based isolation enforced by the Intel SGX CPU. Each container running in an enclave receives its own memory encryption key delivered by the Intel SGX CPU.
+This model works well for off-the-shelf container applications available in the market or custom apps currently running on general-purpose nodes.
+To run an existing Docker container, applications on confidential computing nodes require Intel Software Guard Extensions (SGX) wrapper software to help the container execute within the bounds of the special CPU instruction set.
+SGX creates a direct execution path to the CPU to remove the guest operating system (OS), host OS, and hypervisor from the trust boundary. This step reduces the overall attack surface and vulnerabilities while achieving process-level isolation within a single node.
+
+The overall process for running unmodified containers involves changes to how your container is packaged today, as detailed below.
++
+The SGX wrapper software needed to help run standard containers is offered by Azure software partners or open-source software (OSS) solutions.
+
+## Partner enablers
+
+Developers can choose software providers based on their features, integration with Azure services and tooling support.
+
+> [!IMPORTANT]
+> Solutions from Azure software partners might involve licensing fees on top of your Azure infrastructure costs. Verify all partner software terms independently.
+
+### Fortanix
+
+[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/). Create confidential containers using Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
+
+![Diagram of Fortanix deployment process, showing steps to move applications to confidential containers and deploy.](./media/confidential-containers/fortanix-confidential-containers-flow.png)
+
+### SCONE (Scontain)
+
+[SCONE](https://scontain.com/) (Scontain) security policies generate certificates, keys, and secrets. Only services with attestation for an application see these credentials. Application services automatically do attestation for each other through TLS. You don't need to modify the applications or TLS. For more explanation, see SCONE's [Flask application demo](https://sconedocs.github.io/flask_demo/).
+
+SCONE can convert most existing binaries into applications that run inside enclaves. SCONE also protects interpreted languages like Python by encrypting both data files and Python code files. You can use SCONE security policies to protect encrypted files against unauthorized access, modifications, and rollbacks. For more information, see SCONE's documentation on [how to use SCONE with an existing Python application](https://sconedocs.github.io/sconify_image/).
+
+![Diagram of SCONE workflow, showing how SCONE processes binary images.](./media/confidential-containers/scone-workflow.png)
+
+You can deploy SCONE on Azure confidential computing nodes with AKS by following this [SCONE sample AKS application deployment](https://sconedocs.github.io/aks/).
+
+### Anjuna
+
+[Anjuna](https://www.anjuna.io/) provides SGX platform software to run unmodified containers on AKS. For more information, see Anjuna's [documentation about functionality and sample applications](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
+
+Get started with a sample Redis cache and Python custom application [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
+
+![Diagram of Anjuna's process, showing how containers are run on Azure confidential computing.](media/confidential-containers/anjuna-process-flow.png)
+
+## OSS enablers
+
+> [!NOTE]
+> Azure confidential computing and Microsoft aren't directly affiliated with these projects and solutions.
+
+### Gramine
+
+[Gramine](https://grapheneproject.io/) is a lightweight guest OS, designed to run a single Linux application with minimal host requirements. Gramine can run applications in an isolated environment. There's tooling support for converting existing Docker containers to SGX-ready containers.
+
+For more information, see Gramine's [sample application and deployment on AKS](https://github.com/gramineproject/contrib/tree/master/Examples/aks-attestation).
+
+### Occlum
+
+[Occlum](https://occlum.io/) is a memory-safe, multi-process library OS (LibOS) for Intel SGX. The OS enables legacy applications to run on SGX with little to no modifications to source code. Occlum transparently protects the confidentiality of user workloads while allowing an easy "lift and shift" to existing Docker applications.
+
+For more information, see Occlum's [deployment instructions and sample apps on AKS](https://github.com/occlum/occlum/blob/master/docs/azure_aks_deployment_guide.md).
+
+### Marblerun
+
+[Marblerun](https://marblerun.sh/) is an orchestration framework for confidential containers. You can run and scale confidential services on SGX-enabled Kubernetes. Marblerun takes care of boilerplate tasks like verifying the services in your cluster, managing secrets for them, and establishing enclave-to-enclave mTLS connections between them. Marblerun also ensures that your cluster of confidential containers adheres to a manifest defined in simple JSON. You can verify the manifest with external clients through remote attestation.
+
+This framework extends the confidentiality, integrity, and verifiability properties of a single enclave to a Kubernetes cluster.
+
+Marblerun supports confidential containers created with Graphene, Occlum, and EGo, with [examples for each SDK](https://docs.edgeless.systems/marblerun/#/examples?id=examples). The framework runs on Kubernetes alongside your existing cloud-native tooling. There's a CLI and helm charts. Marblerun also supports confidential computing nodes on AKS. Follow Marblerun's [guide to deploy Marblerun on AKS](https://docs.edgeless.systems/marblerun/#/deployment/cloud?id=cloud-deployment).
+
+## Confidential Containers reference architectures
+
+- [Confidential data messaging for healthcare reference architecture and sample with Intel SGX confidential containers](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md).
+- [Confidential big-data processing with Apache Spark on AKS with Intel SGX confidential containers](/azure/architecture/example-scenario/confidential/data-analytics-containers-spark-kubernetes-azure-sql).
+
+## Get in touch
+
+Do you have questions about your implementation? Do you want to become an enabler for confidential containers? Send an email to <acconaks@microsoft.com>.
+
+## Next steps
+
+- [Deploy AKS cluster with Intel SGX Confidential VM Nodes](./confidential-enclave-nodes-aks-get-started.md)
+- [Microsoft Azure Attestation](../attestation/overview.md)
+- [Intel SGX Confidential Virtual Machines](virtual-machine-solutions-sgx.md)
+- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Title: Confidential containers on Azure
-description: Learn about unmodified container support with confidential containers.
+description: Learn about lift-and-shift support for unmodified containers with confidential containers.
Previously updated : 11/04/2021 Last updated : 7/15/2022
# Confidential containers on Azure
-Azure confidential computing offers confidential containers. There are multiple [options you can choose for confidential containers](choose-confidential-containers-offerings.md). Secured and isolated environments with attestation, improve the overall security of your container deployments.
+Confidential containers provide a set of features and capabilities to further secure your standard container workloads by running them in a Trusted Execution Environment (TEE) for higher data security. Azure offers a portfolio of these capabilities through the different confidential container options discussed below.
-A hardware-based Trusted Execution Environment (TEE) provides strong assurances. A TEE provides hardware and software measurements from trusted computing base (TCB) components. Confidential containers offerings on Azure allow verification of these measurements and validate if the container applications run in a verifiable execution environment.
+## Benefits
+Confidential containers on Azure run within enclave-based or VM-based TEE environments. Both deployment models help achieve high isolation and memory encryption through hardware-based assurances. Confidential computing can enhance your deployment security posture in the Azure cloud by protecting your memory space through encryption.
-Confidential containers support custom applications developed with any programming languages. You can also run Docker containers off the shelf.
+Below are the qualities of confidential containers:
+- Allows running existing standard container images with no code changes (lift-and-shift) within a TEE
+- Allows establishing a hardware root of trust through remote guest attestation
+- Provides strong assurances of data confidentiality, code integrity and data integrity in a cloud environment
+- Helps isolate your containers from other container groups/pods, as well as from the VM node's OS kernel
-## Enablers with Intel SGX on Azure Kubernetes Service(AKS)
+## VM-isolated confidential containers on Azure Container Instances (ACI) - Private Preview
+Confidential containers on the ACI platform leverage VM-based trusted execution environments (TEEs) based on AMD's SEV-SNP technology. The TEE provides memory encryption and integrity of the utility VM's address space, as well as hardware-level isolation from other container groups, the host operating system, and the hypervisor. The Root-of-Trust (RoT), which is responsible for managing the TEE, provides support for remote attestation, including issuing an attestation report that a relying party can use to verify that the utility VM was created and configured on a genuine AMD SEV-SNP CPU. Read more about the product [here](https://aka.ms/ccacipreview).
- To run an existing Docker container, applications on confidential computing nodes require an abstraction layer or Intel Software Guard Extensions (SGX) software to use the special CPU instruction set. Configure SGX to protect your sensitive application code. SGX creates a direct execution to the CPU to remove the guest operating system (OS), host OS, or hypervisor from the trust boundary. This step reduces the overall surface attack areas and vulnerabilities.
+## Confidential containers in an Intel SGX enclave through OSS or partner software
+Azure Kubernetes Service (AKS) supports adding [Intel SGX confidential computing VM nodes](confidential-computing-enclaves.md) as agent pools in a cluster. These nodes allow you to run sensitive workloads within a hardware-based TEE. TEEs allow user-level code from containers to allocate private regions of memory that execute directly with the CPU. These private memory regions are called enclaves. Enclaves help protect data confidentiality, data integrity, and code integrity from other processes running on the same nodes, as well as from the Azure operator. The Intel SGX execution model also removes the intermediate layers of the guest OS, host OS, and hypervisor, thus reducing the attack surface area. The *hardware-based, per-container isolated execution* model in a node allows applications to execute directly with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy. Learn more about this capability [here](confidential-containers-enclaves.md).
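As a provisioning sketch (resource group and cluster names below are hypothetical), an AKS cluster with Intel SGX nodes and the confidential computing add-on can be created with the Azure CLI:

```azurecli
# Create an AKS cluster with a DCsv3-series (Intel SGX) node pool and the
# confidential computing (confcom) add-on, which deploys the SGX device plugin.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-vm-size Standard_DC4s_v3 \
    --node-count 2 \
    --enable-addons confcom \
    --generate-ssh-keys
```

In practice, many deployments keep a general-purpose system node pool and add the SGX-capable pool separately so that only confidential workloads are scheduled onto enclave nodes.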
-Azure Kubernetes Service (AKS) fully supports confidential containers. You can run existing containers confidentially on AKS.
-## Partner enablers
+## Questions?
-You can enable confidential containers in Azure Partners and Open Source Software (OSS) projects. Developers can choose software providers based on their features, integration with Azure services and tooling support.
-
-> [!IMPORTANT]
-> Azure Partners offer the following solutions. These solutions might incur licensing fees. Verify all partner software terms independently.
-
-### Fortanix
-
-[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
-
-![Diagram of Fortanix deployment process, showing steps to move applications to confidential containers and deploy.](./media/confidential-containers/fortanix-confidential-containers-flow.png)
-
-### SCONE (Scontain)
-
-[SCONE](https://scontain.com/) (Scontain) security policies generate certificates, keys, and secrets. Only services with attestation for an application see these credentials. Application services automatically do attestation for each other through TLS. You don't need to modify the applications or TLS. For more explanation, see SCONE's [Flask application demo](https://sconedocs.github.io/flask_demo/).
-
-SCONE can convert most existing binaries into applications that run inside enclaves. You don't need to change or recompile the application. SCONE also protects interpreted languages like Python by encrypting both data files and Python code files. You can use SCONE security policies to protect encrypted files against unauthorized access, modifications, and rollbacks. For more information, see SCONE's documentation on [how to use SCONE with an existing Python application](https://sconedocs.github.io/sconify_image/).
-
-![Diagram of SCONE workflow, showing how SCONE processes binary images.](./media/confidential-containers/scone-workflow.png)
-
-You can deploy SCONE on Azure confidential computing nodes with AKS. This process is fully supported and integrated. For more information, see the [SCONE sample AKS application](https://sconedocs.github.io/aks/).
-
-### Anjuna
-
-[Anjuna](https://www.anjuna.io/) provides SGX platform software to run unmodified containers on AKS. For more information, see Anjuna's [documentation about functionality and sample applications](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
-
-Get started with a sample Redis Cache and Python Custom Application [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp)
-
-![Diagram of Anjuna's process, showing how containers are run on Azure confidential computing.](media/confidential-containers/anjuna-process-flow.png)
-
-## OSS enablers
-
-> [!NOTE]
-> Open-source projects offer the following solutions. Azure confidential computing and Microsoft aren't directly affiliated with these projects and solutions.
-
-### Gramine
-
-[Gramine](https://grapheneproject.io/) is a lightweight guest OS, designed to run a single Linux application with minimal host requirements. Gramine can run applications in an isolated environment. There's tooling support for converting existing Docker container applications to Gramine Shielded Containers (GSCs).
-
-For more information, see the Gramine's [sample application and deployment on AKS](https://github.com/gramineproject/contrib/tree/master/Examples/aks-attestation)
-
-### Occlum
-
-[Occlum](https://occlum.io/) is a memory-safe, multi-process library OS (LibOS) for Intel SGX. The OS enables legacy applications to run on SGX with little to no modifications to source code. Occlum transparently protects the confidentiality of user workloads while allowing an easy "lift and shift" to existing Docker applications.
-
-Occlum supports AKS deployments. For more information, see Occlum's [deployment instructions and sample apps](https://github.com/occlum/occlum/blob/master/docs/azure_aks_deployment_guide.md).
-
-### Marblerun
-
-[Marblerun](https://marblerun.sh/) is an orchestration framework for confidential containers. You can run and scale confidential services on SGX-enabled Kubernetes. Marblerun takes care of boilerplate tasks like verifying the services in your cluster, managing secrets for them, and establishing enclave-to-enclave mTLS connections between them. Marblerun also ensures that your cluster of confidential containers adheres to a manifest defined in simple JSON. You can verify the manifest with external clients through remote attestation.
-
-This framework extends the confidentiality, integrity, and verifiability properties of a single enclave to a Kubernetes cluster.
-
-Marblerun supports confidential containers created with Graphene, Occlum, and EGo, with [examples for each SDK](https://docs.edgeless.systems/marblerun/#/examples?id=examples). The framework runs on Kubernetes alongside your existing cloud-native tooling. There's a CLI and helm charts. Marblerun also supports confidential computing nodes on AKS. Follow Marblerun's [guide to deploy Marblerun on AKS](https://docs.edgeless.systems/marblerun/#/deployment/cloud?id=cloud-deployment).
-
-## Confidential Containers reference architectures
-- [Confidential data messaging for healthcare reference architecture and sample with Intel SGX confidential containers](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md).
-- [Confidential big-data processing with Apache Spark on AKS with Intel SGX confidential containers](/azure/architecture/example-scenario/confidential/data-analytics-containers-spark-kubernetes-azure-sql).
-
-## Get in touch
-
-Do you have questions about your implementation? Do you want to become an enabler for confidential containers? Send an email to <acconaks@microsoft.com>.
+If you have questions about container offerings, please reach out to <acconaks@microsoft.com>.
## Next steps
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-overview.md
Title: Confidential computing nodes on Azure Kubernetes Service (AKS)
-description: Confidential computing nodes on AKS
+ Title: Confidential computing application enclave nodes on Azure Kubernetes Service (AKS)
+description: Intel SGX based confidential computing VM nodes with application enclave support
Previously updated : 05/10/2022 Last updated : 07/15/2022
-# Confidential computing nodes on Azure Kubernetes Service
+# Application enclave support with Intel SGX based confidential computing nodes on Azure Kubernetes Service
-[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with a hardware backed trusted execution container environments. Adding confidential computing nodes allow you to target container applications to run in an isolated, hardware protected, integrity protected attestable Trusted Execution Environment(TEE).
+[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. Intel SGX based enclaves allow you to run applications packaged as containers within AKS. Containers running within a Trusted Execution Environment (TEE) gain isolation from other containers and from the node kernel, in a hardware-protected, integrity-protected, attestable environment.
## Overview
Azure Kubernetes Service (AKS) supports adding [Intel SGX confidential computing
:::image type="content" source="./media/confidential-nodes-aks-overview/sgx-aks-node.png" alt-text="Graphic of AKS Confidential Compute Node, showing confidential containers with code and data secured inside.":::
-## AKS Confidential Nodes Features
+## Intel SGX confidential computing node features
-- Hardware based and process level container isolation through Intel SGX trusted execution environment (TEE)
+- Hardware based, process level container isolation through Intel SGX trusted execution environment (TEE)
- Heterogeneous node pool clusters (mix confidential and non-confidential node pools)
-- Encrypted Page Cache (EPC) memory-based pod scheduling (requires add-on)
-- Intel SGX DCAP driver pre-installed
+- Encrypted Page Cache (EPC) memory-based pod scheduling through the "confcom" AKS add-on
+- Intel SGX DCAP driver and kernel dependencies pre-installed
- CPU consumption based horizontal pod autoscaling and cluster autoscaling
- Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes
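A confidential node pool with the features above can be added to an existing cluster along the following lines (resource names are placeholders; DCsv3-series sizes provide the SGX EPC memory):

```azurecli
# Add an Intel SGX node pool to an existing AKS cluster (hypothetical names).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name confcompool \
    --node-vm-size Standard_DC4s_v3 \
    --node-count 2
```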
Enclave applications that do remote attestation need to generate a quote. The qu
## Programming models
-### Confidential Containers
+### Confidential containers through partners and OSS
-[Confidential containers](confidential-containers.md) help you run existing unmodified container applications of most **common programming languages** runtimes (Python, Node, Java etc.) confidentially. This packaging model does not need any source-code modifications or recompilation. This is the fastest method to confidentiality that could be achieved by packaging your standard docker containers with Open-Source Projects or Azure Partner Solutions. In this packaging and execution model all parts of the container application are loaded in the trusted boundary (enclave). This model works well for off the shelf container applications available in the market or custom apps currently running on general purpose nodes.
+[Confidential containers](confidential-containers.md) help you run existing, unmodified container applications built on most **common programming language** runtimes (Python, Node.js, Java, and so on) confidentially. This packaging model doesn't need any source-code modifications or recompilation, and is the fastest method to run in an Intel SGX enclave, achieved by packaging your standard Docker containers with open-source projects or Azure partner solutions. In this packaging and execution model, all parts of the container application are loaded in the trusted boundary (enclave). This model works well for off-the-shelf container applications available in the market, or for custom apps currently running on general-purpose nodes. Learn more about the preparation and deployment process [here](confidential-containers-enclaves.md).
### Enclave aware containers

Confidential computing nodes on AKS also support containers that are programmed to run in an enclave to utilize the **special instruction set** available from the CPU. This programming model allows tighter control of your execution flow and requires use of special SDKs and frameworks. It provides the most control of application flow with the lowest Trusted Computing Base (TCB). Enclave aware container development involves untrusted and trusted parts to the container application, allowing you to manage the regular memory and the Encrypted Page Cache (EPC) memory where the enclave executes. [Read more](enclave-aware-containers.md) on enclave aware containers.
+## Frequently asked questions (FAQ)
+Find answers to some of the common questions about Azure Kubernetes Service (AKS) node pool support for Intel SGX based confidential computing nodes [here](confidential-nodes-aks-faq.yml).
+## Next Steps

[Deploy AKS Cluster with confidential computing nodes](./confidential-enclave-nodes-aks-get-started.md)
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-aware-containers.md
# Enclave Aware Containers with Intel SGX
-An enclave is a protected memory region that provides confidentiality for data and code execution. It is an instance of a Trusted Execution Environment (TEE) which is secured by hardware. Confidential computing VM's support on AKS uses [Intel Software Guard Extensions (SGX)](https://software.intel.com/sgx) to create isolated enclave environments in the nodes between each container application.
+An enclave is a protected memory region that provides confidentiality for data and code execution. It's an instance of a Trusted Execution Environment (TEE), which is secured by hardware. Confidential computing VM support on AKS uses [Intel Software Guard Extensions (SGX)](https://software.intel.com/sgx) to create isolated enclave environments in the nodes between each container application.
Just like Intel SGX virtual machines, container applications that are developed to run in enclaves have two components:
Enclave aware containers application architecture give you the most control on t
## Enablers

### Open Enclave SDK
-[Open Enclave SDK](https://github.com/openenclave/openenclave/tree/master/docs/GettingStartedDocs) is a hardware-agnostic open-source library for developing C, C++ applications that uses Hardware-based Trusted Execution Environments. The current implementation provides support for Intel SGX and preview support for [OP-TEE OS on Arm TrustZone](https://optee.readthedocs.io/en/latest/general/https://docsupdatetracker.net/about.html).
+[Open Enclave SDK](https://github.com/openenclave/openenclave/tree/master/docs/GettingStartedDocs) is a hardware-agnostic open-source library for developing C, C++ applications that use Hardware-based Trusted Execution Environments. The current implementation provides support for Intel SGX and preview support for [OP-TEE OS on Arm TrustZone](https://optee.readthedocs.io/en/latest/general/https://docsupdatetracker.net/about.html).
Get started with an Open Enclave based container application [here](https://github.com/openenclave/openenclave/tree/master/docs/GettingStartedDocs).
confidential-computing Enclave Development Oss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-development-oss.md
Last updated 11/01/2021
This article goes over open-source solutions for building applications that use application enclaves. Before reading, make sure you read the [enclave applications](application-development.md) conceptual page.

## Intel SGX-Compatible Tools
-Azure offers application enclaves via [confidential virtual machines with Intel Software Guard Extensions (SGX) enabled](virtual-machine-solutions-sgx.md). After deploying an Intel SGX virtual machine, you'll need specialized tools to make your application "enclave aware". This way, you can build applications that have both trusted and untrusted portions of code.
+Azure offers application enclaves via [confidential virtual machines with Intel Software Guard Extensions (SGX) enabled](quick-create-portal.md). After deploying an Intel SGX virtual machine, you'll need specialized tools to make your application "enclave aware". This way, you can build applications that have both trusted and untrusted portions of code.
For example, you can use these open-source frameworks:
In the CCF, the decentralized ledger is made up of recorded changes to a Key-Val
## Next steps

-- [Deploy a confidential computing Intel SGX virtual machine](quick-create-portal.md)
+- [Attesting application enclaves](attestation.md)
confidential-computing Overview Azure Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview-azure-products.md
-# Confidential computing on Azure
+# Confidential Computing on Azure
-Today customers encrypt their data at rest and in transit, but not while it is in use in memory. The [Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC), cofounded by Microsoft, defines confidential computing as the protection of data in use using hardware-based [Trusted Execution Environments](https://en.wikipedia.org/wiki/Trusted_execution_environment) (TEEs). These TEEs prevent unauthorized access or modification of applications and data while they are in use, thereby increasing the security level of organizations that manage sensitive and regulated data. The TEEs are a trusted environment that provides a level of assurance of data integrity, data confidentiality, and code integrity. The confidential computing threat model aims at removing or reducing the ability for a cloud provider operator and other actors in the tenant's domain to access code and data while being executed.
+ Azure already offers many tools to safeguard [**data at rest**](../security/fundamentals/encryption-atrest.md) through models such as client-side encryption and server-side encryption. Additionally, Azure offers mechanisms to encrypt [**data in transit**](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit) through secure protocols like TLS and HTTPS. This page introduces a third leg of data encryption: the encryption of **data in use**.
+
-Technologies like [Intel Software Guard Extensions](https://www.intel.com.au/content/www/au/en/architecture-and-technology/software-guard-extensions-enhanced-data-protection.html) (Intel SGX), or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation, for building the confidential computing threat model.
+> [!VIDEO https://www.youtube.com/embed/rT6zMOoLEqI]
-When used with data encryption at rest and in transit, confidential computing eliminates the single largest barrier of encryption - encryption while in use - by protecting sensitive or highly regulated data sets and application workloads in a secure public cloud platform. Confidential computing extends beyond generic data protection. TEEs are also being used to protect proprietary business logic, analytics functions, machine learning algorithms, or entire applications.
+Azure confidential computing makes it easier to trust the cloud provider, by reducing the need for trust across various aspects of the compute cloud infrastructure. Azure confidential computing minimizes trust for the host OS kernel, the hypervisor, the VM admin, and the host admin.
-## Navigating Azure confidential computing
+Azure confidential computing can help you:
-[Microsoft's offerings](https://aka.ms/azurecc) for confidential computing extend from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and as well as developer tools to support your journey to data and code confidentiality in the cloud.
+- **Prevent unauthorized access**: Run sensitive data in the cloud. Trust that Azure provides the best data protection possible, with little to no change from what gets done today.
-## Reducing the attack surface
-The trusted computing base (TCB) refers to all of a system's hardware, firmware, and software components that provide a secure environment. The components inside the TCB are considered "critical". If one component inside the TCB is compromised, the entire system's security may be jeopardized. A lower TCB means higher security. There's less risk of exposure to various vulnerabilities, malware, attacks, and malicious people. Azure confidential computing aims to lower the TCB for your cloud workloads by offering TEEs.
+- **Meet regulatory compliance**: Migrate to the cloud and keep full control of data to satisfy government regulations for protecting personal information and secure organizational IP.
-### Reducing your TCB in Azure
+- **Ensure secure and untrusted collaboration**: Tackle industry-wide work-scale problems by combining data across organizations, even competitors, to unlock broad data analytics and deeper insights.
-When you deploy Azure confidential virtual machines, you can reduce your TCB. For confidential VM deployment solutions running on AMD SEV-SNP, you can lift-and-shift existing workloads and protect data from the cloud operator with VM-level confidentiality. Confidential VMs with Intel SGX application enclaves provides line-of-code control in applications to minimize your TCB and protect data from both cloud operators and your operators. Application enclaves with Intel SGX may require some changes to configuration policies or application code. You can also leverage an Independent Software Vendor (ISV) partner or open source software (OSS) to run your existing apps inside an application enclave.
+- **Isolate processing**: Offer a new wave of products that remove liability on private data with blind processing. User data can't even be retrieved by the service provider.
-### Trust ladder
+## Azure offerings
+Verifying that applications are running confidentially forms the very foundation of confidential computing. This verification is multi-pronged and relies on the following suite of Azure offerings:
-Azure offers different virtual machines for confidential computing IaaS workloads and customers can choose what's best for them depending on their preferred security posture. The "trust ladder" figure shows what customers can expect from a security posture perspective on these IaaS offerings.
+- [Microsoft Azure Attestation](../attestation/overview.md), a remote attestation service for validating the trustworthiness of multiple Trusted Execution Environments (TEEs) and verifying integrity of the binaries running inside the TEEs.
-![Screenshot of the Azure trust ladder, showing enclaves with Intel SGX at the top.](media/overview-azure-products/trust-ladder.png)
+- [Azure Key Vault Managed HSM](../key-vault/managed-hsm/index.yml), a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated Hardware Security Modules (HSM).
-## Azure offerings
+- [Trusted Launch](../virtual-machines/trusted-launch.md) is available across all Generation 2 VMs bringing hardened security features - secure boot, virtual trusted platform module, and boot integrity monitoring - that protect against boot kits, rootkits, and kernel-level malware.
-Our services currently generally available to the public include:
+- [Azure Confidential Ledger](../confidential-ledger/overview.md). ACL is a tamper-proof register for storing sensitive data for record keeping and auditing or for data transparency in multi-party scenarios. It offers Write-Once-Read-Many guarantees, which make data non-erasable and non-modifiable. The service is built on Microsoft Research's [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework/).
-- [Confidential VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.
-- [Enclave aware containers](enclave-aware-containers.md) running on Azure Kubernetes Service (AKS). Confidential computing nodes on AKS use Intel SGX to create isolated enclave environments in the nodes between each container application.
-- [Always Encrypted with secure enclaves in Azure SQL](/sql/relational-databases/security/encryption/always-encrypted-enclaves). The confidentiality of sensitive data is protected from malware and high-privileged unauthorized users by running SQL queries directly inside a TEE when the SQL statement contains any operations on encrypted data that require the use of the secure enclave where the database engine runs.
-- [Microsoft Azure Attestation](../attestation/overview.md), a remote attestation service for validating the trustworthiness of multiple Trusted Execution Environments (TEEs) and verifying integrity of the binaries running inside the TEEs.
-- [Azure Key Vault Managed HSM](../key-vault/managed-hsm/index.yml), a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated Hardware Security Modules (HSM).
 - [Azure IoT Edge](../iot-edge/deploy-confidential-applications.md) supports confidential applications that run within secure enclaves on an Internet of Things (IoT) device. IoT devices are often exposed to tampering and forgery because they're physically accessible by bad actors.
 Confidential IoT Edge devices add trust and integrity at the edge by protecting the access to data captured by and stored inside the device itself before streaming it to the cloud.
-Other services are currently in preview, including:
+- [Always Encrypted with secure enclaves in Azure SQL](/sql/relational-databases/security/encryption/always-encrypted-enclaves). The confidentiality of sensitive data is protected from malware and high-privileged unauthorized users by running SQL queries directly inside a TEE.
++
+Technologies like [Intel Software Guard Extensions](https://www.intel.com.au/content/www/au/en/architecture-and-technology/software-guard-extensions-enhanced-data-protection.html) (Intel SGX) or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation, for building the confidential computing threat model. Azure confidential computing leverages these technologies in the following computation resources:
+
+- [Confidential VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.
+
+- [App-enclave aware containers](enclave-aware-containers.md) running on Azure Kubernetes Service (AKS). Confidential computing nodes on AKS use Intel SGX to create isolated enclave environments in the nodes between each container application.
+
+- Confidential VMs based on [AMD SEV-SNP technology](https://azure.microsoft.com/blog/azure-and-amd-enable-lift-and-shift-confidential-computing/) enable lift-and-shift of existing workloads and protect data from the cloud operator with VM-level confidentiality.
-- Confidential VMs based on [AMD SEV-SNP technology](https://azure.microsoft.com/blog/azure-and-amd-enable-lift-and-shift-confidential-computing/) are currently in preview and available to selected customers.
-- [Trusted Launch](../virtual-machines/trusted-launch.md) is available across all Generation 2 VMs bringing hardened security features - secure boot, virtual trusted platform module, and boot integrity monitoring - that protect against boot kits, rootkits, and kernel-level malware.
-- [Azure Confidential Ledger](../confidential-ledger/overview.md). ACL is a tamper-proof register for storing sensitive data for record keeping and auditing or for data transparency in multi-party scenarios. It offers Write-Once-Read-Many guarantees, which make data non-erasable and non-modifiable. The service is built on Microsoft Research's [Confidential Consortium Framework](https://www.microsoft.com/research/project/confidential-consortium-framework/).
 - [Confidential Inference ONNX Runtime](https://github.com/microsoft/onnx-server-openenclave), a Machine Learning (ML) inference server that restricts the ML hosting party from accessing both the inferencing request and its corresponding response.

 ## Next steps

-- [Learn about application enclave development](application-development.md)
+- [Learn common confidential computing scenarios](use-cases-scenarios.md)
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md
Last updated 11/01/2021

# What is confidential computing?
-Confidential computing allows you to isolate your sensitive data while it's being processed. Many industries use confidential computing to protect their data by using confidential computing to:
-
-- Secure financial data
-- Protect patient information
-- Run machine learning processes on sensitive information
-- Perform algorithms on encrypted data sets from multiple sources
-
-## Overview
-<p><p>
-
+Confidential computing is an industry term defined by the [Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) - a foundation dedicated to defining and accelerating the adoption of confidential computing. The CCC defines confidential computing as: The protection of data in use by performing computations in a hardware-based Trusted Execution Environment (TEE).
-> [!VIDEO https://www.youtube.com/embed/rT6zMOoLEqI]
+A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment. The confidential computing threat model aims at removing or reducing the ability for a cloud provider operator and other actors in the tenant's domain to access code and data while being executed.
-We know that securing your cloud data is important. We hear your concerns. Here's just a few questions that our customers may have when moving sensitive workloads to the cloud:
+<!-- Confidential computing allows you to isolate your sensitive data while it's being processed. Many industries use confidential computing to protect their data by using confidential computing to:
-- How do I make sure Microsoft can't access data that isn't encrypted?
-- How do I prevent security threats from privileged admins inside my company?
-- What are more ways that I can prevent third-parties from accessing sensitive customer data?
+- Run machine learning processes on sensitive information
+- Perform algorithms on encrypted data sets from multiple sources
+- Secure financial data
+- Protect patient information -->
-Azure helps you minimize your attack surface to gain stronger data protection. Azure already offers many tools to safeguard [**data at rest**](../security/fundamentals/encryption-atrest.md) through models such as client-side encryption and server-side encryption. Additionally, Azure offers mechanisms to encrypt [**data in transit**](../security/fundamentals/data-encryption-best-practices.md#protect-data-in-transit) through secure protocols like TLS and HTTPS. This page introduces a third leg of data encryption - the encryption of **data in use**.
-## Introduction to confidential computing
-Confidential computing is an industry term defined by the [Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) - a foundation dedicated to defining and accelerating the adoption of confidential computing. The CCC defines confidential computing as: The protection of data in use by performing computations in a hardware-based Trusted Execution Environment (TEE).
+When used with data encryption at rest and in transit, confidential computing eliminates the single largest barrier of encryption - encryption while in use - by protecting sensitive or highly regulated data sets and application workloads in a secure public cloud platform. Confidential computing extends beyond generic data protection. TEEs are also being used to protect proprietary business logic, analytics functions, machine learning algorithms, or entire applications.
-A TEE is an environment that enforces execution of only authorized code. Any data in the TEE can't be read or tampered with by any code outside that environment.
-### Lessen the need for trust
+## Lessen the need for trust
Running workloads on the cloud requires trust. You give this trust to various providers enabling different components of your application.
+- **App software vendors**: Trust software by deploying on-premises, using open-source, or by building in-house application software.
-**App software vendors**: Trust software by deploying on-prem, using open-source, or by building in-house application software.
-
-**Hardware vendors**: Trust hardware by using on-premises hardware or in-house hardware.
+- **Hardware vendors**: Trust hardware by using on-premises hardware or in-house hardware.
-**Infrastructure providers**: Trust cloud providers or manage your own on-premises data centers.
+- **Infrastructure providers**: Trust cloud providers or manage your own on-premises data centers.
-Azure confidential computing makes it easier to trust the cloud provider, by reducing the need for trust across various aspects of the compute cloud infrastructure. Azure confidential computing minimizes trust for the host OS kernel, the hypervisor, the VM admin, and the host admin.
+## Reducing the attack surface
+The trusted computing base (TCB) refers to all of a system's hardware, firmware, and software components that provide a secure environment. The components inside the TCB are considered "critical". If one component inside the TCB is compromised, the entire system's security may be jeopardized. A lower TCB means higher security. There's less risk of exposure to various vulnerabilities, malware, attacks, and malicious people.
-## Next steps
-Learn about all the confidential computing products on Azure.
+### Next steps
+[Microsoft's offerings](https://aka.ms/azurecc) for confidential computing extend from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), as well as developer tools to support your journey to data and code confidentiality in the cloud.
+Learn more about confidential computing on Azure.
> [!div class="nextstepaction"]
-> [Overview of Azure confidential computing services](overview-azure-products.md)
+> [Overview of Azure Confidential Computing](overview-azure-products.md)
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md
After reading this article, you'll be able to answer the following questions:
- What are some scenarios for Azure confidential computing?
- What are the benefits of using Azure confidential computing for multi-party scenarios, enhanced customer data privacy, and blockchain networks?
-## Motivations
-Azure confidential computing allows you to leverage confidential computing capabilities in a virtualized environment. You can now use tools, software, and cloud infrastructure to build on top of secure hardware.
-
-**Prevent unauthorized access**: Run sensitive data in the cloud. Trust that Azure provides the best data protection possible, with little to no change from what gets done today.
-
-**Regulatory compliance**: Migrate to the cloud and keep full control of data to satisfy government regulations for protecting personal information and secure organizational IP.
-
-**Secure and untrusted collaboration**: Tackle industry-wide work-scale problems by combing data across organizations, even competitors, to unlock broad data analytics and deeper insights.
-
-**Isolated processing**: Offer a new wave of products that remove liability on private data with blind processing. User data cannot even be retrieved by the service provider.
- ## Secure multi-party computation
-Business transactions and project collaboration require sharing information amongst multiple parties. Often, the data being shared is confidential. The data may be personal information, financial records, medical records, private citizen data, etc. Public and private organizations require their data be protected from unauthorized access. Sometimes these organizations even want protect data from computing infrastructure operators or engineers, security architects, business consultants, and data scientists.
+Business transactions and project collaboration require sharing information amongst multiple parties. Often, the data being shared is confidential. The data may be personal information, financial records, medical records, private citizen data, etc. Public and private organizations require their data be protected from unauthorized access. Sometimes these organizations even want to protect data from computing infrastructure operators or engineers, security architects, business consultants, and data scientists.
For example, using machine learning for healthcare services has grown massively as we've obtained access to larger datasets and imagery of patients captured by medical devices. Disease diagnostic and drug development benefit from multiple data sources. Hospitals and health institutes can collaborate by sharing their patient medical records with a centralized trusted execution environment (TEE). Machine learning services running in the TEE aggregate and analyze data. This aggregated data analysis can provide higher prediction accuracy due to training models on consolidated datasets. With confidential computing, the hospitals can minimize risks of compromising the privacy of their patients.
Confidential computing goes in this direction by allowing customers incremental
### Data sovereignty
-In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasingly adoption of confidential computing capabilities into PaaS services in Azure, this higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. This combination of factors makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services.
+In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasing adoption of confidential computing capabilities into PaaS services in Azure, a higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. This combination of protecting data sovereignty with a reduced impact to the innovation ability makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services.
### Reduced chain of trust
-Enormous investment and revolutionary innovation in confidential computing has enabled the removal of the cloud service provider from the trust chain to an unprecedented degree. Azure confidential computing delivers the highest level of sovereignty available in the market today, This allows customer and governments to meet their sovereignty needs today and still leverage innovation tomorrow.
+Enormous investment and revolutionary innovation in confidential computing has enabled the removal of the cloud service provider from the trust chain to an unprecedented degree. Azure confidential computing delivers the highest level of sovereignty available in the market today. This allows customers and governments to meet their sovereignty needs today and still leverage innovation tomorrow.
Confidential computing can expand the number of workloads eligible for public cloud deployment. This can result in a rapid adoption of public services for migrations and new workloads, rapidly improving the security posture of customers, and quickly enabling innovative scenarios.
container-instances Container Instances Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-reference-yaml.md
properties: # Properties of container group
    type: string
    ip: string
    dnsNameLabel: string
+ dnsNameLabelReusePolicy: string
  osType: string
  volumes: # Array of volumes available to the instances
  - name: string
container-instances How To Reuse Dns Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/how-to-reuse-dns-names.md
+
+ Title: Deploy an Azure Container Instances (ACI) container group with DNS name reuse policy
+description: Set the DNS name reuse policy for your container groups to avoid subdomain takeover when you release your DNS names.
+ Last updated : 05/25/2022
+# Deploy an Azure Container Instances (ACI) container group with DNS name reuse policy (preview)
+
+DNS name reuse is convenient for DevOps within any modern company. The idea of redeploying an application by reusing the DNS name fulfills an on-demand philosophy that secures cloud development. However, DNS names that are available to anyone become a problem when one customer releases a name only to have that same name taken by another customer. This is called subdomain takeover. A customer releases a resource using a particular name, and another customer creates a new resource with that same DNS name. If there were any records pointing to the old resource, they now also point to the new resource.
+
+To avoid this, ACI now allows customers to reuse DNS names while preventing DNS names from being reused by different customers. ACI secures DNS names by generating a hash value to associate with the DNS name, making it difficult for another customer to accidentally create an ACI with the same name and get linked to the past customer's ACI information.
+
+> [!IMPORTANT]
+> DNS name reuse policy support is only available on ACI API version `10-01-2021` or later.
+
+## Prerequisites
+
+* An **active Azure subscription**. If you don't have an active Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
+
+* A **resource group** to manage all the resources you use in this how-to guide. We use the example resource group name **ACIResourceGroup** throughout this article.
+
  ```azurecli-interactive
  az group create --name ACIResourceGroup --location westus
  ```
+
+## Understand the DNS name reuse policy
+
+When creating an ACI, you can now choose what level of reuse you want your DNS name label to have.
+
+| Policy name | Policy definition |
+| - | - |
+| unsecure | Hash will be generated based on only the DNS name. Avoiding subdomain takeover is not guaranteed if another customer uses the same DNS name. |
+| tenantReuse | **Default** Hash will be generated based on the DNS name and the tenant ID. Object's domain name label can be reused within the same tenant. |
+| subscriptionReuse | Hash will be generated based on the DNS name and the tenant ID and subscription ID. Object's domain name label can be reused within the same subscription. |
+| resourceGroupReuse | Hash will be generated based on the DNS name and the tenant ID, subscription ID, and resource group name. Object's domain name label can be reused within the same resource group. |
+| noReuse | Hash will not be generated. Object's domain label can't be reused within resource group, subscription, or tenant. |
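
The scoping rules in the table above can be pictured with a toy sketch: hash the DNS label together with the IDs that define its reuse scope, so two deployments only collide when every scoping ID matches. This is purely an illustration of the idea, not ACI's actual hashing scheme, which is internal to the service:

```python
import hashlib

def label_hash(dns_name: str, *scope_ids: str) -> str:
    """Toy illustration: hash the DNS label together with the IDs
    that define its reuse scope (tenant, subscription, resource group)."""
    material = "|".join((dns_name, *scope_ids))
    return hashlib.sha256(material.encode()).hexdigest()[:12]

# tenantReuse: the same label in the same tenant hashes identically,
# so the label can be reused there...
same_tenant = label_hash("myapp", "tenant-1") == label_hash("myapp", "tenant-1")
# ...while a different tenant reusing the label gets a different hash,
# so it can't be linked to the first deployment's DNS records.
other_tenant = label_hash("myapp", "tenant-1") == label_hash("myapp", "tenant-2")
print(same_tenant, other_tenant)  # True False
```

Under `subscriptionReuse` or `resourceGroupReuse`, the subscription ID or resource group name would simply be added to the hashed material, narrowing the scope further.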
+
+## Create a container instance
+
+> [!IMPORTANT]
+> Setting the DNS name reuse policy is not currently supported through the Azure CLI.
+
+The process for deploying your container instance remains the same. For the full process of deploying a container instance, see the quickstart for your preferred deployment method. For example, the [ARM template quickstart](container-instances-quickstart-template.md).
+
+For [Azure portal](https://portal.azure.com) users, you can set the DNS name reuse policy on the **Networking** tab during the container instance creation process using the **DNS name label scope reuse** field.
+
+![Screenshot of DNS name reuse policy dropdown menu, PNG.](./media/how-to-reuse-dns-names/portal-dns-name-reuse-policy.png)
+
+For ARM template users, see the [Resource Manager reference](/azure/templates/microsoft.containerinstance/containergroups.md) to see how the `dnsNameLabelReusePolicy` field fits into the existing schema.
+
+For YAML template users, see the [YAML reference](container-instances-reference-yaml.md) to see how the `dnsNameLabelReusePolicy` field fits into the existing schema.
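
As a minimal sketch of where the field might sit in a container group YAML (the values and the `ipAddress` nesting are illustrative; consult the YAML reference for the authoritative schema):

```yaml
properties:
  ipAddress:
    type: Public
    dnsNameLabel: aci-demo                 # illustrative DNS name label
    dnsNameLabelReusePolicy: tenantReuse   # the default policy
  osType: Linux
```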
+
+## Next steps
+
+See the Azure Quickstart Template [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), to deploy a container group within a virtual network.
cosmos-db Audit Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-control-plane-logs.md
AzureDiagnostics
## Next steps

* [Prevent Azure Cosmos DB resources from being deleted or changed](resource-locks.md)
-* [Explore Azure Monitor for Azure Cosmos DB](../azure-monitor/insights/cosmosdb-insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
+* [Explore Azure Monitor for Azure Cosmos DB](cosmosdb-insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json)
* [Monitor and debug with metrics in Azure Cosmos DB](use-metrics.md)
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
For example, if you have 1 TB of data in two regions then:
* Restore cost is calculated as (1000 \* 0.15) = $150 per restore

> [!TIP]
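
The restore-cost arithmetic above can be sketched as a one-liner (the $0.15/GB restore rate is taken from the example in this section; check the current pricing page before relying on it):

```python
def restore_cost(data_gb: float, rate_per_gb: float = 0.15) -> float:
    """Point-in-time restore cost: data size times the per-GB restore rate."""
    return data_gb * rate_per_gb

# 1 TB (1000 GB) restored at $0.15/GB, as in the example above
print(restore_cost(1000))
```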
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continous 7-day tier does not incur charges for backup of the data.
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continuous 7-day tier does not incur charges for backup of the data.
## Continuous 30-day tier vs Continuous 7-day tier
cosmos-db Cosmosdb Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-insights-overview.md
+
+ Title: Monitor Azure Cosmos DB with Azure Monitor Cosmos DB insights| Microsoft Docs
+description: This article describes the Cosmos DB insights feature of Azure Monitor that provides Cosmos DB owners with a quick understanding of performance and utilization issues with their Cosmos DB accounts.
+ Last updated : 05/11/2020
+# Explore Azure Monitor Cosmos DB insights
+
+Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
+
+## Introduction
+
+Before diving into the experience, you should understand how it presents and visualizes information.
+
+It delivers:
+
+* **At scale perspective** of your Azure Cosmos DB resources across all your subscriptions in a single location, with the ability to selectively scope to only those subscriptions and resources you are interested in evaluating.
+
+* **Drill down analysis** of a particular Azure Cosmos DB resource to help diagnose issues or perform detailed analysis by category - utilization, failures, capacity, and operations. Selecting any one of those options provides an in-depth view of the relevant Azure Cosmos DB metrics.
+
+* **Customizable** - This experience is built on top of Azure Monitor workbook templates allowing you to change what metrics are displayed, modify or set thresholds that align with your limits, and then save into a custom workbook. Charts in the workbooks can then be pinned to Azure dashboards.
+
+This feature does not require you to enable or configure anything; these Azure Cosmos DB metrics are collected by default.
+
+>[!NOTE]
+>There is no charge to access this feature and you will only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+
+## View utilization and performance metrics for Azure Cosmos DB
+
+To view the utilization and performance of your Azure Cosmos DB accounts across all of your subscriptions, perform the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Search for **Monitor** and select **Monitor**.
+
+ ![Search box with the word "Monitor" and a dropdown that says Services "Monitor" with a speedometer style image](./media/cosmosdb-insights-overview/search-monitor.png)
+
+3. Select **Cosmos DB**.
+
+ ![Screenshot of Cosmos DB overview workbook](./media/cosmosdb-insights-overview/cosmos-db.png)
+
+### Overview
+
+On **Overview**, the table displays interactive Azure Cosmos DB metrics. You can filter the results based on the options you select from the following drop-down lists:
+
+* **Subscriptions** - only subscriptions that have an Azure Cosmos DB resource are listed.
+
+* **Cosmos DB** - You can select all, a subset, or single Azure Cosmos DB resource.
+
+* **Time Range** - by default, displays the last 4 hours of information based on the corresponding selections made.
+
+The counter tile under the drop-down lists rolls up the total number of Azure Cosmos DB resources in the selected subscriptions. There is conditional color-coding or heatmaps for columns in the workbook that report transaction metrics. The deepest color indicates the highest value, and lighter colors indicate lower values.
+
+Selecting a drop-down arrow next to one of the Azure Cosmos DB resources will reveal a breakdown of the performance metrics at the individual database container level:
+
+![Expanded drop down revealing individual database containers and associated performance breakdown](./media/cosmosdb-insights-overview/container-view.png)
+
+Selecting the Azure Cosmos DB resource name highlighted in blue will take you to the default **Overview** for the associated Azure Cosmos DB account.
+
+### Failures
+
+Select **Failures** at the top of the page and the **Failures** portion of the workbook template opens. It shows you total requests with the distribution of responses that make up those requests:
+
+![Screenshot of failures with breakdown by HTTP request type](./media/cosmosdb-insights-overview/failures.png)
+
+| Code | Description |
+|--|:--|
| `200 OK` | One of the following REST operations was successful: </br>- GET on a resource. </br> - PUT on a resource. </br> - POST on a resource. </br> - POST on a stored procedure resource to execute the stored procedure.|
+| `201 Created` | A POST operation to create a resource is successful. |
+| `404 Not Found` | The operation is attempting to act on a resource that no longer exists. For example, the resource may have already been deleted. |
+
+For a full list of status codes, consult the [Azure Cosmos DB HTTP status code article](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
+
+### Capacity
+
+Select **Capacity** at the top of the page and the **Capacity** portion of the workbook template opens. It shows you how many documents you have, your document growth over time, data usage, and the total amount of available storage that you have left. This can be used to help identify potential storage and data utilization issues.
+
+![Capacity workbook](./media/cosmosdb-insights-overview/capacity.png)
+
+As with the overview workbook, selecting the drop-down next to an Azure Cosmos DB resource in the **Subscription** column will reveal a breakdown by the individual containers that make up the database.
+
+### Operations
+
+Select **Operations** at the top of the page and the **Operations** portion of the workbook template opens. It gives you the ability to see your requests broken down by the type of requests made.
+
+In the example below, you can see that `eastus-billingint` is predominantly receiving read requests, but with a small number of upsert and create requests, whereas `westeurope-billingint` is read-only from a request perspective, at least over the past four hours that the workbook is currently scoped to via its time range parameter.
+
+![Operations workbook](./media/cosmosdb-insights-overview/operation.png)
+
+## View from an Azure Cosmos DB resource
+
+1. Search for or select any of your existing Azure Cosmos DB accounts.
++
+2. Once you've navigated to your Azure Cosmos DB account, in the Monitoring section select **Insights (preview)** or **Workbooks** to perform further analysis on throughput, requests, storage, availability, latency, system, and account management.
++
+### Time range
+
+By default, the **Time Range** field displays data from the **Last 24 hours**. You can modify the time range to display data anywhere from the last five minutes to the last seven days. The time range selector also includes a **Custom** mode that allows you to type in the start/end dates to view a custom time frame based on available data for the selected account.
++
+### Insights overview
+
+The **Overview** tab provides the most common metrics for the selected Azure Cosmos DB account including:
+
+* Total Requests
+* Failed Requests (429s)
+* Normalized RU Consumption (max)
+* Data & Index Usage
+* Cosmos DB Account Metrics by Collection
+
+**Total Requests:** This graph provides a view of the total requests for the account broken down by status code. The units at the bottom of the graph are a sum of the total requests for the period.
++
+**Failed Requests (429s)**: This graph provides a view of failed requests with a status code of 429. The units at the bottom of the graph are a sum of the total failed requests for the period.
++
+**Normalized RU Consumption (max)**: This graph shows the maximum normalized RU consumption percentage (0-100%) for the specified period.
++
+## Pin, export, and expand
+
+You can pin any one of the metric sections to an [Azure Dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin icon at the top right of the section.
+
+![Metric section pin to dashboard example](./media/cosmosdb-insights-overview/pin.png)
+
+To export your data into the Excel format, select the down arrow icon to the left of the pushpin icon.
+
+![Export workbook icon](./media/cosmosdb-insights-overview/export.png)
+
+To expand or collapse all drop-down views in the workbook, select the expand icon to the left of the export icon:
+
+![Expand workbook icon](./media/cosmosdb-insights-overview/expand.png)
+
+## Customize Cosmos DB insights
+
+Since this experience is built on top of Azure Monitor workbook templates, you can select **Customize** > **Edit**, and then **Save** a copy of your modified version as a custom workbook.
+
+![Customize bar](./media/cosmosdb-insights-overview/customize.png)
+
+Workbooks are saved within a resource group, either in the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
+
+![Launch workbook gallery from command bar](./media/cosmosdb-insights-overview/gallery.png)
+
+## Troubleshooting
+
+For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+## Next steps
+
+* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues.
+
+* Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
cosmos-db How To Choose Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-choose-offer.md
Use the Azure Cosmos DB [capacity calculator](estimate-ru-with-capacity-planner.
### Existing applications ###
-If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](../azure-monitor/insights/cosmosdb-insights-overview.md) to determine if your traffic pattern is suitable for autoscale.
+If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](cosmosdb-insights-overview.md) to determine if your traffic pattern is suitable for autoscale.
First, find the [normalized request unit consumption metric](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) of your database or container. Normalized utilization is a measure of how much you are currently using your standard (manual) provisioned throughput. The closer the number is to 100%, the more you are fully using your provisioned RU/s. [Learn more](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) about the metric.
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-get-started.md
+
+ Title: Get started with Azure Cosmos DB MongoDB API and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB MongoDB API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB MongoDB API database.
++++
+ms.devlang: dotnet
+ Last updated : 07/22/2022+++
+# Get started with Azure Cosmos DB MongoDB API and .NET Core
+
+This article shows you how to connect to Azure Cosmos DB MongoDB API using .NET Core and the relevant NuGet packages. Once connected, you can perform operations on databases, collections, and documents.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET Core project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
++
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [.NET 6.0](https://dotnet.microsoft.com/en-us/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+* [Azure Cosmos DB MongoDB API resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
+
+## Create a new .NET Core app
+
+1. Create a new .NET Core application in an empty folder using your preferred terminal. For this scenario you'll use a console application. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command to create and name the console app.
+
+ ```console
+ dotnet new console -o app
+ ```
+
+2. Add the [MongoDB](https://www.nuget.org/packages/MongoDB.Driver) NuGet package to the console project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+
+ ```console
+ dotnet add package MongoDB.Driver
+ ```
+
+3. To run the app, use a terminal to navigate to the application directory and run the application.
+
+ ```console
+ dotnet run
+ ```
+
+## Connect to Azure Cosmos DB MongoDB API with the MongoDB native driver
+
+To connect to Azure Cosmos DB with the MongoDB native driver, create an instance of the [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_MongoClient.htm) class. This class is the starting point to perform all operations against MongoDB databases. The most common constructor for **MongoClient** accepts a connection string, which you can retrieve using the following steps:
+
+## Get resource name
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+Skip this step and use the information for the portal in the next step.
+++
+## Retrieve your connection string
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
++++
+## Configure environment variables
++
+## Create MongoClient with connection string
+
+Define a new instance of the ``MongoClient`` class using the constructor and the connection string variable you set previously.
++
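As a minimal sketch, assuming the connection string was stored in an environment variable named ``COSMOS_CONNECTION_STRING`` (an assumed name; substitute whatever variable you configured earlier):

```csharp
using MongoDB.Driver;

// Read the connection string from an environment variable so the
// credential isn't hard-coded in source.
string connectionString =
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING");

// MongoClient is the entry point for all operations against the account.
MongoClient client = new MongoClient(connectionString);
```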
+## Use the MongoDB client classes with Cosmos DB for MongoDB API
++
+Each type of resource is represented by one or more associated C# classes. Here's a list of the most common classes:
+
+| Class | Description |
+|||
+|[``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)|This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.|
+|[``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)|This class is a reference to a database that may, or may not, exist in the service yet. The database is validated or created server-side when you attempt to perform an operation against it.|
+|[``MongoCollection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm)|This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.|
+
+The following guides show you how to use each of these classes to build your application and manage data.
+
+**Guide**:
+
+* [Manage databases](how-to-dotnet-manage-databases.md)
+* [Manage collections](how-to-dotnet-manage-collections.md)
+* [Manage documents](how-to-dotnet-manage-documents.md)
+* [Use queries to find documents](how-to-dotnet-manage-queries.md)
+
+## See also
+
+- [Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+- [API reference](https://docs.mongodb.com/drivers/csharp)
+
+## Next steps
+
+Now that you've connected to a MongoDB API account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB MongoDB API using .NET](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-collections.md
+
+ Title: Create a collection in Azure Cosmos DB MongoDB API using .NET
+description: Learn how to work with a collection in your Azure Cosmos DB MongoDB API database using the .NET SDK.
++++
+ms.devlang: dotnet
+ Last updated : 07/22/2022+++
+# Manage a collection in Azure Cosmos DB MongoDB API using .NET
++
+Manage your MongoDB collection stored in Cosmos DB with the native MongoDB client driver.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+
+## Name a collection
+
+In Azure Cosmos DB, a collection is analogous to a table in a relational database. When you create a collection, the collection name forms a segment of the URI used to access the collection resource and any child documents.
+
+Here are some quick rules when naming a collection:
+
+* Keep collection names between 3 and 63 characters long.
+* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+* Collection names must start with a lowercase letter or number.
+
+## Get collection instance
+
+Use an instance of the **MongoCollection** class to access the collection on the server.
+
+* [MongoCollection](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm)
+
+The following code snippets assume you've already created your [client connection](how-to-dotnet-get-started.md#create-mongoclient-with-connection-string).
+
+## Create a collection
+
+To create a collection, insert a document into the collection.
+
+* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
+* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
++
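For example, a minimal sketch (assuming a client created as in the get-started guide, and the ``adventureworks``/``products`` example names from the quickstart):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// GetCollection only returns a reference; the collection is created
// server-side when the first document is inserted.
IMongoDatabase database = client.GetDatabase("adventureworks");
IMongoCollection<BsonDocument> collection =
    database.GetCollection<BsonDocument>("products");

collection.InsertOne(new BsonDocument
{
    { "name", "Mountain Bike" },
    { "category", "gear-bikes" }
});
```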
+## Drop a collection
+
+* ``MongoDatabase.DropCollection``
+
+Dropping the collection removes it from the database permanently. However, the next insert or update operation that accesses the collection creates a new collection with that name.
++
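A minimal sketch, assuming the same ``adventureworks``/``products`` example names used earlier:

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoDatabase database = client.GetDatabase("adventureworks");

// Permanently removes the collection and all of its documents.
database.DropCollection("products");
```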
+## Get collection indexes
+
+An index is used by the MongoDB query engine to improve the performance of database queries.
+
+* ``MongoCollection.Indexes.List``
+++
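As a sketch (assuming the example ``adventureworks``/``products`` names), you can enumerate the indexes defined on a collection like this:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// List each index on the collection as a BSON document.
foreach (BsonDocument index in collection.Indexes.List().ToList())
{
    Console.WriteLine(index);
}
```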
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Create a database](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-databases.md
+
+ Title: Manage a MongoDB database using .NET
+description: Learn how to manage your Cosmos DB resource when it provides the MongoDB API with a .NET SDK.
++++
+ms.devlang: dotnet
+ Last updated : 07/22/2022+++
+# Manage a MongoDB database using .NET
++
+Your MongoDB server in Azure Cosmos DB is available from the [MongoDB](https://www.nuget.org/packages/MongoDB.Driver) NuGet package.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long.
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
+
+## Create a database instance
+
+You can use the `MongoClient` to get an instance of a database, or create one if it doesn't exist already. The `MongoDatabase` class provides access to collections and their documents.
+
+* [MongoClient](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)
+* [MongoClient.Database](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)
+
+The following code snippet creates a new database by inserting a document into a collection. Remember, the database will not be created until it is needed for this type of operation.
++
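A minimal sketch of this pattern (the ``adventureworks`` and ``products`` names follow the quickstart's example):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// GetDatabase only returns a reference; the database is created
// server-side when this first document is inserted.
IMongoDatabase database = client.GetDatabase("adventureworks");
database.GetCollection<BsonDocument>("products")
    .InsertOne(new BsonDocument("name", "Mountain Bike"));
```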
+## Get an existing database
+
+You can also retrieve an existing database by name using the `GetDatabase` method to access its collections and documents.
+
+* [MongoClient.GetDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm)
++
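For example, assuming a client created as in the get-started guide:

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// Returns a reference to the existing database by name.
IMongoDatabase database = client.GetDatabase("adventureworks");
```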
+## Get a list of all databases
+
+You can retrieve a list of all the databases on the server using the `MongoClient`.
+
+* [MongoClient.Database.ListDatabaseNames](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_ListDatabaseNames_3.htm)
++
+This technique can then be used to check if a database already exists.
++
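A sketch of both ideas together (listing database names, then checking for the example ``adventureworks`` database):

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// Enumerate every database name on the server.
List<string> databaseNames = client.ListDatabaseNames().ToList();
databaseNames.ForEach(name => Console.WriteLine(name));

// The same list can be used to check whether a database exists.
bool exists = databaseNames.Contains("adventureworks");
```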
+## Drop a database
+
+A database is removed from the server using the `DropDatabase` method on the `MongoClient` class.
+
+* [MongoClient.DropDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_DropDatabase_1.htm)
++
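For example:

```csharp
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// Permanently removes the database and all of its collections.
client.DropDatabase("adventureworks");
```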
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Work with a collection](how-to-dotnet-manage-collections.md)
cosmos-db How To Dotnet Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-documents.md
+
+ Title: Create a document in Azure Cosmos DB MongoDB API using .NET
+description: Learn how to work with a document in your Azure Cosmos DB MongoDB API database using the .NET SDK.
++++
+ms.devlang: dotnet
+ Last updated : 07/22/2022+++
+# Manage a document in Azure Cosmos DB MongoDB API using .NET
++
+Manage your MongoDB documents by inserting, updating, and deleting them.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+
+## Insert a document
+
+Insert one or many documents, defined with a JSON schema, into your collection.
+
+* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
+* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
++
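A minimal sketch of both methods (assuming the ``adventureworks``/``products`` example names and an environment variable named ``COSMOS_CONNECTION_STRING``):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// Insert a single document.
collection.InsertOne(new BsonDocument
{
    { "name", "Road Bike" },
    { "category", "gear-bikes" },
    { "quantity", 5 },
    { "sale", false }
});

// Insert several documents in one call.
collection.InsertMany(new[]
{
    new BsonDocument { { "name", "Helmet" }, { "category", "gear-accessories" } },
    new BsonDocument { { "name", "Gloves" }, { "category", "gear-accessories" } }
});
```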
+## Update a document
+
+To update a document, specify the query filter used to find the document along with a set of properties of the document that should be updated.
+
+* [MongoClient.Database.Collection.UpdateOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateOne_1.htm)
+* [MongoClient.Database.Collection.UpdateMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateMany_1.htm)
++
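For example, a sketch using the typed filter and update builders (example names as in the quickstart):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// The filter selects which document to change; the update describes the change.
FilterDefinition<BsonDocument> filter =
    Builders<BsonDocument>.Filter.Eq("name", "Road Bike");
UpdateDefinition<BsonDocument> update =
    Builders<BsonDocument>.Update.Set("quantity", 10);

UpdateResult result = collection.UpdateOne(filter, update);
```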
+## Bulk updates to a collection
+
+You can perform several different types of operations at once with the **BulkWrite** operation. Learn more about how to [optimize bulk writes for Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+
+The following bulk operations are available:
+
+* [MongoClient.Database.Collection.BulkWrite](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_BulkWrite_1.htm)
+
+ * `InsertOneModel`
+ * `UpdateOneModel`
+ * `UpdateManyModel`
+ * `DeleteOneModel`
+ * `DeleteManyModel`
++
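A sketch mixing several write models in one call (example names as in the quickstart):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// Mix insert, update, and delete models in a single BulkWrite call.
var operations = new WriteModel<BsonDocument>[]
{
    new InsertOneModel<BsonDocument>(new BsonDocument("name", "Helmet")),
    new UpdateOneModel<BsonDocument>(
        Builders<BsonDocument>.Filter.Eq("name", "Road Bike"),
        Builders<BsonDocument>.Update.Set("sale", true)),
    new DeleteOneModel<BsonDocument>(
        Builders<BsonDocument>.Filter.Eq("name", "Gloves"))
};

BulkWriteResult<BsonDocument> result = collection.BulkWrite(operations);
```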
+## Delete a document
+
+To delete documents, use a query to define how the documents are found.
+
+* [MongoClient.Database.Collection.DeleteOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteOne_1.htm)
+* [MongoClient.Database.Collection.DeleteMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteMany_1.htm)
++
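For example, a sketch of both delete methods (example names as in the quickstart):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// Delete the first document that matches the filter.
DeleteResult one = collection.DeleteOne(
    Builders<BsonDocument>.Filter.Eq("name", "Helmet"));

// Delete every document that matches the filter.
DeleteResult many = collection.DeleteMany(
    Builders<BsonDocument>.Filter.Eq("category", "gear-accessories"));
```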
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Create a database](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-queries.md
+
+ Title: Query documents in Azure Cosmos DB MongoDB API using .NET
+description: Learn how to query documents in your Azure Cosmos DB MongoDB API database using the .NET SDK.
++++
+ms.devlang: dotnet
+ Last updated : 07/22/2022+++
+# Query documents in Azure Cosmos DB MongoDB API using .NET
++
+Use queries to find documents in a collection.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+
+## Query for documents
+
+To find documents, use a query filter on the collection to define how the documents are found.
+
+* [MongoClient.Database.Collection.Find](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollectionExtensions_Find__1_3.htm)
+* [FilterDefinition](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinition_1.htm)
+* [FilterDefinitionBuilder](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinitionBuilder_1.htm)
++
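For example, a sketch that builds a typed filter and returns all matching documents (example names as in the quickstart):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

MongoClient client = new MongoClient(
    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
IMongoCollection<BsonDocument> collection = client
    .GetDatabase("adventureworks")
    .GetCollection<BsonDocument>("products");

// Build a typed filter, then enumerate every matching document.
FilterDefinition<BsonDocument> filter =
    Builders<BsonDocument>.Filter.Eq("category", "gear-bikes");

foreach (BsonDocument document in collection.Find(filter).ToList())
{
    Console.WriteLine(document);
}
```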
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and .NET](how-to-dotnet-get-started.md)
+- [Create a database](how-to-dotnet-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
The preceding code snippet displays the following example console output:
## See also - [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)-- [Create a database](how-to-javascript-manage-databases.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
You'll use the following MongoDB classes to interact with these resources:
* [Get an item](#get-an-item) * [Query items](#query-items)
-The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+The sample code demonstrated in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
### Authenticate the client
cosmos-db Monitor Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db.md
The following sections build on this article by describing the specific data gat
## Cosmos DB insights
-Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md) article.
+Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Cosmos DB insights](cosmosdb-insights-overview.md) article.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the Table API
### Create a new .NET app
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-newt) to create a new console app.
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) to create a new console app.
```console dotnet new console -output <app-name>
Remove-AzResourceGroup @parameters
In this quickstart, you learned how to create an Azure Cosmos DB Table API account, create a table, and manage entries using the .NET SDK. You can now dive deeper into the SDK to learn how to perform more advanced data queries and management tasks in your Azure Cosmos DB Table API resources. > [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB Table API and .NET](./how-to-dotnet-get-started.md)
+> [Get started with Azure Cosmos DB Table API and .NET](./how-to-dotnet-get-started.md)
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Last updated 07/22/2022
-+ # View your usage summary details and download reports for direct EA enrollments
If you want to update the PO number after your invoice is generated, then contac
To update the PO number for a billing account:
-1. Sign in to theΓÇ»[Azure portal](https://portal.azure.com).
-1. Search for **Cost Management + Billing** and then select **Billing scopes**.
-1. Select your billing scope, and then in the left menu underΓÇ»**Settings**, selectΓÇ»**Properties**.
-1. SelectΓÇ»**Update PO number**.
-1. Enter a PO number and then selectΓÇ»**Update**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and then select **Billing scopes**.
+1. Select your billing scope, and then in the left menu under **Settings**, select **Properties**.
+1. Select **Update PO number**.
+1. Enter a PO number and then select **Update**.
-Or you can update the PO number from in the Invoices list for the upcoming invoice:
+Or you can update the PO number for the upcoming invoice from the Invoices page:
-1. Sign in to theΓÇ»[Azure portal](https://portal.azure.com).
-1. Search for **Cost Management + Billing** and then select **Billing scopes**.
-1. Select your billing scope, then in the left menu underΓÇ»**Billing**, selectΓÇ»**Invoices**.
-1. SelectΓÇ»**Update PO number**.
-1. Enter a PO number and then selectΓÇ»**Update**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and then select **Billing scopes**.
+1. Select your billing scope, then in the left menu under **Billing**, select **Invoices**.
+1. Select **Update PO number**.
+1. Enter a PO number and then select **Update**.
## Review credit charges
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 01/28/2022 Last updated : 07/25/2022
Azure portal supports the following types of billing accounts:
- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual Studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/).
+- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts. However, an EA account has a subscription limit of 5000. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container. We recommend that you avoid creating multiple subscriptions to implement access boundaries. To separate resources with an access boundary, consider using a resource group. For more information about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).
-- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise doesn't have a limit on the number of subscriptions. For more information, see [Get started with your billing account for Microsoft Customer Agreement](../understand/mca-overview.md).
+- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
- **Microsoft Partner Agreement**: A billing account for a Microsoft Partner Agreement is created for Cloud Solution Provider (CSP) partners to manage their customers in the new commerce experience. Partners need to have at least one customer with an [Azure plan](/partner-center/purchase-azure-plan) to manage their billing account in the Azure portal. For more information, see [Get started with your billing account for Microsoft Partner Agreement](../understand/mpa-overview.md).
data-factory Data Flow Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-errors.md
This article lists common error codes and messages reported by mapping data flow
- **Cause**: An invalid staging configuration is provided in the Hive. - **Recommendation**: Please update the related ADLS Gen2 linked service that is used as staging. Currently, only the service principal key credential is supported. -- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnKey or miServiceUri/miServiceToken is required.
+- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required.
- **Cause**: An invalid staging configuration is provided in the Hive. - **Recommendation**: Update the related ADLS Gen2 linked service with right credentials that are used as staging in the Hive.
This article lists common error codes and messages reported by mapping data flow
- **Cause**: Transient error - **Recommendation**: Retry the request after a wait period.
+## Error code: DF-Executor-OutOfMemorySparkError
+
+- **Message**: The data may be too large to fit in the memory.
+- **Cause**: The size of the data far exceeds the limit of the node memory.
+- **Recommendation**: Increase the core count and switch to the memory optimized compute type.
+
+## Error code: DF-SQLDW-InternalErrorUsingMSI
+
+- **Message**: An internal error occurred while authenticating against Managed Service Identity in Azure Synapse Analytics instance. Please restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.
+- **Cause**: An internal error occurred in Azure Synapse Analytics.
+- **Recommendation**: Restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.
+
+## Error code: DF-Executor-IncorrectLinkedServiceConfiguration
+
+- **Message**: Possible causes are,
+ - The linked service is incorrectly configured as type 'Azure Blob Storage' instead of 'Azure DataLake Storage Gen2' and it has 'Hierarchical namespace' enabled. Please create a new linked service of type 'Azure DataLake Storage Gen2' for the storage account in question.
+ - Certain scenarios with any combinations of 'Clear the folder', non-default 'File name option', 'Key' partitioning may fail with a Blob linked service on a 'Hierarchical namespace' enabled storage account. You can disable these dataflow settings (if enabled) and try again in case you do not want to create a new Gen2 linked service.
+- **Cause**: Delete operation on the Azure Data Lake Storage Gen2 account failed since its linked service is incorrectly configured as Azure Blob Storage.
+- **Recommendation**: Create a new Azure Data Lake Storage Gen2 linked service for the storage account. If that's not feasible, some known scenarios like **Clear the folder**, non-default **File name option**, **Key** partitioning in any combinations may fail with an Azure Blob Storage linked service on a hierarchical namespace enabled storage account. You can disable these data flow settings if you enabled them and try again.
+
+## Error code: DF-Delta-InvalidProtocolVersion
+
+- **Message**: Unsupported Delta table protocol version, Refer https://docs.delta.io/latest/versioning.html#-table-version for versioning information.
+- **Cause**: Data flows don't support this version of the Delta table protocol.
+- **Recommendation**: Use a lower version of the Delta table protocol.
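To see which protocol version a Delta table actually uses, you can inspect the `protocol` action in the table's transaction log (`_delta_log/*.json`). The sketch below is illustrative only — the sample commit entries are made up, and the helper name is hypothetical — but the JSON-lines layout follows the Delta transaction log format:

```python
import json

def read_protocol_versions(log_lines):
    """Return (minReaderVersion, minWriterVersion) from Delta log lines, or None."""
    for line in log_lines:
        action = json.loads(line)
        # Each line is one JSON action; exactly one carries the "protocol" action.
        if "protocol" in action:
            proto = action["protocol"]
            return proto["minReaderVersion"], proto["minWriterVersion"]
    return None

# Example commit entries (illustrative, not from a real table):
sample_log = [
    '{"commitInfo": {"operation": "CREATE TABLE"}}',
    '{"protocol": {"minReaderVersion": 1, "minWriterVersion": 2}}',
]
print(read_protocol_versions(sample_log))  # (1, 2)
```

If the reported versions exceed what data flows support, recreate or downgrade the table to a lower protocol version as recommended above.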
## Next steps
databox-online Azure Stack Edge Gpu Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-connect.md
Previously updated : 03/21/2022 Last updated : 07/05/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect to Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
Before you configure and set up your Azure Stack Edge Pro GPU device, make sure
1. Configure the Ethernet adapter on your computer to connect to the Azure Stack Edge Pro device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
-
- ![Backplane of a cabled device](./media/azure-stack-edge-gpu-deploy-install/two-pci-slots.png)
-
- The backplane of the device may look slightly different depending on the exact model you have received. For more information, see [Cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter.
+ The backplane of the device may look slightly different depending on the exact model you have received. Use the illustrations in [Cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device) to identify PORT 1 on your device.
3. Open a browser window and access the local web UI of the device at `https://192.168.100.10`. This action may take a few minutes after you've turned on the device.
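The static address assigned in step 1 (192.168.100.5, mask 255.255.255.0) must sit on the same subnet as the device's fixed local UI address (192.168.100.10), or the browser in step 3 won't reach it. A quick sanity check with Python's standard `ipaddress` module (an editorial illustration, not part of the product setup):

```python
import ipaddress

# PORT 1 network: the computer gets 192.168.100.5 with mask 255.255.255.0,
# and the device serves its local web UI at 192.168.100.10.
network = ipaddress.ip_network("192.168.100.5/255.255.255.0", strict=False)
computer = ipaddress.ip_address("192.168.100.5")
device_ui = ipaddress.ip_address("192.168.100.10")

print(network)               # 192.168.100.0/24
print(device_ui in network)  # True: computer and device share the subnet
```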
You're now at the **Overview** page of your device. The next step is to configur
1. Connect the computer to PORT 1 on the first node of your 2-node device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter.
+ The backplane of the device may look slightly different depending on the exact model you have received. Use the illustrations in [Cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device) to identify PORT 1 on your device.
+ 1. Open a browser window and access the local web UI of the device at `https://192.168.100.10`. This action may take a few minutes after you've turned on the device.
databox-online Azure Stack Edge Gpu Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-install.md
Previously updated : 05/17/2022 Last updated : 07/05/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro in datacenter so I can use it to transfer data to Azure.
Take the following steps to cable your device for power and network.
1. Identify the various ports on the back plane of your device. <!--You may have received one of the following devices from the factory depending on the number of GPUs in your device.-->
- ![Back plane of a cabled device](./media/azure-stack-edge-gpu-deploy-install/backplane-ports.png)
+ - Device with two Peripheral Component Interconnect (PCI) slots and one GPU
+
+ ![Back plane of a cabled device 1.](./media/azure-stack-edge-gpu-deploy-install/backplane-ports.png)
+
+ - Device with three PCI slots and one GPU
+
+ ![Back plane of a cabled device 2.](./media/azure-stack-edge-gpu-deploy-install/backplane-ports-3.png)
+
+ - Device with three PCI slots and two GPUs
+
+ ![Back plane of a cabled device 3.](./media/azure-stack-edge-gpu-deploy-install/backplane-ports-2.png)
2. Locate the disk slots and the power button on the front of the device.
databox-online Azure Stack Edge Gpu Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-quickstart.md
Previously updated : 02/15/2022 Last updated : 07/08/2022 #Customer intent: As an IT admin, I need to understand how to prepare the portal to quickly deploy Azure Stack Edge so I can use it to transfer data to Azure.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
If you enabled the integration, but still don't see the extension running on you
### What are the licensing requirements for Microsoft Defender for Endpoint? Defender for Endpoint is included at no extra cost with **Microsoft Defender for Servers**. Alternatively, it can be purchased separately for 50 machines or more.
+### Do I need to buy a separate anti-malware solution to protect my machines?
+No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines.
+- On Windows Server 2012 R2 with MDE unified solution integration enabled, Defender for Servers will deploy [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows) in *active mode*.
+- On newer Windows Server operating systems, Microsoft Defender Antivirus is part of the operating system and will be enabled in *active mode*.
+- On Linux, Defender for Servers will deploy MDE including the anti-malware component, and set the component in *passive mode*.
+ ### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers? If you already have a license for **Microsoft Defender for Endpoint for Servers** , you won't pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers) license. Learn more about [the Microsoft 365 license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 07/19/2022 Last updated : 07/25/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in July include: - [General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection](#general-availability-ga-of-the-cloud-native-security-agent-for-kubernetes-runtime-protection)
+- [Defender for Container's VA adds support for the detection of language specific packages (Preview)](#defender-for-containers-va-adds-support-for-the-detection-of-language-specific-packages-preview)
+ ### General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection We're excited to share that the Cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
You can also review [all available alerts](alerts-reference.md#alerts-k8scluster
Note, if you're using the preview version, the `AKS-AzureDefender` feature flag is no longer required.
+### Defender for Container's VA adds support for the detection of language specific packages (Preview)
+
+Defender for Container's vulnerability assessment (VA) can detect vulnerabilities in OS packages deployed via the OS package manager. We have now extended VA to also detect vulnerabilities included in language specific packages.
+
+This feature is in `preview` and is only available for Linux images.
+
+To see all of the included language specific packages that have been added, check out Defender for Container's full list of [features and their availability](supported-machines-endpoint-solutions-clouds-containers.md#registries-and-images).
+ ## June 2022 Updates in June include:
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 07/07/2022 Last updated : 07/26/2022
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--| | Compliance | Docker CIS | VM, VMSS | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds |
| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
The **tabs** below show the features that are available, by environment, for Mic
| Discovery and provisioning | Auto provisioning of Defender profile | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images).
+
+<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images).
+
+## Additional information
+
+### Registries and images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35 |
+| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
+
+### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection, you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
### [**AWS (EKS)**](#tab/aws-eks)
The **tabs** below show the features that are available, by environment, for Mic
| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - | | Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Additional information
+
+### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection, you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
### [**GCP (GKE)**](#tab/gcp-gke)
The **tabs** below show the features that are available, by environment, for Mic
| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Additional information
+
+### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection, you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
### [**On-prem/IaaS (Arc)**](#tab/iaas-arc) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
-| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
+| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers | | Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
+<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images-1).
+
+<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images-1).
## Additional information
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |-
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35 |
+| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
### Kubernetes distributions and configurations
The **tabs** below show the features that are available, by environment, for Mic
|--|--| | Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
-<sup><a name="footnote1"></a>1</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+ <sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection, you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. > [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). ## Next steps - Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
Title: Micro agent event collection (Preview) description: Defender for IoT security agents collects data and system events from your local device, and sends the data to the Azure cloud for processing, and analytics. Previously updated : 11/09/2021 Last updated : 04/26/2022 # Micro agent event collection (Preview)
-Defender for IoT security agents collects data, and system events from your local device, and sends the data to the Azure cloud for processing, and analytics. The Defender for IoT micro agent collects many types of device events including new processes, and all new connection events. Both the new process, and new connection events may occur frequently on a device. This capability is important for comprehensive security, however, the number of messages the security agents send may quickly meet, or exceed your IoT Hub quota, and cost limits. These messages, and events contain highly valuable security information that is crucial to protecting your device.
+Defender for IoT security agents collect data and system events from your local device and send the data to the Azure cloud for processing.
+
+If you've configured and connected a Log Analytics workspace, you'll see these events in Log Analytics. For more information, see [Tutorial: Investigate security alerts](tutorial-investigate-security-alerts.md).
+
+The Defender for IoT micro agent collects many types of device events, including new process and new connection events. Both new process and new connection events may occur frequently on a device. This capability is important for comprehensive security; however, the number of messages the security agents send may quickly meet or exceed your IoT Hub quota and cost limits. These messages and events contain highly valuable security information that is crucial to protecting your device.
To reduce the number of messages, and costs while maintaining your device's security, Defender for IoT agents aggregate the following types of events:
The Login collector, collects user sign-ins, sign-outs, and failed sign-in attem
The Login collector supports the following types of collection methods: -- **Syslog**. If syslog is running on the device, the Login collector collects SSH sign-in events via the syslog file named **auth.log**.
+- **UTMP and SYSLOG**. UTMP catches SSH interactive events, telnet events, and terminal logins, as well as all failed login events from SSH, telnet, and terminal. If SYSLOG is enabled on the device, the Login collector also collects SSH sign-in events via the SYSLOG file named **auth.log**.
- **Pluggable Authentication Modules (PAM)**. Collects SSH, telnet, and local sign-in events. For more information, see [Configure Pluggable Authentication Modules (PAM) to audit sign-in events](configure-pam-to-audit-sign-in-events.md).
The following data is collected:
| **user_name** | The Linux user. |
| **executable** | The terminal device. For example, `tty1..6` or `pts/n`. |
| **remote_address** | The source of connection, either a remote IP address in IPv6 or IPv4 format, or `127.0.0.1/0.0.0.0` to indicate local connection. |
+| **Login_UsePAM** | Boolean: <br>- **True**: Only the PAM Login collector is used <br>- **False**: The UTMP Login collector is used, with SYSLOG if SYSLOG is enabled |
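The aggregation behavior described above can be illustrated with a short sketch. This is a conceptual illustration only, not Defender for IoT's actual implementation: identical events are grouped into a single record with a hit count, which is the general idea behind keeping message volume under IoT Hub quotas. The field names mirror the table above; the function name is hypothetical.

```python
# Illustrative sketch (not the agent's actual code): aggregate identical
# events into one record with a hit count to reduce message volume.
from collections import defaultdict

def aggregate_events(events, key_fields=("event_type", "executable", "remote_address")):
    """Group identical events by their key fields and count occurrences."""
    buckets = defaultdict(int)
    for event in events:
        key = tuple(event.get(field) for field in key_fields)
        buckets[key] += 1
    # Emit one message per distinct event, annotated with how often it occurred.
    return [dict(zip(key_fields, key), hit_count=count) for key, count in buckets.items()]

events = [
    {"event_type": "Login", "executable": "pts/0", "remote_address": "10.0.0.5"},
    {"event_type": "Login", "executable": "pts/0", "remote_address": "10.0.0.5"},
    {"event_type": "Login", "executable": "tty1", "remote_address": "127.0.0.1"},
]
aggregated = aggregate_events(events)
print(aggregated)
```

Three raw events collapse into two messages here; over a noisy device, this is the kind of reduction that keeps the agent inside quota and cost limits.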
-## System information (trigger based collector))
+## System information (trigger based collector)
The data collected for each event is:
The **nics** properties are composed of the following:
## Baseline (trigger based)
-The baseline collector performs CIS checks periodically. Only the failed results are sent to the cloud. The cloud aggregates the results, and provides recommendations.
+The baseline collector performs periodic CIS checks, and *failed*, *pass*, and *skip* check results are sent to the Defender for IoT cloud service. Defender for IoT aggregates the results and provides recommendations based on any failures.
### Data collection
The data collected for each event is:
| Parameter | Description |
|--|--|
| **Check ID** | In CIS format. For example, `CIS-debian-9-Filesystem-1.1.2`. |
-| **Check result** | Can be `Error`, or `Fail`. For example, `Error` in a situation where the check can't run. |
+| **Check result** | Can be `Fail`, `Pass`, `Skip`, or `Error`. For example, `Error` in a situation where the check can't run. |
| **Error** | The error's information and description. |
| **Description** | The description of the check from CIS. |
| **Remediation** | The recommendation for remediation from CIS. |
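To make the check-result semantics above concrete, here is an illustrative sketch (not the actual Defender for IoT cloud logic) of tallying CIS baseline results and pulling out the failed check IDs that would drive recommendations. The helper name and data shape are hypothetical.

```python
# Illustrative sketch: tally CIS baseline check results by outcome and
# collect the failed check IDs that would drive recommendations.
from collections import Counter

def summarize_checks(results):
    """Return a tally of results by outcome plus the failed check IDs."""
    tally = Counter(item["result"] for item in results)
    failed = [item["check_id"] for item in results if item["result"] == "Fail"]
    return tally, failed

checks = [
    {"check_id": "CIS-debian-9-Filesystem-1.1.2", "result": "Fail"},
    {"check_id": "CIS-debian-9-Filesystem-1.1.3", "result": "Pass"},
    {"check_id": "CIS-debian-9-Network-3.3.5", "result": "Skip"},
]
tally, failed = summarize_checks(checks)
print(dict(tally), failed)
```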
The data collected on each package includes:
| **Name** | The package name |
| **Version** | The package version |
| **Vendor** | The package's vendor, which is the **Maintainer** field in deb packages |
-| | |
+> [!NOTE]
+> The SBoM collector currently only collects the first 500 packages ingested.
+
## Next steps

Check your [Defender for IoT security alerts](concept-security-alerts.md).
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
Title: Micro agent configurations (Preview) description: The collector sends all current data immediately after any configuration change is made. The changes are then applied. Previously updated : 12/22/2021 Last updated : 05/03/2022
Default values are as follows:
| **Low** | 1440 (24 hours) |
| **Medium** | 120 (2 hours) |
| **High** | 30 (0.5 hours) |
-| | |
To reduce the number of messages sent to the cloud, each priority should be set as a multiple of the one below it. For example: High: 60 minutes, Medium: 120 minutes, Low: 480 minutes.
For example:
|--|--|--|--|
| **Baseline_GroupsDisabled** | A list of Baseline group names, separated by a comma. <br><br>For example: `Time Synchronization, Network Parameters Host` | Defines the full list of Baseline group names that should be disabled. | Null |
| **Baseline_ChecksDisabled** | A list of Baseline check IDs, separated by a comma. <br><br>For example: `3.3.5,2.2.1.1` | Defines the full list of Baseline check IDs that should be disabled. | Null |
-| | | | |
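The rule that each priority's send interval should be a multiple of the next-faster one can be checked mechanically. The sketch below is illustrative only; the helper name is hypothetical and not part of any Defender for IoT API.

```python
# Quick illustrative check: High is the fastest interval; Medium and Low
# should each be an exact multiple of the next-faster priority.
def intervals_are_aligned(high_minutes, medium_minutes, low_minutes):
    """Return True if each slower interval is a multiple of the faster one."""
    return medium_minutes % high_minutes == 0 and low_minutes % medium_minutes == 0

print(intervals_are_aligned(60, 120, 480))   # the recommended example
print(intervals_are_aligned(30, 100, 400))   # 100 is not a multiple of 30
```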
## Event-based collector configurations
These configurations include the process and network activity collectors.
| **Aggregation mode** | `True` <br>`False` | Determines whether to process event aggregation for an identical event. | `True` |
| **Cache size** | cycle FIFO | Defines the number of events collected in between the times that data is sent. | `256` |
| **Disable collector** | `True` <br>`False` | Determines whether or not the collector is operational. | `False` |
-| | | | |
## IoT Hub Module-specific settings

| Setting Name | Setting options | Description | Default |
|--|--|--|--|
| **IothubModule_MessageTimeout** | Positive integer, including limits | Defines the number of minutes to retain messages in the outbound queue to the IoT Hub, after which point the messages are dropped. | `2880` (=2 days) |
-| | | | |
## Network activity collector-specific settings

| Setting Name | Setting options | Description | Default |
These configurations include the process and network activity collectors.
|--|--|--|--|
| **Process_Mode** | `1` = Auto <br>`2` = Netlink <br>`3` = Polling | Determines the process collector mode. In `Auto` mode, the agent first tries to enable the Netlink mode. <br><br>If that fails, it automatically falls back to the Polling mode. | `1` |
| **Process_PollingInterval** | Integer | Defines the polling interval in microseconds. This value is used when **Process_Mode** is in `Polling` mode. | `100000` (=0.1 second) |
-| | | | |
## Trigger-based collector configurations
These configurations include the system information and baseline collectors.
|--|--|--|--|
| **Interval** | `High` <br>`Medium` <br>`Low` | The frequency at which data is sent. | `Low` |
| **Disable collector** | `True` <br>`False` | Whether or not the collector is operational. | `False` |
-| | | | |
## Next steps
defender-for-iot How To Investigate Cis Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-investigate-cis-benchmark.md
Title: Investigate CIS benchmark recommendation description: Perform basic and advanced investigations based on OS baseline recommendations. Previously updated : 11/09/2021 Last updated : 05/03/2022
Perform basic and advanced investigations based on OS baseline recommendations.
> [!NOTE] > The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
-## Basic OS baseline security recommendation investigation
+## Basic OS baseline security recommendation investigation
You can investigate OS baseline recommendations by navigating to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). For more information, see how to [Investigate security recommendations](quickstart-investigate-security-recommendations.md).
-## Advanced OS baseline security recommendation investigation
+## Advanced OS baseline security recommendation investigation
-This section describes how to better understand the OS baseline test results, and querying events in Azure Log Analytics.
+This section describes how to better understand the OS baseline test results, and how to query events in Azure Log Analytics.
-The advanced OS baseline security recommendation investigation is only supported by using log analytics. Connect Defender for IoT to a Log Analytics workspace before continuing. For more information on advanced OS baseline security recommendations, see how to [Configure Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md).
+**Prerequisites**:
-To query your IoT security events in Log Analytics for alerts:
+The advanced OS baseline security recommendation investigation is only supported by using Azure Log Analytics and you must connect Defender for IoT to a Log Analytics workspace before continuing.
-1. Navigate to the **Alerts** page.
+For more information, see [Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md).
-1. Select **Investigate recommendations in Log Analytics workspace**.
+**To query your IoT security events in Log Analytics for alerts**:
-To query your IoT security events in Log Analytics for recommendations:
+1. In your Log Analytics workspace, go to **Logs** > **AzureSecurityOfThings** > **SecurityAlert**.
-1. Navigate to the **Recommendations** page.
+1. In the query editor on the right, enter a KQL query to display the alerts you want to see.
-1. Select **Investigate recommendations in Log Analytics workspace**.
+1. Select **Run** to display the alerts that match your query.
-1. Select **Show Operation system (OS) baseline rules details** from the **Recommendation details** quick view page to see the details of a specific device.
+For example:
- :::image type="content" source="media/how-to-investigate-cis-benchmark/recommendation-details.png" alt-text="See the details of a specific device.":::
-To query your IoT security events in Log Analytics workspace directly:
-
-1. Navigate to the **Logs** page.
-
- :::image type="content" source="media/how-to-investigate-cis-benchmark/logs.png" alt-text="Select logs from the left side pane.":::
-
-1. Select **Investigate the alerts** or, select the **Investigate the alerts in Log Analytics** option from any security recommendation, or alert.
+> [!NOTE]
+> In addition to alerts, you can also use this same procedure to query for recommendations or raw event data.
+>
## Useful queries to investigate the OS baseline resources
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview.md
The Defender for IoT micro agent enables you to quickly improve your organizatio
The Defender for IoT micro agent provides deep security protection, and visibility into device behavior. - The micro agent collects, aggregates, and analyzes raw security events from your devices. Events can include IP connections, process creation, user logons, and other security-relevant information.-- Defender for IoT device agents handles event aggregation, to help avoid high network throughput.-- The micro agent has flexible deployment options. The micro agent includes source code, so you can incorporate it into firmware, or customize it to include only what you need. It's also available as a binary package, or integrated directly into other Azure IoT solutions. The micro agent is available for standard IoT operating systems, such as Linux and Azure RTOS.
+- Defender for IoT device agents handle event aggregation, to help avoid high network throughput.
+- The micro agent has flexible deployment options. The micro agent includes source code, so you can incorporate it into firmware, or customize it to include only what you need. It's also available as a binary package, or integrated directly into other Azure IoT solutions. The micro agent is available for standard IoT operating systems, such as Linux and Azure RTOS.
- The agents are highly customizable, allowing you to use them for specific tasks, such as sending only important information at the fastest SLA, or for aggregating extensive security information and context into larger segments, avoiding higher service costs.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Title: What's new in Microsoft Defender for IoT for device builders description: Learn about the latest updates for Defender for IoT device builders. Previously updated : 02/20/2022 Last updated : 04/26/2022 # What's new
This article lists new features and feature enhancements in Microsoft Defender f
Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Versioning and support
+For more information, see [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md).
-Listed below are the support, breaking change policies for Defender for IoT, and the versions of Defender for IoT that are currently available.
+## July 2022
+
+**Version 4.2.4**
+
+- **Proxy connection updates**: Now you can connect your micro-agent to an IoT Hub via a proxy. For more information, see [Connect via a proxy](tutorial-standalone-agent-binary-installation.md#connect-via-a-proxy).
+
+- **Support for TPM-backed certificates**: Now you can use OpenSSL certificates backed by TPM. For more information, see [Authenticate using a certificate](tutorial-standalone-agent-binary-installation.md#authenticate-using-a-certificate).
+
+- **AMQP support**: Now you can add AMQP support after installing your micro-agent. For more information, see [Add AMQP protocol support](tutorial-standalone-agent-binary-installation.md#add-amqp-protocol-support).
+
+- **Baseline collector updates**: The baseline collector now sends *pass* and *skip* checks to the cloud in addition to *failed* results. For more information, see [Micro agent event collection](concept-event-aggregation.md#baseline-trigger-based).
+
+- **Login collector via UTMP**: The login collector now supports UTMP to catch SSH interactive events, telnet events, and terminal logins, including failed login events. For more information, see [Login collector (event-based collector)](concept-event-aggregation.md#login-collector-event-based-collector).
+
+- **SBoM collector known issue**: The SBoM collector currently only collects the first 500 packages ingested. For more information, see [SBoM (trigger based)](concept-event-aggregation.md#sbom-trigger-based) collection.
## February 2022
Listed below are the support, breaking change policies for Defender for IoT, and
For more information, see [Network Connection events (event-based collector)](concept-event-aggregation.md#network-connection-events-event-based-collector). -- **Login Collector**: Now supporting login collector using: SYSLOG collecting SSH login events and PAM collecting SSH, telnet and local login events using the pluggable authentication modules stack. For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).-
+- **Login Collector**: Now supports login collection using SYSLOG, which collects SSH sign-in events, and PAM, which collects SSH, telnet, and local sign-in events via the Pluggable Authentication Modules stack. For more information, see [Login collector (event-based collector)](concept-event-aggregation.md#login-collector-event-based-collector).
## November 2021
Listed below are the support, breaking change policies for Defender for IoT, and
- **[Login collector](concept-event-aggregation.md#login-collector-event-based-collector)** - The login collectors gather user logins, logouts, and failed login attempts, such as SSH and telnet. -- **[System information collector](concept-event-aggregation.md#system-information-trigger-based-collector)** - The system information collector gatherers information related to the device's operating system and hardware details.
+- **[System information collector](concept-event-aggregation.md#system-information-trigger-based-collector)** - The system information collector gathers information related to the device's operating system and hardware details.
- **[Event aggregation](concept-event-aggregation.md#how-does-event-aggregation-work)** - The Defender for IoT agent aggregates events such as process, login, network events that reduce the number of messages sent and costs, all while maintaining your device's security.
This feature set is available with the current public preview cloud release.
## Next steps
-[Onboard to Defender for IoT](quickstart-onboard-iot-hub.md)
+- [Onboard to Defender for IoT](quickstart-onboard-iot-hub.md)
+- [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md)
defender-for-iot Tutorial Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-agent-based-solution.md
In this tutorial you'll learn how to:
> [!div class="checklist"]
> - Enable data collection
-> - Create a log analytics workspace
+> - Create a Log Analytics workspace
> - Enable geolocation and IP address handling

## Prerequisites
In this tutorial you'll learn how to:
1. Select **Save**.
-## Create a log analytics workspace
+## Create a Log Analytics workspace
-Defender for IoT allows you to store security alerts, recommendations, and raw security data, in your Log Analytics workspace. Log Analytics ingestion in IoT Hub is set to **off** by default in the Defender for IoT solution. It is possible, to attach Defender for IoT to a Log Analytic workspace, and to store the security data there as well.
+Defender for IoT allows you to store security alerts, recommendations, and raw security data in your Log Analytics workspace. Log Analytics ingestion in IoT Hub is set to **off** by default in the Defender for IoT solution. It's possible to attach Defender for IoT to a Log Analytics workspace and to store the security data there as well.
There are two types of information stored by default in your Log Analytics workspace by Defender for IoT:
You can choose to add storage of an additional information type as `raw events`.
1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Settings** > **Data Collection**.
-1. Under the Workspace configuration, switch the Log Analytics toggle to **On**.
+1. Under the **Workspace configuration**, switch the Log Analytics toggle to **On**.
1. Select a subscription from the drop-down menu. 1. Select a workspace from the drop-down menu. If you don't already have an existing Log Analytics workspace, you can select **Create New Workspace** to create a new one.
-1. Verify that the **Access to raw security data** option is selected.
+1. Verify that the **Access to raw security data** option is selected.
:::image type="content" source="media/how-to-configure-agent-based-solution/data-settings.png" alt-text="Ensure Access to raw security data is selected.":::
defender-for-iot Tutorial Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-your-solution.md
In this tutorial you'll learn how to:
A new resource group will now be added to your IoT solution.
-Defender for IoT will now monitor you're newly added resource groups, and surfaces relevant security recommendations, and alerts as part of your IoT solution.
+Defender for IoT will now monitor your newly added resource groups, and surface relevant security recommendations and alerts as part of your IoT solution.
## Next steps
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
Title: Install the Microsoft Defender for IoT micro agent (Preview) description: Learn how to install and authenticate the Defender for IoT micro agent. Previously updated : 02/20/2022 Last updated : 04/26/2022 #Customer intent: As an Azure admin I want to install the Defender for IoT agent on devices connected to an Azure IoT Hub
Depending on your setup, the appropriate Microsoft package will need to be installed:

```bash
sudo apt-get install defender-iot-micro-agent
```
+## Connect via a proxy
+
+This procedure describes how you can connect the Defender for IoT micro-agent to the IoT Hub via a proxy.
+
+**To configure connections via a proxy**:
+
+1. On your micro-agent machine, create a `/etc/defender_iot_micro_agent/conf.json` file with the following content:
+
+ ```json
+ {
+ "IothubModule_ProxyConfig": "<proxy_ipv4>,<port>,<username>,<password>",
+ "IothubModule_TransportProtocol": "MQTT_WebSocket_Protocol"
+ }
+ ```
+
+ User and password fields are optional. If you don't need them, use the following syntax instead:
+
+ ```json
+ {
+ "IothubModule_ProxyConfig": "<proxy_ipv4>,<port>",
+ "IothubModule_TransportProtocol": "MQTT_WebSocket_Protocol"
+ }
+    ```
+
+1. Delete any cached file at **/var/lib/defender_iot_micro_agent/cache.json**.
+
+1. Restart the micro-agent. Run:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
+
+## Add AMQP protocol support
+
+This procedure describes additional steps required to support the AMQP protocol.
+
+**To add AMQP protocol support**:
+
+1. On your micro-agent machine, open the `/etc/defender_iot_micro_agent/conf.json` file and add the following content:
+
+ ```json
+ {
+ "IothubModule_TransportProtocol": "AMQP_Protocol"
+ }
+ ```
+1. Delete any cached file at **/var/lib/defender_iot_micro_agent/cache.json**.
+
+1. Restart the micro-agent. Run:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
+
## Authenticate the micro agent

There are two options that can be used to authenticate the Defender for IoT micro agent:
event-grid Availability Zone Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zone-resiliency.md
- Title: Resiliency in Azure Event Grid | Microsoft Docs
-description: Describes how Azure Event Grid supports resiliency.
- Previously updated : 06/21/2022--
-# Resiliency in Azure Event Grid
-
-Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
-
-## Availability zones
-
-Azure Event Grid event subscription configurations and events are automatically replicated across data centers in the availability zone, and replicated in the three availability zones (when available) in the region specified to provide automatic in-region recovery of your data in case of a failure in the region. See [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) to learn more about the supported regions with availability zones.
-
-Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
-
-With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
-
-If a region supports availability zones, the event data is replicated across availability zones though.
--
-## Next steps
--- If you want to understand the geo disaster recovery concepts, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md)--- If you want to implement your own disaster recovery plan, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md)--- If you want to implement your own client-side failover logic, see [# Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery-client-side.md)
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
+
+ Title: Availability zones and disaster recovery | Microsoft Docs
+description: Describes how Azure Event Grid supports availability zones and disaster recovery.
+ Last updated : 07/18/2022++
+# Availability zones and disaster recovery
+Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics, domains, and partner topics. This article gives you more details about Event Grid's support for availability zones and disaster recovery.
+
+## Availability zones
+
+Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
+
+Azure Event Grid event subscription configurations and events are automatically replicated across data centers in the availability zone, and replicated in the three availability zones (when available) in the region specified to provide automatic in-region recovery of your data in case of a failure in the region. See [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) to learn more about the supported regions with availability zones.
+
+Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+
+With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
+
+If a region supports availability zones, the event data is replicated across those availability zones as well.
++
+## Disaster recovery
+
+Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics, domains, and partner topics. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
+
+> [!NOTE]
+> Event data is not replicated to the paired region, only the metadata is replicated.
+
+Microsoft offers options to recover from a failure: you can opt to enable recovery to a paired region where available, or disable recovery to a paired region and manage your own recovery. See [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to learn more about the supported paired regions. The failover is nearly instantaneous once initiated. To learn more about how to implement your own failover strategy, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
+
+Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over all the Event Grid resources from an affected region to the corresponding geo-paired region. This process is a default option and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve a user consent before the user's traffic is failed over.
+
+If you've decided not to replicate any data to a paired region, you'll need to invest in some practices to build your own disaster recovery scenario and recover from a severe loss of application functionality using more than two regions. For more details, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md). If you want to implement client-side disaster recovery for Azure Event Grid topics, see [Build your own client-side disaster recovery for Azure Event Grid topics](custom-disaster-recovery-client-side.md).
+
+## RTO and RPO
+
+Disaster recovery is measured with two metrics:
+
+- Recovery Point Objective (RPO): the minutes or hours of data that may be lost.
+- Recovery Time Objective (RTO): the minutes or hours the service may be down.
+
+Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, event subscriptions) and data (events). If you need a different specification from the following ones, you can still implement your own [client-side fail over using the topic health apis](custom-disaster-recovery.md).
+
+### Recovery point objective (RPO)
+- **Metadata RPO**: zero minutes. Anytime a resource is created in Event Grid, it's instantly replicated across regions. When a failover occurs, no metadata is lost.
+- **Data RPO**: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
+
+### Recovery time objective (RTO)
+- **Metadata RTO**: Though it generally happens much more quickly, within 60 minutes Event Grid will begin to accept create/update/delete calls for topics and subscriptions.
+- **Data RTO**: Like metadata, recovery generally happens much more quickly; however, within 60 minutes Event Grid will begin accepting new traffic after a regional failover.
+
+> [!IMPORTANT]
+> - There is no service level agreement (SLA) for server-side disaster recovery. If the paired region has no extra capacity to take on the additional traffic, Event Grid cannot initiate failover. Service level objectives are best-effort only.
+> - The cost for metadata GeoDR on Event Grid is: $0.
+> - Geo-disaster recovery isn't supported for partner topics.
+
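As a worked illustration of the objectives above, a hypothetical helper (the names are illustrative only, not part of any Event Grid API) can flag whether an observed regional failover stayed within the stated best-effort objectives (data RPO of about 5 minutes, RTO within 60 minutes):

```python
# Hypothetical sketch: compare an observed failover against the stated
# best-effort objectives (RTO within 60 minutes, data RPO about 5 minutes).
def meets_objectives(downtime_minutes, data_loss_minutes, rto_minutes=60, rpo_minutes=5):
    """Return True if the observed outage stayed within the stated objectives."""
    return downtime_minutes <= rto_minutes and data_loss_minutes <= rpo_minutes

print(meets_objectives(45, 3))   # within both objectives
print(meets_objectives(90, 3))   # RTO exceeded
```

Since there's no SLA for server-side disaster recovery, a check like this only measures against the best-effort targets, not a contractual guarantee.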
+## Metrics
+
+Event Grid also provides [diagnostic logs schemas](diagnostic-logs.md) and [metrics](metrics.md) that help you identify a problem when there's a failure publishing or delivering events. See the [troubleshooting](troubleshoot-issues.md) article if you need help solving an issue in Azure Event Grid.
+
+## More information
+
+You may find more information about availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](/azure/event-grid/event-grid-faq).
+
+## Next steps
+
+- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
event-grid Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/geo-disaster-recovery.md
- Title: Geo disaster recovery in Azure Event Grid | Microsoft Docs
-description: Describes how Azure Event Grid supports geo disaster recovery (GeoDR) automatically.
- Previously updated : 06/21/2022--
-# Server-side geo disaster recovery in Azure Event Grid
-
-Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics, domains, and partner topics. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
-
-> [!NOTE]
-> Event data is not replicated to the paired region, only the metadata is replicated.
-
-Microsoft offers options to recover from a failure, you can opt to enable recovery to a paired region where available or disable recovery to a paired region to manage your own recovery. See [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to learn more about the supported paired regions. The failover is nearly instantaneous once initiated. To learn more about how to implement your own failover strategy, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md) .
-
-Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over all the Event Grid resources from an affected region to the corresponding geo-paired region. This process is a default option and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't require user consent before the user's traffic is failed over.
-
-## Metrics
-
-Disaster recovery is measured with two metrics:
-
-- Recovery Point Objective (RPO): the minutes or hours of data that may be lost.
-- Recovery Time Objective (RTO): the minutes or hours the service may be down.
-
-Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, event subscriptions) and data (events). If you need a different specification from the following ones, you can still implement your own [client-side failover using the topic health APIs](custom-disaster-recovery.md).
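The client-side failover referenced here can be sketched as a small health-probe-plus-selection routine. This is a minimal sketch under stated assumptions: the endpoint URLs and the `/api/health` path are illustrative placeholders, and the selection logic simply prefers the first healthy endpoint in priority order — not the documented implementation.

```python
import urllib.request

# Illustrative topic endpoints -- substitute your own primary/secondary topics.
ENDPOINTS = [
    "https://primary-topic.westus2-1.eventgrid.azure.net",
    "https://secondary-topic.eastus2-1.eventgrid.azure.net",
]

def is_healthy(endpoint, timeout=3.0):
    """Probe a hypothetical health path; any network error counts as unhealthy."""
    try:
        with urllib.request.urlopen(endpoint + "/api/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint(health_by_endpoint):
    """Return the first endpoint reported healthy, preserving priority order."""
    for endpoint, healthy in health_by_endpoint.items():
        if healthy:
            return endpoint
    return None  # both regions unreachable: buffer or drop events per your own policy

# The selection logic is pure, shown here with canned health results:
print(pick_endpoint({"primary": False, "secondary": True}))  # prints "secondary"
```

In a real client you would call `is_healthy` for each endpoint before every publish batch (or on a timer) and publish to whichever endpoint `pick_endpoint` returns.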
-
-### Recovery point objective (RPO)
-- **Metadata RPO**: zero minutes. Anytime a resource is created in Event Grid, it's instantly replicated across regions. When a failover occurs, no metadata is lost.
-- **Data RPO**: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
-
-### Recovery time objective (RTO)
-- **Metadata RTO**: Within 60 minutes (though generally much sooner), Event Grid will begin to accept create/update/delete calls for topics and subscriptions.
-- **Data RTO**: As with metadata, within 60 minutes (though generally much sooner), Event Grid will begin accepting new traffic after a regional failover.
-
-> [!IMPORTANT]
-> - There is no service level agreement (SLA) for server-side disaster recovery. If the paired region has no extra capacity to take on the additional traffic, Event Grid cannot initiate failover. Service level objectives are best-effort only.
-> - The cost for metadata GeoDR on Event Grid is: $0.
-> - Geo-disaster recovery isn't supported for partner topics.
--
-## Next steps
-
-If you want to implement your own client-side failover logic, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
event-grid Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/troubleshoot-issues.md
Title: Troubleshoot Event Grid issues description: This article provides different ways of troubleshooting Azure Event Grid issues Previously updated : 05/17/2022 Last updated : 07/18/2022 # Troubleshoot Azure Event Grid issues This article provides information that helps you with troubleshooting Azure Event Grid issues.
+## Azure Event Grid status in a region
+You can view the status of Event Grid in a particular region on the [Azure status dashboard](https://status.azure.com/en-us/status).
 ## Diagnostic logs

Enable diagnostic settings for Event Grid topics or domains to capture and view publish and delivery failure logs. For more information, see [Diagnostic logs](enable-diagnostic-logs-topic.md).
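The diagnostic-settings step above amounts to routing specific log categories to a destination. The following is a hedged sketch of that payload as plain data: the workspace ID is a placeholder, the `PublishFailures`/`DeliveryFailures` category names are this sketch's best understanding of the Event Grid failure-log categories, and in practice you would pass an equivalent body to `az monitor diagnostic-settings create` or the monitor SDK.

```python
import json

# Placeholder workspace resource ID -- replace with your Log Analytics workspace.
WORKSPACE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<workspace>"
)

def diagnostic_setting(workspace_id, categories):
    """Build a diagnostic-setting body that routes the given log
    categories to a Log Analytics workspace."""
    return {
        "workspaceId": workspace_id,
        "logs": [{"category": c, "enabled": True} for c in categories],
    }

setting = diagnostic_setting(WORKSPACE_ID, ["PublishFailures", "DeliveryFailures"])
print(json.dumps(setting, indent=2))
```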
firewall Compliance Certifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/compliance-certifications.md
The following Azure Firewall certifications are for Azure Government:
ICSA Labs is a leading vendor in third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world's top technology vendors.
-Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see the [ICSA Labs Certification Testing and Audit Report](https://aka.ms/ICSALabsCertification). For more information, see the [ICSA Labs Firewall Certification Program](https://www.icsalabs.com/technology-program/firewalls) page.
+Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification.
+- For the Azure Firewall certification report, see the [ICSA Labs Certification Testing and Audit Report](https://www.icsalabs.com/sites/default/files/FINAL_Microsoft-Azure_Firewall_Report_210721.pdf).
+- For the Intrusion Prevention Systems (IPS) report, see [Network Intrusion Prevention Systems Certification Testing Report](https://www.icsalabs.com/sites/default/files/FINAL_Microsoft_NIPS_Cert_Testing_Report_20220715.pdf).
+
+For more information, see the [ICSA Labs Firewall Certification Program](https://www.icsalabs.com/technology-program/firewalls) page.
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 07/15/2022 Last updated : 07/25/2022
Enabling Policy Analytics on a Firewall Policy associated with a single firewall
### Enable Policy Analytics
-#### Firewall with no Azure Diagnostics settings configured
+Policy analytics starts monitoring the flows in the DNAT, Network, and Application rule analysis only after you enable the feature. It can't analyze rules hit before the feature is enabled.
+
+#### Firewall with no Diagnostics settings configured
1. Once all prerequisites are met, select **Policy analytics (preview)** in the table of contents.
Enabling Policy Analytics on a Firewall Policy associated with a single firewall
6. Go to the Firewall attached to the policy and enter the **Diagnostic settings** page. You'll see the **FirewallPolicySetting** added there as part of the policy analytics feature.
7. Select **Edit Setting**, and ensure the **Resource specific** toggle is checked, and the highlighted tables are checked. In the previous example, all logs are written to the log analytics workspace.
-#### Firewall with Azure Diagnostics settings already configured
+#### Firewall with Diagnostics settings already configured
-1. Ensure that the Firewall attached to the policy is connected to **Resource Specific** tables, and that the following three tables are enabled:
+1. Ensure that the Firewall attached to the policy is logging to **Resource Specific** tables, and that the following three tables are also selected:
 - AZFWApplicationRuleAggregation
 - AZFWNetworkRuleAggregation
 - AZFWNatRuleAggregation
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Previously updated : 07/14/2022 Last updated : 07/25/2022
You will need to create an Azure Firewall Policy and create Rule Collections for
| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop, AzureFrontDoor.Frontend |
| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.244, 40.83.235.53 (azkms.core.windows.net)|
+|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.224, 40.83.235.53 (azkms.core.windows.net)|
|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net)|

> [!NOTE]
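The KMS rows in the table above can be expressed as data in an Azure Firewall policy network rule. A hedged sketch follows: the field names loosely follow the ARM rule-collection schema, and the source subnet and rule name are illustrative, not values from the article.

```python
# Hedged sketch: the azkms.core.windows.net row above as a network rule.
# Field names approximate the firewall policy ruleCollectionGroups schema.
kms_rule = {
    "ruleType": "NetworkRule",
    "name": "Allow-KMS",
    "ipProtocols": ["TCP"],
    "sourceAddresses": ["10.0.0.0/24"],  # your AVD host-pool subnet (illustrative)
    "destinationAddresses": ["20.118.99.224", "40.83.235.53"],  # azkms.core.windows.net
    "destinationPorts": ["1688"],
}

print(kms_rule["name"], "->", ", ".join(kms_rule["destinationAddresses"]))
```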
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
initiative definition.
|[Azure Key Vault should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](../../../key-vault/general/private-link-service.md). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). 
|Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). 
|Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). 
|Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). 
|Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). 
|Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Diagnostic logs in App Services should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
+|[Diagnostic logs in App Services should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit_v2_deprecated.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](../concepts/guest-configuration.md). |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_GCExtOnVm.json) |
|[Log Analytics agent health issues should be resolved on your machines](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd62cfe2b-3ab0-4d41-980d-76803b58ca65) |Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_ResolveLaHealthIssues.json) |
|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_InstallLaAgentOnVm.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
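Most of the definitions in the tables above use the `AuditIfNotExists` effect: a resource that matches the policy's `if` condition is flagged non-compliant when no related resource (or setting) satisfies the policy's existence condition. As a rough illustration only — not the actual Azure Policy engine, which evaluates JSON policy rules service-side — the logic can be sketched in Python with hypothetical resource dictionaries:

```python
# Illustrative sketch of AuditIfNotExists-style evaluation.
# The resource shapes below are hypothetical, not real ARM payloads.

def audit_if_not_exists(related_resources, existence_condition):
    """Flag non-compliance when no related resource satisfies the
    existence condition; otherwise the resource is compliant."""
    if any(existence_condition(r) for r in related_resources):
        return "Compliant"
    return "NonCompliant"

# Hypothetical example loosely modeled on the diagnostic-logs policy:
# a web app whose diagnostic settings do not enable logs is flagged.
diagnostic_settings = [{"logs_enabled": False}]
status = audit_if_not_exists(
    diagnostic_settings,
    lambda s: s.get("logs_enabled", False),
)
print(status)  # NonCompliant
```

Note that `AuditIfNotExists` only reports compliance state; unlike `Deny`, it never blocks a deployment.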
initiative definition.
|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
-|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |

### Boundary Protection
initiative definition.
|[Azure Key Vault should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](../../../key-vault/general/private-link-service.md). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
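The "disable public network access" definitions above support `Audit`, `Deny`, and `Disabled` effects: with `Audit` a resource whose `publicNetworkAccess` property is not disabled is merely reported as non-compliant, while with `Deny` the request to create or update such a resource is rejected. As an illustrative sketch only (the real evaluation happens in the Azure Policy engine against ARM resource payloads), with hypothetical resource dictionaries:

```python
# Illustrative Audit/Deny evaluation for a publicNetworkAccess check.
# Resource shapes are hypothetical, not real ARM payloads.

def evaluate_public_access(resource, effect="Audit"):
    """Return compliance outcome for a 'public network access
    should be disabled' style policy under the given effect."""
    props = resource.get("properties", {})
    if props.get("publicNetworkAccess") == "Disabled":
        return "Compliant"
    # Deny blocks the request; Audit only records non-compliance.
    return "Deny" if effect == "Deny" else "NonCompliant"

key_vault = {
    "type": "Microsoft.KeyVault/vaults",
    "properties": {"publicNetworkAccess": "Enabled"},
}
print(evaluate_public_access(key_vault))          # NonCompliant
print(evaluate_public_access(key_vault, "Deny"))  # Deny
```

`Disabled` (the third option in the tables) simply turns the policy off, so no evaluation occurs at all.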
initiative definition.
|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
|[Storage accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](../../../private-link/private-link-overview.md) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
-|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |

### Access Points
initiative definition.
|[Azure Key Vault should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Disable public network access for your key vault so that it's not accessible over the public internet. This can reduce data leakage risks. Learn more at: [https://aka.ms/akvprivatelink](../../../key-vault/general/private-link-service.md). |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
initiative definition.
|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
|[Storage accounts should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](../../../private-link/private-link-overview.md) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
-|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |

### Transmission Confidentiality and Integrity
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
- Review other examples at [Azure Policy samples](./index.md).
- Review [Understanding policy effects](../concepts/effects.md).
-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
This release applies for HDInsight 4.0. HDInsight release is made available to a
The OS versions for this release are: - HDInsight 4.0: Ubuntu 18.04.5
-## Spark 3.1 is now generally available
+### Spark 3.1 is now generally available
Spark 3.1 is now Generally Available on HDInsight 4.0 release. This release includes
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. Previously updated : 06/16/2022 Last updated : 06/29/2022

# Release notes: Azure Health Data Services
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
-## May 2022
+## June 2022
### FHIR service
-### **Enhancement**
+#### **Bug fixes**
-|Enhancement |Related information |
+|Bug fixes |Related information |
| :-- | : |
-|FHIR service does not create a new version of the resource if the resource content has not changed. |If a user updates an existing resource and only meta.versionId or meta.lastUpdated have changed then we return OK with existing resource information without updating VersionId and lastUpdated. For more information, see [#2519](https://github.com/microsoft/fhir-server/pull/2519). |
+|Export Job not being queued for execution. |Fixes issue with export job not being queued due to duplicate job definition caused due to reference to container URL. For more information, see [#2648](https://github.com/microsoft/fhir-server/pull/2648). |
+|Queries not providing consistent result count after appended with the _sort operator. |Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). |
+## May 2022
+
+### FHIR service
#### **Bug fixes**
hpc-cache Hpc Cache Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-support-ticket.md
Title: Open a support ticket for Azure HPC Cache
-description: How to open a help request for Azure HPC Cache, including how to request a quota increase
+description: How to open a help request for Azure HPC Cache technical support
Previously updated : 07/18/2022 Last updated : 07/21/2022
Navigate to your cache instance, then click the **New support request** link that appears at the bottom of the sidebar.
To open a ticket when you do not have an active cache, use the main Help + support page from the Azure portal. Open the portal menu from the control at the top left of the screen, then scroll to the bottom and click **Help + support**.
-Support requests are also used to request a quota increase. If you want to host more HPC Caches than your subscription currently allows, follow the instructions below in [Request a quota increase](#request-a-quota-increase).
+Support requests are also used to make quota requests. Follow the instructions in [Request an HPC Cache quota increase](increase-quota.md).
-## Request technical support
+After you choose either **New support request** from your cache or **Create a support request** from the main Help + support page, a form appears.
-Choose **Create a support request**. On the support request form, write a summary of the issue, and select **Technical** as the **Issue type**.
+![Screenshot of the support request - Problem description tab, filled out as described.](media/hpc-cache-support-request.png)
-Select your subscription from the list.
+1. On the support request form, select **Technical** as the **Issue type**.
-If you can't find the Azure HPC Cache service, click the **All services** button and search for HPC.
+1. Select your subscription from the list.
-![Screenshot of the support request - Basics tab, filled out as described.](media/hpc-cache-support-request.png)
+1. Select the service type **HPC Cache**. If you can't find the Azure HPC Cache service, click the **All services** button and search for HPC.
-Fill out the rest of the fields with your information and preferences, then submit the ticket when you are ready.
+1. Fill in the **Resource** name (if applicable), write a **Summary** of the problem, and select the most appropriate **Problem type**.
-After you submit the request, you will receive a confirmation email with a ticket number. A support staff member will contact you about the request.
-
-## Request a quota increase
-
-Use the quotas page in the Azure portal to check your current quotas and request increases.
-
-The default quota for Azure HPC Cache is four caches per subscription. If you want to create more than six caches in the same subscription, support approval is needed. One HPC Cache uses multiple virtual machines, network resources, storage containers, and other Azure services, so it's unlikely that the number of caches per subscription will be the limiting factor in how many you can have.
-
-Use the support request form described above to request a quota increase.
-
-* For **Issue type**, select **Service and subscription limits (quotas)**.
-
- ![Screenshot of portal "issue type" menu with the option "Service and subscription limits (quotas)" highlighted.](media/support-request-quota.png)
-
-* Select the **Subscription** for the quota increase.
-
-* Select the **Quota type** "HPC Cache".
+After you select a **Problem type**, the form might display tips or system information that can help you troubleshoot that issue.
- ![Screenshot of portal "quota type" field with "hpc" typed in the search box and a matching result "HPC Cache" showing on the menu to be selected.](media/quota-type-search-hpc.png)
+If you can't resolve the issue with the suggested solution, fill out the rest of the fields with your information and preferences. Submit the ticket when you are ready.
- Click **Next** to go to the **Additional details** page.
-
-* In **Request details**, click the link that says **Enter details**. An additional form opens to the right.
-
- ![Screenshot of Azure portal details form for HPC Cache, with options to select region and new limit.](media/quota-details.png)
-
-* For **Quota type** select **HPC Cache count**.
-
-* Select the **Region** where your cache is located.
-
- The form shows your HPC Cache limit and current usage in this region.
-
-* Type the limit you're requesting in the **New limit** field. Click **Save and continue**.
-
- Fill in the additional details required and create the request.
+After you submit the request, you will receive a confirmation email with a ticket number. A support staff member will contact you about the request.
hpc-cache Increase Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/increase-quota.md
+
+ Title: Request a quota increase for Azure HPC Cache
+description: How to request a quota increase for Azure HPC Cache by opening a support request ticket
+ Last updated : 07/25/2022
+# Request an HPC Cache quota increase
+
+If you want to host more HPC Caches than your subscription currently allows, use the support request form in the Azure portal to request a quota increase.
+
+You can also use the [quotas page](https://ms.portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) in the Azure portal to check your current quotas and request increases.
+
+## Quota information
+
+The default quota for Azure HPC Cache is four caches per subscription. If you want to create more than six caches in the same subscription, support approval is needed.
+
+Each HPC Cache uses multiple virtual machines, network resources, storage containers, and other Azure services, so it's unlikely that the number of caches per subscription will be the limiting factor in how many you can have. Quotas for those other services are handled separately from HPC Cache quotas, so you must open different tickets to increase those quotas.
+
+## Request a quota increase
+
+Navigate to your cache instance, then click the **New support request** link that appears at the bottom of the sidebar.
+
+To open a ticket when you do not have an active cache, use the main Help + support page from the Azure portal. Open the portal menu from the control at the top left of the screen, then scroll to the bottom and click **Help + support**.
+
+Choose **Create a support request**.
+
+Select your subscription from the list.
+
+If you can't find the Azure HPC Cache service, click the **All services** button and search for HPC.
+
+* For **Issue type**, select **Service and subscription limits (quotas)**.
+
+ ![Screenshot of portal "issue type" menu with the option "Service and subscription limits (quotas)" highlighted.](media/support-request-quota.png)
+
+* Select the **Subscription** for the quota increase.
+
+* Select the **Quota type** "HPC Cache".
+
+ ![Screenshot of portal "quota type" field with "hpc" typed in the search box and a matching result "HPC Cache" showing on the menu to be selected.](media/quota-type-search-hpc.png)
+
+ Click **Next** to go to the **Additional details** page.
+
+* In **Request details**, click the link that says **Enter details**. An additional form opens to the right.
+
+ ![Screenshot of Azure portal details form for HPC Cache, with options to select region and new limit.](media/quota-details.png)
+
+* For **Quota type** select **HPC Cache count**.
+
+* Select the **Region** where your cache is located.
+
+ The form shows your HPC Cache limit and current usage in this region.
+
+* Type the limit you're requesting in the **New limit** field. Click **Save and continue**.
+
+ Fill in the additional details required and create the request.
+
+After you submit the request, you will receive a confirmation email with a ticket number. A support staff member will contact you about the request.
+
+Support requests are also used to [request technical support](hpc-cache-support-ticket.md).
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
To access and use the **Permissions** section, you must be in the **App Administ
## Add users
-Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft accounts and Azure Active Directory accounts, but not Azure Active Directory groups.
-
-For more information, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
+Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft user accounts, Azure Active Directory accounts, and Azure Active Directory service principals. IoT Central doesn't currently support Azure Active Directory groups. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section.
- :::image type="content" source="media/howto-manage-users-roles/manage-users-pnp.png" alt-text="Screenshot of Manage users.":::
+ :::image type="content" source="media/howto-manage-users-roles/manage-users-pnp.png" alt-text="Screenshot of manage users page in IoT Central.":::
+
+1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. Start typing the name of the service principal to auto-populate the form.
-1. To add a user, on the **Users** page, choose **+ Assign user**.
+ > [!NOTE]
+ > A service principal must belong to the same Azure Active Directory tenant as the Azure subscription associated with the IoT Central application.
1. If your application uses [organizations](howto-create-organizations.md), choose an organization to assign to the user from the **Organization** drop-down menu.
Users in the **App Operator** role can monitor device health and status. They ar
IoT Central adds this role automatically when you add an organization to your application. This role restricts organization administrators from accessing some application-wide capabilities such as billing, branding, colors, API tokens, and enrollment group information.
-Users in the **Org Administrator** role can invite users to the application, create sub-organizations within their organization hierarchy, and manage the devices within their organization.
+Users in the **Org Administrator** role can invite users to the application, create suborganizations within their organization hierarchy, and manage the devices within their organization.
### Org Operator
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
+
+ Title: How to Configure Azure IoT Edge for Linux on Windows to work on a DMZ | Microsoft Docs
+description: How to configure the Azure IoT Edge for Linux (EFLOW) VM to support multiple network interface cards (NICs) and connect to multiple networks.
+ Last updated : 07/13/2022
+# How to configure Azure IoT Edge for Linux on Windows for an industrial IoT and DMZ configuration
+This article describes how to configure the Azure IoT Edge for Linux (EFLOW) VM to support multiple network interface cards (NICs) and connect to multiple networks. By enabling multiple NIC support, applications running on the EFLOW VM can communicate with devices connected to the offline network, and at the same time, use IoT Edge to send data to the cloud.
+
+## Prerequisites
+
+- A Windows device with EFLOW already set up. For more information on EFLOW installation and configuration, see [Create and provision an IoT Edge for Linux on Windows device using symmetric keys](./how-to-provision-single-device-linux-on-windows-symmetric.md).
+- A virtual switch different from the default one used during EFLOW installation. For more information on creating a virtual switch, see [Create a virtual switch for Azure IoT Edge for Linux on Windows](./how-to-create-virtual-switch.md).
+
+## Industrial scenario
+
+Industrial IoT is moving through an era of IT and OT convergence. However, making traditional OT assets more intelligent with IT technologies also means greater exposure to cyberattacks. This is one of the main reasons why many environments are designed with demilitarized zones (DMZs).
+
+Imagine a workflow scenario where you have a networking configuration divided into two different networks or zones. In the first zone, you may have a secure network defined as the offline network. The offline network has no internet connectivity and is limited to internal access. In the second zone, you may have a demilitarized zone (DMZ), in which you may have a couple of devices that have limited internet connectivity. When moving the workflow to run on the EFLOW VM, you may have problems accessing the different networks since the EFLOW VM by default has only one NIC attached.
+
+Suppose you have an environment with devices such as PLCs or OPC UA-compatible devices connected to the offline network, and you want to upload the devices' information to Azure using the OPC Publisher module running on the EFLOW VM.
+
+Since the EFLOW host device and the PLC or OPC UA devices are physically connected to the offline network, you can use the [Azure IoT Edge for Linux on Windows virtual multiple NIC configurations](./how-to-configure-multiple-nics.md) to connect the EFLOW VM to the offline network. By using an *external virtual switch*, you can connect the EFLOW VM to the offline network and directly communicate with other offline devices.
+
+For the other network, the EFLOW host device is physically connected to the DMZ (online network) with internet and Azure connectivity. Using an *internal or external switch*, you can connect the EFLOW VM to Azure IoT Hub using IoT Edge modules and upload the information sent by the offline devices through the offline NIC.
+
+![EFLOW industrial IoT scenario showing an EFLOW VM connected to the offline and online networks.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/iiot-multiple-nic.png)
+
+### Scenario summary
+
+Secure network:
+
+- No internet connectivity, access restricted.
+- PLCs or OPC UA compatible devices connected.
+- EFLOW VM connected using an External virtual switch.
+
+DMZ:
+
+- Internet connectivity - Azure connection allowed.
+- EFLOW VM connected to Azure IoT Hub, using either an internal or external virtual switch.
+- OPC Publisher running as a module inside the EFLOW VM used to publish data to Azure.
+
+## Configure VM network virtual switches
+
+The following steps are specific to the networking described in the example scenario. Ensure that the virtual switches and configurations you use align with your networking environment.
+
+> [!NOTE]
+> The steps in this article assume that the EFLOW VM was deployed with an *external virtual switch* connected to the *secure network (offline)*. You can adapt the following steps to the specific network configuration you want to achieve. For more information about EFLOW multiple NICs support, see [Azure IoT Edge for Linux on Windows virtual multiple NIC configurations](./how-to-configure-multiple-nics.md).
+
+To finish the provisioning of the EFLOW VM and communicate with Azure, you need to assign another NIC that is connected to the DMZ network (online).
+
+For this scenario, you'll assign an *external virtual switch* connected to the DMZ network. For more information, review [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
+
+To create an external virtual switch, follow these steps:
+
+1. Open Hyper-V Manager.
+2. In **Actions**, select **Virtual Switch Manager**.
+3. In **Virtual Switches**, select **New Virtual network switch**.
+4. Choose type **External** then select **Create Virtual Switch**.
+5. Enter a name that represents the DMZ network. For example, *OnlineOPCUA*.
+6. Under **Connection Type**, select **External Network** then choose the *network adapter* connected to your DMZ network.
+7. Select **Apply**.
+
+Once the external virtual switch is created, you need to attach it to the EFLOW VM using the following steps. For more information about attaching multiple NICs, see [EFLOW Multiple NICs](https://github.com/Azure/iotedge-eflow/wiki/Multiple-NICs).
+
+For the new custom *external virtual switch* you created, use the following PowerShell commands to attach it to your EFLOW VM and set a static IP:
+
+1. `Add-EflowNetwork -vswitchName "OnlineOPCUA" -vswitchType "External"`
+
+ ![Screenshot of showing successful creation of the external network named OnlineOPCUA.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png)
+
+2. `Add-EflowVmEndpoint -vswitchName "OnlineOPCUA" -vEndpointName "OnlineEndpoint" -ip4Address 192.168.0.103 -ip4PrefixLength 24 -ip4GatewayAddress 192.168.0.1`
+
+ ![Screenshot showing the successful configuration of the OnlineOPCUA switch.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png)
+
+Once complete, you'll have the *OnlineOPCUA* switch assigned to the EFLOW VM. To check the multiple NIC attachment, use the following steps:
+
+1. Open an elevated PowerShell session by starting with **Run as Administrator**.
+
+1. Connect to the EFLOW virtual machine.
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. List all the network interfaces assigned to the EFLOW virtual machine.
+ ```bash
+ ifconfig
+ ```
+
+1. Review the IP configuration and verify you see the *eth0* interface (connected to the secure network) and the *eth1* interface (connected to the DMZ network).
+
+ ![Screenshot showing IP configuration of multiple NICs connected to two different networks.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/ifconfig-multiple-nic.png)
+
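If you want to script this check instead of reading the output by hand, you can filter the interface list for the expected NICs. The snippet below is a sketch that runs the filter over a captured, hypothetical `ifconfig` fragment; on the EFLOW VM you would pipe the real `ifconfig` output through the same `grep`:

```bash
# Hypothetical fragment of `ifconfig` output captured from the EFLOW VM
ifconfig_out='eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500'

# Count the interfaces that are present and UP; expect 2 (eth0 and eth1)
printf '%s\n' "$ifconfig_out" | grep -c '^eth[01]:.*UP'
# -> 2
```

On the real VM, `ifconfig | grep -c '^eth[01]:.*UP'` gives the same count directly.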
+## Configure VM network routing
+
+When using the EFLOW multiple NICs feature, you may want to set up different route priorities. By default, EFLOW creates one *default* route per *ethX* interface assigned to the VM, and assigns each default route a random priority. If all interfaces are connected to the internet, random priorities may not be a problem. However, if one of the NICs is connected to an *offline* network, you may want to prioritize the *online* NIC over the *offline* NIC to get the EFLOW VM connected to the internet.
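The kernel always prefers the default route with the lowest metric. As a quick illustration (the gateways, devices, and metrics below are hypothetical, not values from this article), you can pick the winning route from `ip route show default` style output by sorting on the metric field:

```bash
# Sample `ip route show default` output (hypothetical values)
routes='default via 192.168.0.1 dev eth1 metric 100
default via 10.0.0.1 dev eth0 metric 200'

# Lower metric wins: sort numerically on field 7 (the metric) and take the first line
printf '%s\n' "$routes" | sort -k7 -n | head -n1
# -> default via 192.168.0.1 dev eth1 metric 100
```

Here *eth1* wins because its metric (100) is lower, which is exactly the behavior the route-priority configuration below relies on.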
+
+EFLOW uses the [route](https://man7.org/linux/man-pages/man8/route.8.html) command to manage the VM's network routing table. To check the available EFLOW VM routes, use the following steps:
+
+1. Open an elevated PowerShell session by starting with **Run as Administrator**.
+
+1. Connect to the EFLOW virtual machine.
+
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. List all the network routes configured in the EFLOW virtual machine.
+
+ ```bash
+ sudo route
+ ```
+
+ ![Screenshot listing routing table for the EFLOW VM.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/route-output.png)
+
+ >[!TIP]
+ >The previous image shows the route command output with the two NICs assigned (*eth0* and *eth1*). The virtual machine creates two different *default* destination rules with different metrics. A lower metric value has a higher priority. This routing table will vary depending on the networking scenario configured in the previous steps.
+
+### Static routes fix
+
+Every time the EFLOW VM starts, the networking services recreate all routes, and any previously assigned priority could change. To work around this issue, you can assign the desired priority for each route every time the EFLOW VM starts. You can create a service that executes on every VM start and uses the `route` command to set the desired route priorities.
+
+First, create a bash script that executes the necessary commands to set the routes. For example, following the networking scenario mentioned earlier, the EFLOW VM has two NICs (offline and online networks). NIC *eth0* is connected using the gateway IP xxx.xxx.xxx.xxx. NIC *eth1* is connected using the gateway IP yyy.yyy.yyy.yyy.
+
+The following script resets the *default* routes for both *eth0* and *eth1*, then adds the routes with the desired **\<number\>** metric. Remember that *a lower metric value has higher priority*.
+
+```bash
+#!/bin/sh
+
+# Wait 30s for the interfaces to be up
+sleep 30
+
+# Delete previous eth0 route and create a new one with desired metric
+sudo ip route del default via xxx.xxx.xxx.xxx dev eth0
+sudo route add -net default gw xxx.xxx.xxx.xxx netmask 0.0.0.0 dev eth0 metric <number>
+
+# Delete previous eth1 route and create a new one with desired metric
+sudo ip route del default via yyy.yyy.yyy.yyy dev eth1
+sudo route add -net default gw yyy.yyy.yyy.yyy netmask 0.0.0.0 dev eth1 metric <number>
+```
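Note that `route(8)` is a legacy tool; the same result can be expressed with iproute2 alone. The sketch below is a dry run with hypothetical gateways and metrics (it only prints the commands; remove the `echo` and keep `sudo` to apply them on the EFLOW VM). `ip route replace` adds the default route, or atomically updates it if one already exists:

```bash
#!/bin/sh
# Hypothetical gateways and metrics - substitute your own values
GW0=192.168.100.1   # gateway reachable through eth0 (offline network)
GW1=192.168.0.1     # gateway reachable through eth1 (DMZ network)

# Dry run: print the iproute2 commands instead of executing them
echo ip route replace default via "$GW0" dev eth0 metric 200
echo ip route replace default via "$GW1" dev eth1 metric 100
```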
+
+You can use the previous script to create your own custom script specific to your networking scenario. Once the script is defined, save it and assign execute permissions. For example, if the script name is *route-setup.sh*, you can assign execute permissions using the command `sudo chmod +x route-setup.sh`. You can test that the script works correctly by executing it manually with `sudo sh ./route-setup.sh` and then checking the routing table using the `sudo route` command.
+
+The final step is to create a Linux service that runs on startup, and executes the bash script to set the routes. You'll have to create a *systemd* unit file to load the service. The following is an example of that file.
+
+```systemd
+[Unit]
+After=network.target
+
+[Service]
+Type=simple
+ExecStart=/bin/bash /home/iotedge-user/route-setup.sh
+
+[Install]
+WantedBy=default.target
+```
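For reference, the following sketch writes a unit like the one above and shows how it would be installed (a sketch under assumptions: the file name and the `/home/iotedge-user` path are illustrative, and `After=network.target` is used because systemd directive names are case-sensitive and target names carry the `.target` suffix). The unit is written to a temporary directory so the block is safe to run anywhere; the commented `systemctl` lines are what you'd run on the EFLOW VM:

```bash
# Write the unit file to a temporary directory for inspection
unit_dir=$(mktemp -d)
cat > "$unit_dir/route-setup.service" <<'EOF'
[Unit]
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /home/iotedge-user/route-setup.sh

[Install]
WantedBy=default.target
EOF

# On the EFLOW VM, install and enable it with (requires sudo):
#   sudo cp "$unit_dir/route-setup.service" /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now route-setup.service

# Sanity check: the unit defines exactly one ExecStart line
grep -c '^ExecStart=' "$unit_dir/route-setup.service"
# -> 1
```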
+
+To verify that the service works, reboot the EFLOW VM (`Stop-EflowVm` then `Start-EflowVm`), then run `Connect-EflowVm` to connect to the VM. List the routes using `sudo route` and verify they're correct. You should see the new *default* rules with the assigned metric.
+
+## Next steps
+
+Follow the steps in [How to configure networking for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md) to verify your networking configurations were applied correctly.
iot-hub Iot Hub How To Android Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-android-things.md
A device must be registered with your IoT hub before it can connect. In this qui
2. Run the following commands in Azure Cloud Shell to get the *device connection string* for the device you just registered. Replace `YourIoTHubName` below with the name you chose for your IoT hub.

   ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name YourIoTHubName --device-id MyAndroidThingsDevice --output table
+ az iot hub device-identity connection-string show --hub-name YourIoTHubName --device-id MyAndroidThingsDevice --output table
   ```

   Make a note of the device connection string, which looks like:
iot-hub Iot Hub Ios Swift C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ios-swift-c2d.md
Along with installing the pods required for your project, the installation comma
1. Retrieve the connection string for your device. You can copy this string from the [Azure portal](https://portal.azure.com) in the device details blade, or retrieve it with the following CLI command:

   ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id {YourDeviceID} --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id {YourDeviceID} --output table
   ```

2. Open the sample workspace in Xcode.
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
For more information about authentication to Key Vault, see [Authenticate to Azu
## Conditional access
-Key Vault provides support for Azure Azure Active Directory Conditional Access policies. By using Conditional Access policies, you can apply the right access controls to Key Vault when needed to keep your organization secure and stay out of your user's way when not needed.
+Key Vault provides support for Azure Active Directory Conditional Access policies. By using Conditional Access policies, you can apply the right access controls to Key Vault when needed to keep your organization secure and stay out of your user's way when not needed.
For more information, see [Conditional Access overview](../../active-directory/conditional-access/overview.md)
machine-learning Concept Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-ingestion.md
- Title: Data ingestion & automation -
-description: Learn the pros and cons of the available data ingestion options for training your machine learning models.
------- Previously updated : 10/21/2021----
-# Data ingestion options for Azure Machine Learning workflows
-
-In this article, you learn the pros and cons of data ingestion options available with Azure Machine Learning.
-
-Choose from:
-+ [Azure Data Factory](#azure-data-factory) pipelines, specifically built to extract, load, and transform data
-
-+ [Azure Machine Learning Python SDK](#azure-machine-learning-python-sdk), providing a custom code solution for data ingestion tasks.
-
-+ a combination of both
-
-Data ingestion is the process in which unstructured data is extracted from one or multiple sources and then prepared for training machine learning models. The process is time intensive, especially if done manually and if you have large amounts of data from multiple sources. Automating this effort frees up resources and ensures your models use the most recent and applicable data.
-
-> [!Important]
-> Azure Machine Learning doesn't store or process your data outside of the region where you deploy.
->
-
-## Azure Data Factory
-
-[Azure Data Factory](../data-factory/introduction.md) offers native support for data source monitoring and triggers for data ingestion pipelines.
-
-The following table summarizes the pros and cons for using Azure Data Factory for your data ingestion workflows.
-
-|Pros|Cons
---|---
-Specifically built to extract, load, and transform data.|Currently offers a limited set of Azure Data Factory pipeline tasks
-Allows you to create data-driven workflows for orchestrating data movement and transformations at scale.|Expensive to construct and maintain. See Azure Data Factory's [pricing page](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) for more information.
-Integrated with various Azure tools like [Azure Databricks](../data-factory/transform-data-using-databricks-notebook.md) and [Azure Functions](../data-factory/control-flow-azure-function-activity.md) | Doesn't natively run scripts, instead relies on separate compute for script runs
-Natively supports data source triggered data ingestion|
-Data preparation and model training processes are separate.|
-Embedded data lineage capability for Azure Data Factory dataflows|
-Provides a low code experience [user interface](../data-factory/quickstart-create-data-factory-portal.md) for non-scripting approaches |
-
-These steps and the following diagram illustrate Azure Data Factory's data ingestion workflow.
-
-1. Pull the data from its sources
-1. Transform and save the data to an output blob container, which serves as data storage for Azure Machine Learning
-1. With prepared data stored, the Azure Data Factory pipeline invokes a training Machine Learning pipeline that receives the prepared data for model training
--
- ![ADF Data ingestion](media/concept-data-ingestion/data-ingest-option-one.svg)
-
-Learn how to build a data ingestion pipeline for Machine Learning with [Azure Data Factory](how-to-data-ingest-adf.md).
-
-## Azure Machine Learning Python SDK
-
-With the [Python SDK](/python/api/overview/azure/ml), you can incorporate data ingestion tasks into an [Azure Machine Learning pipeline](./how-to-create-machine-learning-pipelines.md) step.
-
-The following table summarizes the pros and cons for using the SDK and an ML pipeline step for data ingestion tasks.
-
-Pros| Cons
---|---
-Configure your own Python scripts | Does not natively support data source change triggering. Requires Logic App or Azure Function implementations
-Data preparation as part of every model training execution|Requires development skills to create a data ingestion script
-Supports data preparation scripts on various compute targets, including [Azure Machine Learning compute](concept-compute-target.md#azure-machine-learning-compute-managed) |Does not provide a user interface for creating the ingestion mechanism
-
-In the following diagram, the Azure Machine Learning pipeline consists of two steps: data ingestion and model training. The data ingestion step encompasses tasks that can be accomplished using Python libraries and the Python SDK, such as extracting data from local/web sources, and data transformations, like missing value imputation. The training step then uses the prepared data as input to your training script to train your machine learning model.
-
-![Azure pipeline + SDK data ingestion](media/concept-data-ingestion/data-ingest-option-two.png)
-
-## Next steps
-
-Follow these how-to articles:
-* [Build a data ingestion pipeline with Azure Data Factory](how-to-data-ingest-adf.md)
-
-* [Automate and manage data ingestion pipelines with Azure Pipelines](how-to-cicd-data-ingestion.md).
machine-learning Concept Open Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-open-source.md
- Title: Open-source machine learning integration-
-description: Learn how to use open-source Python machine learning frameworks to train, deploy, and manage end-to-end machine learning solutions in Azure Machine Learning.
------- Previously updated : 11/04/2021--
-# Use open-source machine learning libraries and platforms with Azure Machine Learning
-
-In this article, learn about open-source Python machine learning libraries and platforms you can use with Azure Machine Learning. Train, deploy, and manage the end-to-end machine learning process using open source projects you prefer. Use development tools, like Jupyter Notebooks and Visual Studio Code, to leverage your existing models and scripts in Azure Machine Learning.
-
-## Train open-source machine learning models
-
-The machine learning training process involves the application of algorithms to your data in order to achieve a task or solve a problem. Depending on the problem, you may choose different algorithms that best fit the task and your data. For more information on what you can solve with machine learning, see the [deep learning vs machine learning article](./concept-deep-learning-vs-machine-learning.md) and the [machine learning algorithm cheat sheet](algorithm-cheat-sheet.md).
-
-### Classical machine learning: scikit-learn
-
-For training tasks involving classical machine learning algorithms such as classification, clustering, and regression, you might use something like scikit-learn. To learn how to train a flower classification model, see the [how to train with scikit-learn article](how-to-train-scikit-learn.md).
-
-### Neural networks: PyTorch, TensorFlow, Keras
-
-Neural networks, a subset of machine learning algorithms, are useful for training deep learning models in Azure Machine Learning.
-
-Open-source deep learning frameworks and how-to guides include:
-
- * [PyTorch](https://github.com/pytorch/pytorch): [Train a deep learning image classification model using transfer learning](how-to-train-pytorch.md)
- * [TensorFlow](https://github.com/tensorflow/tensorflow): [Recognize handwritten digits using TensorFlow](how-to-train-tensorflow.md)
- * [Keras](https://github.com/keras-team/keras): [Build a neural network to analyze images using Keras](how-to-train-keras.md)
-
-### Transfer learning
-
-Training a deep learning model from scratch often requires large amounts of time, data, and compute resources. You can shortcut the training process by using transfer learning. Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. This means you can take an existing model and repurpose it. See the [deep learning vs machine learning article](concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) to learn more about transfer learning.
-
-### Reinforcement learning: Ray RLLib
-
-Reinforcement learning is an artificial intelligence technique that trains models using actions, states, and rewards: Reinforcement learning agents learn to take a set of predefined actions that maximize the specified rewards based on the current state of their environment.
-
-The [Ray RLLib](https://github.com/ray-project/ray) project has a set of features that allow for high scalability throughout the training process. The iterative process is both time- and resource-intensive as reinforcement learning agents try to learn the optimal way of achieving a task. Ray RLLib also natively supports deep learning frameworks like TensorFlow and PyTorch.
-
-To learn how to use Ray RLLib with Azure Machine Learning, see the [how to train a reinforcement learning model](how-to-use-reinforcement-learning.md).
-
-### Monitor model performance: TensorBoard
-
-Training a single model or multiple models requires the visualization and inspection of desired metrics to make sure the model performs as expected. You can [use TensorBoard in Azure Machine Learning to track and visualize experiment metrics](./how-to-monitor-tensorboard.md).
-
-## Responsible AI: Privacy and fairness
-
-### Preserve data privacy with differential privacy
-
-To train a machine learning model, you need data. Sometimes that data is sensitive, and it's important to make sure that the data is secure and private. Differential privacy is a technique of preserving the confidentiality of information in a dataset. To learn more, see the article on [preserving data privacy](concept-differential-privacy.md).
-
-Open-source differential privacy toolkits like [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core-python) help you [preserve the privacy of data](how-to-differential-privacy.md) in Azure Machine Learning solutions.
-
-### Frameworks for interpretable and fair models
-
-Machine learning systems are used in different areas of society such as banking, education, and healthcare. As such, it's important for these systems to be accountable for the predictions and recommendations they make to prevent unintended consequences.
-
-Open-source frameworks like [InterpretML](https://github.com/interpretml/interpret/) and [Fairlearn](https://github.com/fairlearn/fairlearn) work with Azure Machine Learning to create more transparent and equitable machine learning models.
-
-For more information on how to build fair and interpretable models, see the following articles:
-
-- [Model interpretability in Azure Machine Learning](how-to-machine-learning-interpretability.md)
-- [Interpret and explain machine learning models](how-to-machine-learning-interpretability-aml.md)
-- [Explain AutoML models](how-to-machine-learning-interpretability-automl.md)
-- [Mitigate fairness in machine learning models](concept-fairness-ml.md)
-- [Use Azure Machine Learning to assess model fairness](how-to-machine-learning-fairness-aml.md)
-
-## Model deployment
-
-Once models are trained and ready for production, you have to choose how to deploy them. Azure Machine Learning provides various deployment targets. For more information, see the [where and how to deploy article](./how-to-deploy-and-where.md).
-
-### Standardize model formats with ONNX
-
-After training, the contents of the model such as learned parameters are serialized and saved to a file. Each framework has its own serialization format. When working with different frameworks and tools, it means you have to deploy models according to the framework's requirements. To standardize this process, you can use the Open Neural Network Exchange (ONNX) format. ONNX is an open-source format for artificial intelligence models. ONNX supports interoperability between frameworks. This means you can train a model in one of the many popular machine learning frameworks like PyTorch, convert it into ONNX format, and consume the ONNX model in a different framework like ML.NET.
-
-For more information on ONNX and how to consume ONNX models, see the following articles:
-
-- [Create and accelerate ML models with ONNX](concept-onnx.md)
-- [Use ONNX models in .NET applications](how-to-use-automl-onnx-model-dotnet.md)
-
-### Package and deploy models as containers
-
-Container technologies such as Docker are one way to deploy models as web services. Containers provide a platform- and resource-agnostic way to build and orchestrate reproducible software environments. With these core technologies, you can use [preconfigured environments](./how-to-use-environments.md), [preconfigured container images](./how-to-deploy-custom-container.md), or custom ones to deploy your machine learning models to targets such as [Kubernetes clusters](./v1/how-to-deploy-azure-kubernetes-service.md?tabs=python). For GPU-intensive workflows, you can use tools like NVIDIA Triton Inference Server to [make predictions using GPUs](how-to-deploy-with-triton.md?tabs=python).
-
-### Secure deployments with homomorphic encryption
-
-Securing deployments is an important part of the deployment process. To [deploy encrypted inferencing services](how-to-homomorphic-encryption-seal.md), use the `encrypted-inference` open-source Python library, which provides bindings based on [Microsoft SEAL](https://github.com/Microsoft/SEAL), a homomorphic encryption library.
-
-## Machine learning operations (MLOps)
-
-Machine Learning Operations (MLOps), commonly thought of as DevOps for machine learning, allows you to build more transparent, resilient, and reproducible machine learning workflows. See the [what is MLOps article](./concept-model-management-and-deployment.md) to learn more about MLOps.
-
-Using DevOps practices like continuous integration (CI) and continuous deployment (CD), you can automate the end-to-end machine learning lifecycle and capture governance data around it. You can define your [machine learning CI/CD pipeline in GitHub Actions](./how-to-github-actions-machine-learning.md) to run Azure Machine Learning training and deployment tasks.
-
-Capturing software dependencies, metrics, metadata, and data and model versions is an important part of the MLOps process in order to build transparent, reproducible, and auditable pipelines. For this task, you can [use MLflow in Azure Machine Learning](how-to-use-mlflow.md) as well as when [training machine learning models in Azure Databricks](./how-to-use-mlflow-azure-databricks.md). You can also [deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning Concept Optimize Data Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-optimize-data-processing.md
- Title: Optimize data processing-
-description: Learn best practices for optimizing data processing speeds and what integrations Azure Machine Learning supports for data processing at scale.
------- Previously updated : 10/21/2021-
-#Customer intent: As a data scientist I want to optimize data processing speeds at scale
--
-# Optimize data processing with Azure Machine Learning
-
-In this article, you learn about best practices to help you optimize data processing speeds locally and at scale.
-
-Azure Machine Learning is integrated with open-source packages and frameworks for data processing. By using these integrations and applying the best practice recommendations in this article, you can improve your data processing speeds both locally and at scale.
-
-## Parquet and CSV file formats
-
-Comma-separated values (CSV) files are common file formats for data processing. However, Parquet file formats are recommended for machine learning tasks.
-
-[Parquet files](https://parquet.apache.org/) store data in a binary columnar format. This format is useful if splitting up the data into multiple files is needed. Also, this format allows you to target the relevant fields for your machine learning experiments. Instead of having to read in a 20-GB data file, you can decrease that data load, by selecting the necessary columns to train your ML model. Parquet files can also be compressed to minimize processing power and take up less space.
-
-CSV files are commonly used to import and export data, since they're easy to edit and read in Excel. The data in CSVs is stored as strings in a row-based format, and the files can be compressed to lessen data transfer loads. Uncompressed CSVs can expand in memory by a factor of about 2-10, and compressed CSVs can increase even further. So a 5-GB CSV can expand to well over the 8 GB of RAM you have on your machine. This expansion behavior may increase data transfer latency, which isn't ideal if you have large amounts of data to process.
-
-## Pandas dataframe
-
-[Pandas dataframes](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html) are commonly used for data manipulation and analysis. `Pandas` works well for data sizes less than 1 GB, but processing times for `pandas` dataframes slow down when file sizes reach about 1 GB. This slowdown is because the size of your data in storage isn't the same as the size of data in a dataframe. For instance, data in CSV files can expand up to 10 times in a dataframe, so a 1-GB CSV file can become 10 GB in a dataframe.
-
-`Pandas` is single threaded, meaning operations are done one at a time on a single CPU. You can easily parallelize workloads to multiple virtual CPUs on a single Azure Machine Learning compute instance with packages like [Modin](https://modin.readthedocs.io/en/latest/) that wrap `Pandas` using a distributed backend.
-
-To parallelize your tasks with `Modin` and [Dask](https://dask.org), just change this line of code `import pandas as pd` to `import modin.pandas as pd`.
-
-## Dataframe: out of memory error
-
-Typically an *out of memory* error occurs when your dataframe expands above the available RAM on your machine. This concept also applies to a distributed framework like `Modin` or `Dask`. That is, your operation attempts to load the dataframe in memory on each node in your cluster, but not enough RAM is available to do so.
-
-One solution is to increase your RAM to fit the dataframe in memory. We recommend a compute target whose RAM is at least two times the size of your data. So if your dataframe is 10 GB, use a compute target with at least 20 GB of RAM to ensure that the dataframe can comfortably fit in memory and be processed.
-
-For multiple virtual CPUs (vCPUs), keep in mind that you want one partition to comfortably fit into the RAM available to each vCPU on the machine. That is, if you have 16 GB of RAM and 4 vCPUs, you want about 2-GB dataframe partitions per vCPU.
-
-### Local vs remote
-
-You may notice certain pandas dataframe commands perform faster when working on your local PC versus a remote VM you provisioned with Azure Machine Learning.
-Your local PC typically has a page file enabled, which allows you to load more than what fits in physical memory; that is, your hard drive is used as an extension of your RAM. Currently, Azure Machine Learning VMs run without a page file, and therefore can only load as much data as fits in the physical RAM available.
-
-For compute-heavy jobs, we recommend you pick a larger VM to improve processing speeds.
-
-Learn more about the [available VM series and sizes](concept-compute-target.md#supported-vm-series-and-sizes) for Azure Machine Learning.
-
-For RAM specifications, see the corresponding VM series pages such as, [Dv2-Dsv2 series](../virtual-machines/dv2-dsv2-series-memory.md) or [NC series](../virtual-machines/nc-series.md).
-
-### Minimize CPU workloads
-
-If you can't add more RAM to your machine, you can apply the following techniques to help minimize CPU workloads and optimize processing times. These recommendations pertain to both single and distributed systems.
-
-Technique | Description
--|-
-Compression | Use a different representation for your data, in a way that uses less memory and doesn't significantly impact the results of your calculation.<br><br>*Example:* Instead of storing entries as a string with about 10 bytes or more per entry, store them as a boolean, True or False, which you could store in 1 byte.
-Chunking | Load data into memory in subsets (chunks), processing the data one subset at a time, or multiple subsets in parallel. This method works best if you need to process all the data, but don't need to load all the data into memory at once. <br><br>*Example:* Instead of processing a full year's worth of data at once, load and process the data one month at a time.
-Indexing | Apply and use an index, a summary that tells you where to find the data you care about. Indexing is useful when you only need to use a subset of the data, instead of the full set<br><br>*Example:* If you have a full year's worth of sales data sorted by month, an index helps you quickly search for the desired month that you wish to process.
-
-## Scale data processing
-
-If the previous recommendations aren't enough, and you can't get a virtual machine that fits your data, you can:
-
-* Use a framework like `Spark` or `Dask` to process the data 'out of memory'. In this option, the dataframe is loaded into RAM partition by partition and processed, with the final result being gathered at the end.
-
-* Scale out to a cluster using a distributed framework. In this option, data processing loads are split up and processed on multiple CPUs that work in parallel, with the final result gathered at the end.
-
-### Recommended distributed frameworks
-
-The following table recommends distributed frameworks that are integrated with Azure Machine Learning based on your code preference or data size.
-
-Experience or data size | Recommendation
---|---
-If you're familiar with `Pandas`| `Modin` or `Dask` dataframe
-If you prefer `Spark` | `PySpark`
-For data less than 1 GB | `Pandas` locally **or** a remote Azure Machine Learning compute instance
-For data larger than 10 GB| Move to a cluster using `Ray`, `Dask`, or `Spark`
-
-> [!TIP]
-> Load your dataset into a Dask dataframe with the [to_dask_dataframe()](/python/api/azureml-core/azureml.data.tabulardataset#to-dask-dataframe-sample-size-10000--dtypes-none--on-error--nullout-of-range-datetime--null--) method for large scale data processing. This method is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-
-## Next steps
-
-* [Data ingestion options with Azure Machine Learning](concept-data-ingestion.md).
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## July 28, 2022
+[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.07.19`
+
+Main changes:
+
+- Updated `Azure CLI` to version `2.38.0`
+- Updated `Node.js` to version `v16.16.0`
+- Updated `Scala` to version `2.12.15`
+- Updated `Spark` to version `3.2.2`
+- `MMLSpark` notebook features `v0.10.0`
+- 4 additional R libraries: [janitor](https://cran.r-project.org/web/packages/janitor/index.html), [skimr](https://cran.r-project.org/web/packages/skimr/index.html), [palmerpenguins](https://cran.r-project.org/web/packages/palmerpenguins/index.html) and [doParallel](https://cran.r-project.org/web/packages/doParallel/index.html)
+- Added new AzureML Environment `azureml_310_sdkv2`
+
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version `22.07.18`
+
+Main changes:
+
+- General OS level updates
+
## July 11, 2022

[Data Science VM – Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview) and [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
Version `22.04.27`
Main changes:

- `Plotly` and `summarytools` R studio extensions runtime import fix.
-- `Cudatoolkit` and `CUDNN` upgraded to 13.1 and 2.8.1 respectively.
-- Fix Python 3.8 - AzureML notebook run, pinned `matplotlib` to 3.2.1 and cycler to 0.11.0 packages in `Azureml_py38` environment.
+- `Cudatoolkit` and `CUDNN` upgraded to `13.1` and `2.8.1` respectively.
+- Fix `Python 3.8` - AzureML notebook run, pinned `matplotlib` to `3.2.1` and `cycler` to `0.11.0` packages in `Azureml_py38` environment.
## April 26, 2022

[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
Version: `22.04.21`
Main changes:

- `Plotly` R studio extension patch.
-- Update `Rscript` env path to support latest R studio version 4.1.3.
+- Update `Rscript` env path to support latest R studio version `4.1.3`.
## April 14, 2022

New DSVM offering for [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) is currently live in the marketplace.
Version: `22.04.01`
Main changes:

-- Updated R environment - added libraries: Cluster, Devtools Factoextra, GlueHere, Ottr, Paletteer, Patchwork, Plotly, Rmd2jupyter, Scales, Statip, Summarytools, Tidyverse, Tidymodels and Testthat
-- Further `Log4j` vulnerability mitigation - although not used, we moved all `log4j` to version v2, we have removed old log4j jars1.0 and moved log4j version 2.0 jars.
-- Azure CLI to version 2.33.1
-- Fixed jupyterhub access issue using public ip address
+- Updated R environment - added libraries: `Cluster`, `Devtools Factoextra`, `GlueHere`, `Ottr`, `Paletteer`, `Patchwork`, `Plotly`, `Rmd2jupyter`, `Scales`, `Statip`, `Summarytools`, `Tidyverse`, `Tidymodels` and `Testthat`
+- Further `Log4j` vulnerability mitigation - although not used, we moved all `log4j` to version `v2`, we have removed old `log4j jars1.0` and moved `log4j` version 2.0 jars.
+- `Azure CLI` to version `2.33.1`
+- Fixed `jupyterhub` access issue using public ip address
- Redesign of Conda environments - we're continuing with alignment and refining the Conda environments, so we created:
  - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true), also containing the [AutoML](../concept-automated-ml.md) environment
- - `azureml_py38_PT_TF`: additional azureml_py38 environment, preinstalled with latest TensorFlow and PyTorch
- - `py38_default`: default system environment based on Python 3.8
+ - `azureml_py38_PT_TF`: additional `azureml_py38` environment, preinstalled with latest `TensorFlow` and `PyTorch`
+ - `py38_default`: default system environment based on `Python 3.8`
- We have removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
The Azure Machine Learning CLI v2 uses our new v2 API platform. New features suc
As mentioned in the previous section, there are two types of operations; with ARM and with the workspace. With the __legacy v1 API__, most operations used the workspace. With the v1 API, adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources.
-With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md).
+With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/2022-05-01/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md).
> [!TIP]
> * Public ARM operations do not surface data in your storage account on public networks.
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/com
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>"
```
-To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/workspaces/createorupdate), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
+To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2022-05-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
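For readers who prefer Python, the same PUT request can be sketched with the standard library alone. The body shape below is an assumption based on the fields named in this article (`location`, `vmSize`, `vmPriority`, `scaleSettings`); the request is only constructed, never sent:

```python
import json

# Placeholders mirror the article's YOUR-* replacements; nothing is sent here.
subscription_id = "YOUR-SUBSCRIPTION-ID"
resource_group = "YOUR-RESOURCE-GROUP"
workspace = "YOUR-WORKSPACE-NAME"
compute_name = "YOUR-COMPUTE-NAME"

# ARM endpoint for Machine Learning compute create-or-update.
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.MachineLearningServices"
    f"/workspaces/{workspace}/computes/{compute_name}"
    "?api-version=2022-05-01"
)

# Dedicated, single-node Standard_D1 that scales down after 30 idle minutes.
body = {
    "location": "eastus",
    "properties": {
        "computeType": "AmlCompute",
        "properties": {
            "vmSize": "Standard_D1",
            "vmPriority": "Dedicated",
            "scaleSettings": {
                "minNodeCount": 0,
                "maxNodeCount": 1,
                "nodeIdleTimeBeforeScaleDown": "PT30M",  # ISO 8601 duration
            },
        },
    },
}

payload = json.dumps(body)
print(url)
```

To actually issue the call, pass `payload` as the request body of an HTTP PUT with the `Authorization: Bearer <YOUR-ACCESS-TOKEN>` header, exactly as in the curl command below.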
```bash
curl -X PUT \
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Previously updated : 06/10/2022 Last updated : 07/25/2022 ms.devlang: azurecli
Run the code below to create an Azure Managed Grafana workspace.
| Parameter | Description | Example |
|--|--|--|
| --name | Choose a unique name for your new Managed Grafana instance. | *grafana-test* |
-| --location | Choose an Azure Region where Managed Grafana is available. | *eastus* |
+| --resource-group | Choose a resource group for your Managed Grafana instance. | *my-resource-group* |
```azurecli
az grafana create --name <managed-grafana-resource-name> --resource-group <resource-group-name>
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
Last updated 06/14/2020
-# Support matrix for physical server migration
+# Support matrix for migration of physical servers, AWS VMs, and GCP VMs
-This article summarizes support settings and limitations for migrating physical servers to Azure with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) . If you're looking for information about assessing physical servers for migration to Azure, review the [assessment support matrix](migrate-support-matrix-physical.md).
+This article summarizes support settings and limitations for migrating physical servers, AWS VMs, and GCP VMs to Azure with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool). If you're looking for information about assessing physical servers for migration to Azure, review the [assessment support matrix](migrate-support-matrix-physical.md).
## Migrating machines as physical
You can select up to 10 machines at once for replication. If you want to migrate
## Physical server requirements
-The table summarizes support for physical servers you want to migrate using agent-based migration.
+The table summarizes support for physical servers, AWS VMs, and GCP VMs that you want to migrate using agent-based migration.
**Support** | **Details** |
The table summarizes support for physical servers you want to migrate using agen
## Replication appliance requirements
-If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as an VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2016, and complies with the support requirements.
+If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as a VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2016 and complies with the support requirements.
- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements). - MySQL must be installed on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation).
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
Title: Azure network round-trip latency statistics | Microsoft Docs
description: Learn about round-trip latency statistics between Azure regions.
Last updated 06/08/2021

# Azure network round-trip latency statistics
Azure continuously monitors the latency (speed) of core areas of its network usi
The latency measurements are collected from ThousandEyes agents, hosted in Azure cloud regions worldwide, that continuously send network probes between themselves in 1-minute intervals. The monthly latency statistics are derived from averaging the collected samples for the month.
-## May 2021 round-trip latency figures
+## June 2022 round-trip latency figures
-The monthly average round-trip times between Azure regions for past 31 days (ending on May 31, 2021) are shown below. The following measurements are powered by [ThousandEyes](https://thousandeyes.com).
+The monthly P50 (50th percentile, or median) round-trip times between Azure regions for the past 30 days (ending on June 30, 2022) are shown below. The following measurements are powered by [ThousandEyes](https://thousandeyes.com).
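The P50 figure is simply the 50th percentile (the median) of the 1-minute probe samples collected over the period. A minimal Python illustration, using hypothetical round-trip-time samples:

```python
from statistics import median

# Hypothetical RTT samples (ms) between two regions, one per 1-minute probe.
samples_ms = [68.2, 67.9, 70.4, 69.1, 68.5, 71.0, 68.0]

# The published monthly P50 is the median of all samples for the period.
p50 = median(samples_ms)
print(round(p50, 1))  # -> 68.5
```

A median is preferred over a plain average here because it is robust to occasional outlier probes.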
-[![Azure inter-region latency statistics](media/azure-network-latency/azure-network-latency.png)](media/azure-network-latency/azure-network-latency.png#lightbox)
## Next steps
openshift Howto Aad App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-aad-app-configuration.md
To grant cluster admin access, the memberships in an Azure AD security group are
## Create an Azure AD app registration
-You can automatically create an Azure Active Directory (Azure AD) app registration client as part of creating the cluster by omitting the `--aad-client-app-id` flag to the `az openshift create` command. This tutorial shows you how to create the Azure AD app registration for completeness.
-
If your organization doesn't already have an Azure Active Directory (Azure AD) app registration to use as a service principal, follow these instructions to create one.

1. Open the [App registrations blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredAppsPreview) and click **+New registration**.
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/migration.md
Once you have your target cluster properly configured for your workload, [connec
## Delete your source cluster

Once you've confirmed that your Azure Red Hat OpenShift 4 cluster is properly set up, delete your Azure Red Hat OpenShift 3.11 cluster.
+```azurecli
+az aro delete --name $CLUSTER_NAME
+ --resource-group $RESOURCE_GROUP
+ [--no-wait]
+ [--yes]
```
-az openshift delete --name $CLUSTER_NAME
- --resource-group $RESOURCE_GROUP
- [--no-wait]
- [--subscription]
- [--yes]
-```
+
## Next steps

Check out Red Hat OpenShift documentation [here](https://docs.openshift.com/container-platform/4.6/welcome/index.html).
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
The ground station provides telemetry using Avro as a schema. The schema is belo
## Next steps

- [Event Hubs using Python Getting Started](../event-hubs/event-hubs-python-get-started-send.md)
-- [Azure Event Hubs client library for Python code samples](/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub/samples/async_samples)
+- [Azure Event Hubs client library for Python code samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub/samples/async_samples)
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-service-overview.md
After a consumer initiates a connection, the service provider can accept or reje
### Delete your service
-If the Private Link service is no longer in use, you can delete it. However, before your delete the service, ensure that there are no private endpoint connections associated with it. You can reject all connections and delete the service.
+If the Private Link service is no longer in use, you can delete it. However, before you delete the service, ensure that there are no private endpoint connections associated with it. You can reject all connections and delete the service.
## Properties
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Master data management (MDM) is a key pillar of any unified data governance solu
## What, why and how of MDM - Master Data Management?
-Many businesses today have large data estates that move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can be accidentally duplicated or fragment, and can fall out of date, so accuracy becomes a concern when using this data for analytics about your business.
+Many businesses today have large data estates that move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can be accidentally duplicated or fragmented, and can grow stale or out of date. Accuracy then becomes a concern when using this data to drive insights into your business.
-To protect the quality of data within an organization, master data management (MDM) arose as a discipline that creates a source of truth for enterprise data so that an organization can check and validate their key assets. These key assets, or master data assets, are critical records that provide context for a business. For example, master data might include information on specific products, employees, customers, financial structures, suppliers, or locations. Master data management ensures data quality across an entire organization by maintaining the quality of the master data records, and ensuring data remains consistent across their entire data estate.
+To protect the quality of data within an organization, master data management (MDM) arose as a discipline that creates a source of truth for enterprise data so that an organization can check and validate their key assets. These key assets, or master data assets, are critical records that provide context for a business. For example, master data might include information on specific products, employees, customers, financial structures, suppliers, or locations. Master data management ensures data quality across an entire organization by maintaining an authoritative consolidated de-duplicated set of the master data records, and ensuring data remains consistent across your organization's complete data estate.
-As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates all this differing information about the customer it into a single, standard format that can be used to check data across an organizations entire data estate.
-
-Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
+As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates all this differing information about the customer into a single, standard format that can be used to check data across an organization's entire data estate. Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
More details on [Profisee MDM](https://profisee.com/master-data-management-what-why-how-who/) and [Profisee-Purview MDM Concepts and Azure Architecture](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview).
-
-## Why Microsoft Purview chose Profisee for Master Data Management (MDM)
+## Microsoft Purview & Profisee Integrated MDM - Better Together!
### Profisee MDM: True SaaS experience
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Title: List of supported classifications
description: This page lists the supported system classifications in Microsoft Purview.
Last updated 09/27/2021
This article lists the supported system classifications in Microsoft Purview. To learn more about classification, see [Classification](concept-classification.md).
-Microsoft Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expression) and [Bloom Filter](https://wikipedia.org/wiki/Bloom_filter). The following lists describe the format, pattern, and keywords for the Microsoft Purview defined system classifications. Each classification name is prefixed by *MICROSOFT*.
+Microsoft Purview classifies data by using [RegEx](https://wikipedia.org/wiki/Regular_expression), [Bloom Filters](https://wikipedia.org/wiki/Bloom_filter), and machine learning models. The following lists describe the format, pattern, and keywords for the Microsoft Purview defined system classifications. Each classification name is prefixed by *MICROSOFT*.
> [!Note]
> Microsoft Purview can classify both structured (CSV, TSV, JSON, SQL Table etc.) as well as unstructured data (DOC, PDF, TXT etc.). However, there are certain classifications that are only applicable to structured data. Here is the list of classifications that Microsoft Purview doesn't apply on unstructured data - City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode
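To illustrate how RegEx-driven classification works in principle, here is a toy Python sketch. The patterns below are simplified stand-ins loosely modeled on two of the classifications named above; they are not Purview's actual system patterns:

```python
import re

# Illustrative patterns only - NOT the real Purview system classification patterns.
PATTERNS = {
    "U.S. ZipCode": re.compile(r"^\d{5}(-\d{4})?$"),
    "U.S. Phone Number": re.compile(r"^(\+1[ .-]?)?(\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}$"),
}

def classify(values):
    """Return the classification whose pattern matches every sampled value, if any."""
    for name, pattern in PATTERNS.items():
        if all(pattern.match(v) for v in values):
            return name
    return None

print(classify(["98052", "10001-0001"]))             # -> U.S. ZipCode
print(classify(["(425) 555-0100", "425-555-0199"]))  # -> U.S. Phone Number
print(classify(["not a zip"]))                       # -> None
```

A real scanner samples values from a column and applies a match-rate threshold rather than requiring every value to match, but the principle is the same.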
Microsoft Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_
The City, Country, and Place filters have been prepared using best datasets available for preparing the data.
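For intuition, a Bloom filter answers "possibly in the set" or "definitely not in the set" using a fixed bit array and several hash functions, which is what makes huge name lists cheap to query. A minimal Python sketch with a toy city list (illustrative only; the real filters are built from much larger curated datasets):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per value over a fixed-size bit set."""

    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = 0  # bit array stored as a single integer

    def _positions(self, value):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        # No false negatives; a small false-positive rate is possible.
        return all(self.bits & (1 << pos) for pos in self._positions(value))

cities = BloomFilter()
for name in ["Seattle", "London", "Mumbai"]:
    cities.add(name)

print(cities.might_contain("Seattle"))  # -> True
```

Because membership tests never produce false negatives, a value that fails the filter can be skipped immediately; only values that pass need any further checking.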
+## Machine Learning model-based classifications
## Person Name
-Person Name bloom filter has been prepared using the below two datasets.
-
-- [2010 US Census Data for Last Names (162-K entries)](https://www.census.gov/topics/population/genealogy/data/2010_surnames.html)
-- [Popular Baby Names (from SSN), using all years 1880-2019 (98-K entries)](https://www.ssa.gov/oact/babynames/limits.html)
+The Person Name machine learning model has been trained using global datasets of names in the English language.
> [!NOTE]
-> Microsoft Purview classifies columns only when the data contains first/last names. Microsoft Purview doesn't classify columns that contain full names.
+> Microsoft Purview classifies full names stored in the same column as well as first/last names in separate columns.
+
## RegEx Classifications
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
# Authorize access to a search app using Azure Active Directory

> [!IMPORTANT]
-> Role-based access control for data plane operations, such as creating an index or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). This functionality is only available in public cloud regions and may impact the latency of your operations while the functionality is in preview. For more information on preview limitations, see [RBAC preview limitations](search-security-rbac.md#preview-limitations).
+> Role-based access control for data plane operations, such as creating or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). This functionality is only available in public cloud regions and may impact the latency of your operations while the functionality is in preview. For more information on preview limitations, see [RBAC preview limitations](search-security-rbac.md#preview-limitations).
-Search applications that are built on Azure Cognitive Search can now use Azure Active Directory (Azure AD) and Azure role-based access (Azure RBAC) for authenticated and authorized access. A key advantage of using Azure AD is that your credentials and API keys no longer need to be stored in your code. Azure AD authenticates the security principal (a user, group, or service principal) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Cognitive Search. To learn more about the advantages of using Azure AD in your applications, see [Integrating with Azure Active Directory](../active-directory/develop/active-directory-how-to-integrate.md#benefits-of-integration).
+Search applications that are built on Azure Cognitive Search can now use the [Microsoft identity platform](../active-directory/develop/v2-overview.md) for authenticated and authorized access. On Azure, the identity provider is Azure Active Directory (Azure AD). A key [benefit of using Azure AD](../active-directory/develop/active-directory-how-to-integrate.md#benefits-of-integration) is that your credentials and API keys no longer need to be stored in your code. Azure AD authenticates the security principal (a user, group, or service) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Cognitive Search.
-This article will show you how to configure your application for authentication with the [Microsoft identity platform](../active-directory/develop/v2-overview.md) using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md). To learn more about the OAuth 2.0 code grant flow used by Azure AD, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
+This article shows you how to configure your client for Azure AD:
+
++ For authentication, you'll create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) as the security principal. You could also use a different type of service principal object, but this article uses managed identities because they eliminate the need to manage credentials.
+
++ For authorization, you'll assign an Azure role to the managed identity that grants permissions to run queries or manage indexing jobs.
+
++ Update your client code to call [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential).

## Prepare your search service
-As a first step, [create a search service](search-create-service-portal.md) and configure it to use Azure role-based access control (RBAC).
+As a first step, sign up for the preview and enable role-based access control (RBAC) on your [search service](search-create-service-portal.md).
### Sign up for the preview
-The parts of Azure Cognitive Search's RBAC capabilities required to use Azure AD for querying the search service are still in preview. To use these capabilities, you'll need to add the preview feature to your Azure subscription.
+RBAC for data plane operations is in preview. In this step, add the preview feature to your Azure subscription.
1. Open [Azure portal](https://portal.azure.com/) and find your search service
The parts of Azure Cognitive Search's RBAC capabilities required to use Azure AD
You can also sign up for the preview using Azure Feature Exposure Control (AFEC) and searching for *Role Based Access Control for Search Service (Preview)*. For more information on adding preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal).

> [!NOTE]
-> Once you add the preview to your subscription, all services in the subscription will be permanently enrolled in the preview. If you don't want RBAC on a given service, you can disable RBAC for data plane operations as shown in a later step.
+> Once you add the preview to your subscription, all search services in the subscription are permanently enrolled in the preview. If you don't want RBAC on a given service, you can disable RBAC for data plane operations as shown in a later step.
### Enable RBAC for data plane operations
Once your subscription is added to the preview, you'll still need to enable RBAC
1. On the left navigation pane, select **Keys**.
-1. Determine if you'd like to allow both key-based and role-based access control, or only role-based access control.
+1. Choose whether to allow both key-based and role-based access control, or only role-based access control.
:::image type="content" source="media/search-howto-aad/portal-api-access-control.png" alt-text="Screenshot of authentication options for azure cognitive search in the portal" border="true" :::
You can also change these settings programmatically as described in the [Azure Co
## Create a managed identity
-The next step to using Azure AD for authentication is to create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) if you don't have one already. You can also use a different type of service principal object, but this article will focus on managed identities because they eliminate the need to manage credentials.
-
-To create a manged identity:
+In this step, create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your client application.
1. Sign in to the [Azure portal](https://portal.azure.com).
To create a manged identity:
1. Give your managed identity a name and select a region. Then, select **Create**.
- :::image type="content" source="media/search-howto-aad/create-managed-identity.png" alt-text="Screenshot of the create managed identity wizard." border="true" :::
+ :::image type="content" source="media/search-howto-aad/create-managed-identity.png" alt-text="Screenshot of the Create Managed Identity wizard." border="true" :::
## Assign a role to the managed identity

Next, you need to grant your managed identity access to your search service. Azure Cognitive Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
-In general, it's best to give your application only the access required. For example, if your application only needs to be able to query the search index, you could grant it the [Search Index Data Reader (preview)](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs to be able to read and write to a search index, you could use the [Search Index Data Contributor (preview)](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
+It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader (preview)](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs both read and write access on a search index, you should use the [Search Index Data Contributor (preview)](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
-1. Open the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to your search service.
In general, it's best to give your application only the access required. For exa
For more information on the available roles, see [Built-in roles used in Search](search-security-rbac.md#built-in-roles-used-in-search).

> [!NOTE]
- > The Owner, Contributor, Reader, and Search Service Contributor roles don't give you access to the data within a search index so you can't query a search index or index data. To get access to the data within a search index, you need either the Search Index Data Contributor or Search Index Data Reader role.
+ > The Owner, Contributor, Reader, and Search Service Contributor roles don't give you access to the data within a search index, so you can't query a search index or index data using those roles. For data access to a search index, you need either the Search Index Data Contributor or Search Index Data Reader role.
1. On the **Members** tab, select the managed identity that you want to give access to your search service.

1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-You may want to give your managed identity multiple roles such as Search Service Contributor and Search Index Data Contributor if your application needs to both create indexes and query them.
+You can assign multiple roles, such as Search Service Contributor and Search Index Data Contributor, if your application needs comprehensive access to the search services, objects, and content.
You can also [assign roles using PowerShell](./search-security-rbac.md?tabs=config-svc-rest%2croles-powershell%2ctest-rest#step-3-assign-roles).

## Set up Azure AD authentication in your client
-Once you have a managed identity created and you've granted it permissions to access your search service, you're ready to add code to your application to authenticate the security principal and acquire an OAuth 2.0 token.
+Once you have a managed identity and a role assignment on the search service, you're ready to add code to your application to authenticate the security principal and acquire an OAuth 2.0 token.
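For illustration, the OAuth 2.0 client-credentials token request that underlies this flow looks roughly like the following sketch. The request is only constructed, never sent; tenant and client values are placeholders, the Cognitive Search scope shown is an assumption, and a managed identity actually obtains its token from the Azure instance metadata endpoint rather than this login endpoint:

```python
from urllib.parse import urlencode

# Placeholder values - a real client reads these from its environment.
tenant_id = "YOUR-TENANT-ID"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form body for the client-credentials grant (constructed but not sent).
form = urlencode({
    "grant_type": "client_credentials",
    "client_id": "YOUR-CLIENT-ID",
    "client_secret": "YOUR-CLIENT-SECRET",
    "scope": "https://search.azure.com/.default",  # assumed search scope
})

print(token_url)
```

In practice you should not hand-roll this request: the SDK credential types (such as `DefaultAzureCredential`) perform the token acquisition and caching for you.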
Azure AD authentication is also supported in the preview SDKs for [Java](https://search.maven.org/artifact/com.azure/azure-search-documents/11.5.0-beta.3/jar), [Python](https://pypi.org/project/azure-search-documents/11.3.0b3/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.3).
+> [!NOTE]
+> To learn more about the OAuth 2.0 code grant flow used by Azure AD, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
+ ### [**.NET SDK**](#tab/aad-dotnet) The Azure SDKs make it easy to integrate with Azure AD. Version [11.4.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.2) and newer of the .NET SDK support Azure AD authentication.
The following instructions reference an existing C# sample to demonstrate the co
> [!NOTE]
> User-assigned managed identities work only in Azure environments. If you run this code locally, `DefaultAzureCredential` will fall back to authenticating with your credentials. Make sure you've also given yourself the required access to the search service if you plan to run the code locally.
-The Azure.Identity documentation also has more details on using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme), which gives more details on how `DefaultAzureCredential` works as well as other authentication techniques available. `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
+The Azure.Identity documentation has more details about `DefaultAzureCredential` and using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme). `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
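The fallback behavior described above can be illustrated with a minimal sketch. This is plain Python modeling the idea only, not the Azure.Identity implementation: a chained credential tries each source in order and uses the first one that succeeds.

```python
# Simplified illustration of a DefaultAzureCredential-style chain.
# This is NOT the Azure.Identity implementation; it only models the
# "try each source in order, use the first that works" behavior.

class CredentialUnavailableError(Exception):
    pass

class ChainedCredential:
    def __init__(self, *sources):
        self.sources = sources

    def get_token(self, scope):
        errors = []
        for source in self.sources:
            try:
                return source(scope)  # first source that succeeds wins
            except CredentialUnavailableError as e:
                errors.append(str(e))
        raise CredentialUnavailableError("; ".join(errors))

def managed_identity(scope):
    # Outside Azure there is no managed identity endpoint, so this
    # source reports itself as unavailable.
    raise CredentialUnavailableError("managed identity not available")

def developer_credentials(scope):
    # Locally, fall back to the signed-in developer's identity.
    return f"dev-token-for-{scope}"

chain = ChainedCredential(managed_identity, developer_credentials)
token = chain.get_token("https://search.azure.com/.default")
```

In an Azure environment the first source would succeed and the developer fallback would never run; locally, the chain falls through, which mirrors the behavior the note above warns about.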
### [**REST API**](#tab/aad-rest)
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Azure provides a global [role-based access control (RBAC) authorization system](
+ Use new preview roles for data requests, including creating, loading, and querying indexes.
-Per-user access over search results (sometimes referred to as row-level security or document-level security) is not supported. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requestor should not have access.
+Per-user access over search results (sometimes referred to as row-level security or document-level security) isn't supported. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requestor shouldn't have access.
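The security-filter workaround can be sketched as follows. The `group_ids` field name is an assumption for illustration: each document carries a collection field listing the principals allowed to see it, and every query appends an OData filter built from the requester's identity.

```python
# Sketch of the security-trimming workaround. The field name "group_ids"
# is illustrative; any collection field holding allowed principals works.

def security_filter(user_group_ids):
    """Build an OData filter that keeps only documents whose group_ids
    field contains at least one of the requester's groups."""
    groups = ", ".join(user_group_ids)
    return f"group_ids/any(g: search.in(g, '{groups}'))"

# The filter string is sent with every query so the service removes
# documents the requester shouldn't see before returning results.
flt = security_filter(["group-a", "group-b"])
```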
## Built-in roles used in Search
Built-in roles include generally available and preview roles. If these roles are
| - | - | | [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default.</br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. </br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role does not give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search services indexes and other resources. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>This role doesn't allow access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role doesn't give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service and its objects, but without the ability to view or access object data. </br></br>Like Contributor, members of this role can't make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
If you can't save your selection, or if you get "API access control failed to up
Use the Management REST API version 2021-04-01-Preview, [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update), to configure your service.
-If you are using Postman or another web testing tool, see the Tip below for help on setting up the request.
+If you're using Postman or another web testing tool, see the Tip below for help on setting up the request.
1. Under "properties", set ["AuthOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey".
If you are using Postman or another web testing tool, see the Tip below for help
} ```
-1. [Assign roles](#step-3-assign-roles) on the service and verify they are working correctly against the data plane.
+1. [Assign roles](#step-3-assign-roles) on the service and verify they're working correctly against the data plane.
> [!TIP]
> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
Recall that you can only scope access to top-level resources, such as indexes, s
1. On the Overview page, select the **Indexes** tab:
- + Members of Search Index Data Reader can use Search Explorer to query the index. You can use any API version to check for access. You should be able to issue queries and view results, but you should not be able to view the index definition.
+ + Members of Search Index Data Reader can use Search Explorer to query the index. You can use any API version to check for access. You should be able to issue queries and view results, but you shouldn't be able to view the index definition.
+ Members of Search Index Data Contributor can select **New Index** to create a new index. Saving a new index will verify write access on the service.
For more information on how to acquire a token for a specific environment, see [
The Azure SDK for .NET supports an authorization header in the [NuGet Gallery | Azure.Search.Documents 11.4.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.2) package.
-Additional configuration is required to register an application with Azure Active Directory, and to obtain and pass authorization tokens.
+Configuration is required to register an application with Azure Active Directory, and to obtain and pass authorization tokens:
+ When obtaining the OAuth token, the scope is "https://search.azure.com/.default". The SDK requires the audience to be "https://search.azure.com". The ".default" is an Azure AD convention.
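As an illustration of that scope convention, here's a hedged Python sketch of the client-credentials token request. In practice MSAL or Azure.Identity builds and sends this for you; the tenant and client values are placeholders.

```python
from urllib.parse import urlencode

# Sketch of the OAuth 2.0 client-credentials token request for the
# search data plane. Placeholder tenant/client values; real apps should
# use MSAL or Azure.Identity instead of hand-building this request.

def build_token_request(tenant_id, client_id, client_secret):
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests the app's configured permissions for the
        # https://search.azure.com audience.
        "scope": "https://search.azure.com/.default",
    })
    return url, body

url, body = build_token_request("<tenant-id>", "<client-id>", "<secret>")
```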
var tokenCredential = new ClientSecretCredential(aadTenantId, aadClientId, aadS
SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, tokenCredential); ```
-More details about using [AAD authentication with the Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity) are available in the SDK's GitHub repo.
+More details about using [Azure AD authentication with the Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity) are available in the SDK's GitHub repo.
> [!NOTE]
> If you get a 403 error, verify that your search service is enrolled in the preview program and that your service is configured for preview role assignments.
The PowerShell example shows the JSON syntax for creating a custom role that's a
## Disable API key authentication
-API keys cannot be deleted, but they can be disabled on your service. If you are using the Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader preview roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all data-related requests that pass an API key in the header for content-related requests.
+API keys can't be deleted, but they can be disabled on your service. If you're using the Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader preview roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all data-related requests that pass an API key in the header for content-related requests.
To disable [key-based authentication](search-security-api-keys.md), use the Management REST API version 2021-04-01-Preview and send two consecutive requests for [Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
Owner or Contributor permissions are required to disable features. Use Postman o
} ```
-You cannot combine steps one and two. In step one, "disableLocalAuth" must be false to meet the requirements for setting "AuthOptions", whereas step two changes that value to true.
+You can't combine steps one and two. In step one, "disableLocalAuth" must be false to meet the requirements for setting "AuthOptions", whereas step two changes that value to true.
-To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service will resume acceptance of API keys on the request automatically (assuming they are specified).
+To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service will resume acceptance of API keys on the request automatically (assuming they're specified).
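The two consecutive Update Service payloads described above can be sketched as follows. Property names follow the 2021-04-01-preview management API discussed in this section; the nested `aadAuthFailureMode` value is an assumption for illustration, and this is a payload sketch, not a runnable management client.

```python
import json

# Sketch of the two consecutive Update Service request bodies.
# Step 1 must leave "disableLocalAuth" false while setting "authOptions";
# step 2 then flips "disableLocalAuth" to true. The steps can't be combined.

step_one = {
    "properties": {
        "disableLocalAuth": False,
        # "aadAuthFailureMode" value shown here is illustrative.
        "authOptions": {
            "aadOrApiKey": {"aadAuthFailureMode": "http401WithBearerChallenge"}
        },
    }
}

step_two = {"properties": {"disableLocalAuth": True}}

# To re-enable key authentication later, send disableLocalAuth: false again.
payloads = [json.dumps(step_one), json.dumps(step_two)]
```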
> [!TIP]
> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
To enable a Conditional Access policy for Azure Cognitive Search, follow the bel
1. Save the policy.

> [!IMPORTANT]
-> If your search service has a managed identity assigned to it, the specific search service will show up as a cloud app that can be included or excluded as part of the Conditional Access policy. Conditional Access policies cannot be enforced on a specific search service. Instead make sure you select the general **Azure Cognitive Search** cloud app.
+> If your search service has a managed identity assigned to it, the specific search service will show up as a cloud app that can be included or excluded as part of the Conditional Access policy. Conditional Access policies can't be enforced on a specific search service. Instead make sure you select the general **Azure Cognitive Search** cloud app.
security Secure Dev Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md
developers build more secure software. It helps you address security
compliance requirements while reducing development costs. [Open Web Application Security Project
-(OWASP)](https://www.owasp.org/index.php/Main_Page) ΓÇô OWASP is an online
+(OWASP)](https://www.owasp.org/) ΓÇô OWASP is an online
community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security.
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
We are excited to announce that 9.0 release of the Service Fabric runtime has st
- Introduced a property in Service Fabric runtime that can be set via SFRP as the ARM resource ID - Exposed application type provision timestamp - Support added for Service Fabric Resource Provider (SFRP) metadata to application type + version entities, starting with ARM resource ID
+- Windows Server 2022 is now supported as of the 9.0 CU2 release.
+- Mirantis Container runtime support on Windows for Service Fabric containers
+- The Microsoft Web Platform Installer (WebPI) used for installing Service Fabric SDK and Tools was retired on July 1, 2022.
### Service Fabric 9.0 releases | Release date | Release | More info | |||| | April 29, 2022 | [Azure Service Fabric 9.0](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-9-0-release/ba-p/3299108) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90.md)| | June 06, 2022 | [Azure Service Fabric 9.0 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-first-refresh-release/ba-p/3469489) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU1.md)|
+| July 14, 2022 | [Azure Service Fabric 9.0 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-second-refresh-release/ba-p/3575842) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU2.md)|
## Service Fabric 8.2
We are excited to announce that 8.2 release of the Service Fabric runtime has st
| December 16, 2021 | [Azure Service Fabric 8.2 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-first-refresh-release/ba-p/3040415) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU1.md)| | February 12, 2022 | [Azure Service Fabric 8.2 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-second-refresh-release/ba-p/3095454) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU2.md)| | June 06, 2022 | [Azure Service Fabric 8.2 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-third-refresh-release/ba-p/3469508) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU3.md)|
+| July 14, 2022 | [Azure Service Fabric 8.2 Fourth Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-8-2-fourth-refresh-release/ba-p/3575845) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU4.md)|
## Service Fabric 8.1
service-fabric Service Fabric Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-overview.md
Compared to virtual machines, containers have the following advantages:
Service Fabric supports the deployment of Docker containers on Linux, and Windows Server containers on Windows Server 2016 and later, along with support for Hyper-V isolation mode. Container runtimes compatible with Service Fabric:-- Mirantis Container Runtime-- Moby
+- Linux: Mirantis Container Runtime + Ubuntu
+- Windows: Mirantis Container Runtime + Windows Server 2019/2022
#### Docker containers on Linux
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
Ensure you are using a supported [Windows version](service-fabric-versions.md#su
Web Platform Installer (WebPI) is the recommended way to install the SDK and tools. If you receive runtime errors using WebPI, you can also find direct links to the installers in the release notes for a specific Service Fabric release. The release notes can be found in the various release announcements on the [Service Fabric team blog](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric).
-WebPI may already be installed on your computer. Search for it by using the Windows key. If it's not present, you can [download it here](https://www.microsoft.com/web/downloads/platform.aspx). The page mentions that the tool will be retired fairly soon (July 1, 2022), so you may need to look for alternative ways to install this after that date. Stay tuned!
+WebPI may already be installed on your computer. Search for it by using the Windows key. If it's not present, you can [download it here](https://www.microsoft.com/web/downloads/platform.aspx). WebPI can be used to download Service Fabric SDK releases prior to July 2022 until December 31, 2022, after which the WebPI feed and installer will be pulled from the Microsoft Download Center.
+
+For the latest runtime and SDK, use the download links below.
+
+- 9.0 CU2 runtime: https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.0.1048.9590.exe
+- 9.0 CU2 SDK: https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.0.1048.msi
+- 8.2 CU4 runtime: https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.8.2.1659.9590.exe
+- 8.2 CU4 SDK: https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.5.2.1659.msi
> [!NOTE]
> Local Service Fabric development cluster upgrades are not supported.
spring-cloud How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-github-actions.md
The following sections show you various options for deploying your app.
Azure Spring Apps supports deploying to deployments with built artifacts (e.g., JAR or .NET Core ZIP) or a source code archive. The following example deploys to the default production deployment in Azure Spring Apps using a JAR file built by Maven. This is the only possible deployment scenario when using the Basic SKU:
+> [!NOTE]
+> The package search pattern should only return exactly one package. If the build task produces multiple JAR packages such as *sources.jar* and *javadoc.jar*, you need to refine the search pattern so that it only matches the application binary artifact.
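The refinement described in the note can be sketched with a small filter: exclude classifier jars so exactly one artifact matches. This is a conceptual Python sketch; the file names are illustrative, and in a real workflow the pattern refinement happens in the deploy action's package search setting.

```python
import fnmatch

# Sketch of refining a package search pattern so only the application
# JAR matches, excluding -sources and -javadoc classifier jars that a
# Maven build may also produce. File names below are illustrative.

build_output = [
    "target/demo-0.0.1-SNAPSHOT.jar",
    "target/demo-0.0.1-SNAPSHOT-sources.jar",
    "target/demo-0.0.1-SNAPSHOT-javadoc.jar",
]

def select_artifact(files, pattern="target/*.jar",
                    exclude=("*-sources.jar", "*-javadoc.jar")):
    matches = [f for f in files
               if fnmatch.fnmatch(f, pattern)
               and not any(fnmatch.fnmatch(f, e) for e in exclude)]
    if len(matches) != 1:
        raise ValueError(f"pattern must match exactly one package, got {matches}")
    return matches[0]

artifact = select_artifact(build_output)
```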
+ ```yml name: AzureSpringCloud on: push
spring-cloud How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-prepare-app-deployment.md
The following table lists the supported Spring Boot and Spring Cloud combination
| Spring Boot version | Spring Cloud version | ||-|
-| 2.6.x | 2021.0.0+ |
-| 2.5.x | 2020.0+ aka Ilford+ |
+| 2.7.x | 2021.0.3+ aka Jubilee |
+| 2.6.x | 2021.0.0+ aka Jubilee |
### [Enterprise tier](#tab/enterprise-tier) | Spring Boot version | Spring Cloud version | ||-|
-| 2.6.x | 2021.0.0+ |
-| 2.5.x | 2020.0+ aka Ilford+ |
+| 2.7.x | 2021.0.3+ aka Jubilee |
+| 2.6.x | 2021.0.0+ aka Jubilee |
+| 2.5.x | 2020.3+ aka Ilford+ |
| 2.4.x | 2020.0+ aka Ilford+ | | 2.3.x | Hoxton (starting with SR5) |
For more information, see the following pages:
> - Upgrade Spring Boot to 2.5.2 or 2.4.8 to address the following CVE report [CVE-2021-22119: Denial-of-Service attack with spring-security-oauth2-client](https://tanzu.vmware.com/security/cve-2021-22119). If you're using Spring Security, upgrade it to 5.5.1, 5.4.7, 5.3.10 or 5.2.11. > - An issue was identified with Spring Boot 2.4.0 on TLS authentication between apps and Spring Cloud Service Registry. Use version 2.4.1 or above. If you must use version 2.4.0, see the [FAQ](./faq.md?pivots=programming-language-java#development) for a workaround.
-### Dependencies for Spring Boot version 2.4/2.5/2.6
+### Dependencies for Spring Boot version 2.4/2.5/2.6/2.7
For Spring Boot version 2.4/2.5, add the following dependencies to the application POM file.
For Spring Boot version 2.4/2.5, add the following dependencies to the applicati
</dependencyManagement> ```
-For Spring Boot version 2.6, add the following dependencies to the application POM file.
+For Spring Boot version 2.6/2.7, add the following dependencies to the application POM file.
```xml <!-- Spring Boot dependencies --> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId>
- <version>2.6.0</version>
+ <version>2.7.2</version>
</parent> <!-- Spring Cloud dependencies -->
For Spring Boot version 2.6, add the following dependencies to the application P
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId>
- <version>2021.0.0</version>
+ <version>2021.0.3</version>
<type>pom</type> <scope>import</scope> </dependency>
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
Title: Create a blob container with .NET - Azure Storage description: Learn how to create a blob container in your Azure Storage account using the .NET client library. -+ Previously updated : 03/28/2022- Last updated : 07/25/2022+ ms.devlang: csharp
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
No, this scenario isn't supported.
+* <a id="ad-file-mount-cname"></a>
+**Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication (AD DS or Azure AD DS)?**
+
+ No, this scenario isn't supported.
+ * <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?**
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
If you're new to Azure file shares, we recommend reading our [planning guide](st
## Supported scenarios and restrictions -- AD DS Identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a default share-level permission. Password hash synchronization is optional.
+- AD DS identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a default share-level permission. Password hash synchronization is optional.
- Supports Azure file shares managed by Azure File Sync. - Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 128 Kerberos encryption is not yet supported. - Supports single sign-on experience.
If you're new to Azure file shares, we recommend reading our [planning guide](st
- Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured, see the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details. - Does not support authentication against computer accounts created in AD DS. - Does not support authentication against Network File System (NFS) file shares.
+- Does not support using CNAME to mount file shares.
When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted either in on-premises machines or hosted in Azure.
stream-analytics Quick Create Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-cli.md
The following Azure CLI code blocks are commands that prepare the input data req
az iot hub device-identity create --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" ```
-3. Get the device connection string using the [az iot hub device-identity show-connection-string](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-show-connection-string) command. Copy the entire connection string and save it for when you create the Raspberry Pi simulator.
+3. Get the device connection string using the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string#az-iot-hub-device-identity-connection-string-show) command. Copy the entire connection string and save it for when you create the Raspberry Pi simulator.
```azurecli
- az iot hub device-identity show-connection-string --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" --output table
+ az iot hub device-identity connection-string show --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" --output table
``` **Output example:**
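The connection string returned by the command is a `;`-separated list of `key=value` pairs. A hedged Python sketch of splitting it into parts (the key shown is a made-up placeholder, not a real credential):

```python
# Sketch of splitting an IoT Hub device connection string into its parts.
# The SharedAccessKey value is a placeholder, not a real key.

def parse_connection_string(cs):
    # maxsplit=1 keeps base64 "=" padding inside the value intact.
    return dict(part.split("=", 1) for part in cs.split(";"))

example = ("HostName=MyASAIoTHub.azure-devices.net;"
           "DeviceId=MyASAIoTDevice;"
           "SharedAccessKey=<placeholder-key>")

parts = parse_connection_string(example)
```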
stream-analytics Stream Analytics Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-powershell.md
The following Azure CLI code block does many commands to prepare the input data
az iot hub device-identity create --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" ```
-4. Get the device connection string using the [az iot hub device-identity show-connection-string](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-show-connection-string) command. Copy the entire connection string and save it for when you create the Raspberry Pi simulator.
+4. Get the device connection string using the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string#az-iot-hub-device-identity-connection-string-show) command. Copy the entire connection string and save it for when you create the Raspberry Pi simulator.
```azurecli
- az iot hub device-identity show-connection-string --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" --output table
+ az iot hub device-identity connection-string show --hub-name "MyASAIoTHub" --device-id "MyASAIoTDevice" --output table
``` **Output example:**
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
To set up your virtual machine within the Azure portal host pool setup process:
> [!div class="mx-imgBorder"] > ![A screenshot of the Trusted Launch security features available to select from.](https://user-images.githubusercontent.com/62080236/146099320-24cbfdfe-c5ec-43c1-b6b9-ad02378c3709.png)
-6. Next, choose the image that needs to be used to create the virtual machine. You can choose either **Gallery** or **Storage blob**.
+6. Next, choose the image to use to create the virtual machine. You can choose **Gallery** for new host pool deployments; **Storage blob** is no longer available for new deployments.
- - If you choose **Gallery**, select one of the recommended images from the drop-down menu:
+ - After choosing **Gallery**, select a Windows image that meets your requirements from the drop-down menu:
- - Windows 10 Enterprise multi-session, Version 1909
- - Windows 10 Enterprise multi-session, Version 1909 + Microsoft 365 Apps
+ - Windows 11 Enterprise multi-session (Gen2)
+ - Windows 11 Enterprise multi-session + Microsoft 365 Apps (Gen2)
+ - Windows 10 Enterprise multi-session, Version 21H2 (Gen2)
+ - Windows 10 Enterprise multi-session, Version 21H2 + Microsoft 365 Apps (Gen2)
- Windows Server 2019 Datacenter
- - Windows 10 Enterprise multi-session, Version 2004
- - Windows 10 Enterprise multi-session, Version 2004 + Microsoft 365 Apps
+
If you don't see the image you want, select **See all images**, which lets you select either another image in your gallery or an image provided by Microsoft and other publishers. Make sure that the image you choose is one of the [supported OS images](prerequisites.md#operating-systems-and-licenses).
To set up your virtual machine within the Azure portal host pool setup process:
> [!div class="mx-imgBorder"] > ![A screenshot of the My Items tab.](media/my-items.png)
- - If you choose **Storage Blob**, you can use your own image build through Hyper-V or on an Azure VM. All you have to do is enter the location of the image in the storage blob as a URI.
+ - If you are adding a VM to a host pool that allows **Storage Blob**, you can use your own image built through Hyper-V or on an Azure VM. All you have to do is enter the location of the image in the storage blob as a URI.
The image's location is independent of the availability option, but the image's zone resiliency determines whether that image can be used with availability zones. If you select an availability zone while creating your image, make sure you're using an image from the gallery with zone resiliency enabled. To learn more about which zone resiliency option you should use, see [the FAQ](./faq.yml#which-availability-option-is-best-for-me-).
To set up your virtual machine within the Azure portal host pool setup process:
If you choose **Advanced**, select an existing network security group that you've already configured.
-12. After that, select whether you want the virtual machines to be joined to **Active Directory** or **Azure Active Directory** (Preview).
+12. After that, select whether you want the virtual machines to be joined to **Active Directory** or **Azure Active Directory**.
- For Active Directory, provide an account to join the domain and choose if you want to join a specific domain and organizational unit.
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
New-AzResourceGroup -Name 'myVMSSResourceGroup' -Location 'EastUS'
``` ## Create a virtual machine scale set
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/azcompute/new-azvmss). The following example creates a scale set with an instance count of *2* running Windows Server 2019 Datacenter edition.
+Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). The following example creates a scale set with an instance count of *2* running Windows Server 2019 Datacenter edition.
```azurepowershell-interactive New-AzVmss `
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
Previously updated : 04/26/2022 Last updated : 07/22/2022
The regions that a resource is replicated to can be updated after creation time.
![Graphic showing how you can replicate images](./media/shared-image-galleries/replication.png) <a name=community></a>
-## Community gallery (preview)
+## Sharing
+
+There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with:
+
+| Share with\: | Option |
+|-|-|
+| [Specific people, groups, or service principals](#rbac) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| [Subscriptions or tenants](#shared-directly-to-a-tenant-or-subscription) | Direct shared gallery (preview) lets you share to everyone in a subscription or tenant. |
+| [Everyone](#community-gallery) | Community gallery (preview) lets you share your entire gallery publicly, to all Azure users. |
+
+### RBAC
+
+As the Azure Compute Gallery, definition, and version are all resources, they can be shared using the built-in Azure role-based access control (RBAC) roles. Using Azure RBAC roles you can share these resources with other users, service principals, and groups. You can even share access with individuals outside of the tenant they were created in. Once a user has access to the image version, they can use it to deploy a VM or a Virtual Machine Scale Set. Here's the sharing matrix that helps you understand what the user gets access to:
+
+| Shared with User | Azure Compute Gallery | Image Definition | Image version |
+|-|-|--|-|
+| Azure Compute Gallery | Yes | Yes | Yes |
+| Image Definition | No | Yes | Yes |
+
+We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
+
+For more information, see [Share using RBAC](./share-gallery.md).
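As an illustration only (the resource group, gallery name, and sign-in name below are placeholders), granting gallery-level read access with the Azure CLI can look like this:

```azurecli-interactive
# Look up the gallery's resource ID (placeholder names).
galleryId=$(az sig show \
    --resource-group myResourceGroup \
    --gallery-name myGallery \
    --query id --output tsv)

# Grant a user read access to the whole gallery (the recommended level).
az role assignment create \
    --role "Reader" \
    --assignee "user@contoso.com" \
    --scope "$galleryId"
```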
++
+### Shared directly to a tenant or subscription
+
+Give specific subscriptions or tenants access to a direct shared Azure Compute Gallery. Sharing a gallery with tenants and subscriptions gives them read-only access to your gallery. For more information, see [Share a gallery with subscriptions or tenants](./share-gallery-direct.md).
> [!IMPORTANT]
-> Azure Compute Gallery – community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
>
-> To share images in the community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs and scale sets from images shared the community gallery is open to all Azure users.
+> During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+>
+> You can't currently create a Flexible virtual machine scale set from an image shared to you by another tenant.
+#### Limitations
-Sharing images to the community is a new capability in Azure Compute Gallery. In the preview, you can make your image galleries public, and share them to all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
+During the preview:
+- You can only share to subscriptions that are also in the preview.
+- You can only share to 30 subscriptions and 5 tenants.
+- A direct shared gallery can't contain encrypted image versions; encrypted images can't be created within a gallery that is shared directly.
+- Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can enable group-based sharing.
+- You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
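As a sketch only (the gallery name, resource group, and IDs below are placeholders, and the preview command surface may change; see the linked article for the authoritative steps), sharing a direct shared gallery with a subscription and a tenant from the CLI can look like this:

```azurecli-interactive
# Share the gallery with a specific subscription and tenant (preview).
az sig share add \
    --gallery-name myGallery \
    --resource-group myResourceGroup \
    --subscription-ids "00000000-0000-0000-0000-000000000000" \
    --tenant-ids "11111111-1111-1111-1111-111111111111"
```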
-### Why share to the community?
+### Community gallery
+
+To share a gallery with all Azure users, you can create a community gallery (preview). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
+
+Sharing images to the community is a new capability in [Azure Compute Gallery](./azure-compute-gallery.md). In the preview, you can make your image galleries public, and share them to all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
+
+For more information, see [Share images using a community gallery](./share-gallery-community.md).
++
+> [!IMPORTANT]
+> Azure Compute Gallery – community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+>
+> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you can't currently migrate a regular gallery to a community gallery.
+>
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
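For example, once the gallery has been created with `--permissions community`, making it public is a single CLI call (the gallery and resource group names are placeholders):

```azurecli-interactive
az sig share enable-community \
    --gallery-name myGallery \
    --resource-group myResourceGroup
```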
++
+#### Why share to the community?
As a content publisher, you might want to share a gallery to the community:
As a content publisher, you might want to share a gallery to the community:
- You don't want to deal with the complexity of multi-tenant authentication when sharing with multiple tenants on Azure.
-### How sharing with the community works
+#### How sharing with the community works
-You [create a gallery resource](create-gallery.md#create-a-community-gallery-preview) under `Microsoft.Compute/Galleries` and choose `community` as a sharing option.
+You [create a gallery resource](create-gallery.md#create-a-community-gallery) under `Microsoft.Compute/Galleries` and choose `community` as a sharing option.
When you are ready, you flag your gallery as ready to be shared publicly. Only the owner of a subscription, or a user or service principal with the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can enable a gallery to go public to the community. At this point, the Azure infrastructure creates proxy read-only regional resources, under `Microsoft.Compute/CommunityGalleries`, which are public.
Information from your image definitions will also be publicly available, like wh
> If you stop sharing your gallery during the preview, you won't be able to re-share it.
-### Limitations for images shared to the community
+#### Limitations for images shared to the community
There are some limitations for sharing your gallery to the community: - Encrypted images aren't supported.
There are some limitations for sharing your gallery to the community:
> [!IMPORTANT] > Microsoft does not provide support for images you share to the community.
-### Community-shared images FAQ
+#### Community-shared images FAQ
**Q: What are the charges for using a gallery that is shared to the community?**
There are some limitations for sharing your gallery to the community:
**A**: Only the content publishers have control over the regions their images are available in. If you don't find an image in a specific region, reach out to the publisher directly.
-## Explicit sharing using RBAC roles
-
-As the Azure Compute Gallery, definition, and version are all resources, they can be shared using the built-in native Azure Roles-based Access Control (RBAC) roles. Using Azure RBAC roles you can share these resources to other users, service principals, and groups. You can even share access to individuals outside of the tenant they were created within. Once a user has access to the resource version, they can use it to deploy a VM or a Virtual Machine Scale Set. Here is the sharing matrix that helps understand what the user gets access to:
-
-| Shared with User | Azure Compute Gallery | Image Definition | Image version |
-|-|-|--|-|
-| Azure Compute Gallery | Yes | Yes | Yes |
-| Image Definition | No | Yes | Yes |
-
-We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
- ## Activity Log The [Activity log](../azure-monitor/essentials/activity-log.md) displays recent activity on the gallery, image, or version including any configuration changes and when it was created and deleted. View the activity log in the Azure portal, or create a [diagnostic setting to send it to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace), where you can view events over time or analyze them with other collected data
There is no extra charge for using the Azure Compute Gallery service. You will b
For example, let's say you have an image of a 127 GB OS disk, that only occupies 10GB of storage, and one empty 32 GB data disk. The occupied size of each image would only be 10 GB. The image is replicated to 3 regions and each region has two replicas. There will be six total snapshots, each using 10GB. You will be charged the storage cost for each snapshot based on the occupied size of 10 GB. You will pay network egress charges for the first replica to be copied to the additional two regions. For more information on the pricing of snapshots in each region, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
+## Best practices
+
+- To prevent images from being accidentally deleted, use resource locks at the Gallery level. For more information, see [Protect your Azure resources with a lock](../azure-resource-manager/management/lock-resources.md).
+
+- Use ZRS wherever available for high availability. You can configure ZRS on the replication tab when you create a version of the image or VM application.
+ For more information about which regions support ZRS, see [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+
+- Keep a minimum of 3 replicas for production images. For every 20 VMs that you create concurrently, we recommend you keep one replica. For example, if you create 1,000 VMs concurrently, you should keep 50 replicas (you can have a maximum of 50 replicas per region). To update the replica count, go to the gallery > **Image Definition** > **Image Version** > **Update replication**.
+
+- Maintain separate galleries for production and test images; don't put them in a single gallery.
+
+- When creating an image definition, keep the Publisher/Offer/SKU consistent with Marketplace images to easily identify OS versions. For example, if you're customizing a Windows Server 2019 image from the Marketplace and storing it as a Compute Gallery image, use the same Publisher/Offer/SKU that the Marketplace image uses.
+
+- Use `excludeFromLatest` when publishing images if you want to exclude a specific image version during VM or scale set creation. For more information, see [Gallery Image Versions - Create Or Update](/rest/api/compute/gallery-image-versions/create-or-update#galleryimageversionpublishingprofile).
+
+ If you want to exclude a version in a specific region, use `regionalExcludeFromLatest` instead of the global `excludeFromLatest`. You can set both the global and regional `excludeFromLatest` flags, but the regional flag takes precedence when both are specified.
+
+ ```
+ "publishingProfile": {
+ "targetRegions": [
+ {
+ "name": "brazilsouth",
+ "regionalReplicaCount": 1,
+ "regionalExcludeFromLatest": false,
+ "storageAccountType": "Standard_LRS"
+ },
+ {
+ "name": "canadacentral",
+ "regionalReplicaCount": 1,
+ "regionalExcludeFromLatest": true,
+ "storageAccountType": "Standard_LRS"
+ }
+ ],
+ "replicaCount": 1,
+ "excludeFromLatest": true,
+ "storageAccountType": "Standard_LRS"
+ }
+ ```
++
+- For disaster recovery scenarios, it's a best practice to have at least two galleries, in different regions. You can still use image versions in other regions, but if the region your gallery is in goes down, you can't create new gallery resources or update existing ones.
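The replica guidance above (one replica per 20 concurrent VM creations, a floor of 3 for production, a per-region cap of 50) can be sketched as a small calculation; this helper is purely illustrative and not part of any Azure tooling:

```shell
# Illustrative only: suggested per-region replica count for a given
# number of concurrent VM creations, per the guidance above.
recommended_replicas() {
    vms=$1
    replicas=$(( (vms + 19) / 20 ))                  # ceiling of vms/20
    if [ "$replicas" -lt 3 ]; then replicas=3; fi    # production floor
    if [ "$replicas" -gt 50 ]; then replicas=50; fi  # per-region cap
    echo "$replicas"
}

recommended_replicas 1000   # prints 50
recommended_replicas 100    # prints 5
```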
+ ## SDK support
virtual-machines Create Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-gallery.md
Previously updated : 04/24/2022 Last updated : 06/22/2022
An [Azure Compute Gallery](./shared-image-galleries.md) (formerly known as Share
The Azure Compute Gallery lets you share custom VM images and application packages with others in your organization, within or across regions, within a tenant. Choose what you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group resources.
-The gallery is a top-level resource that provides full Azure role-based access control (Azure RBAC).
+The gallery is a top-level resource that can be shared in multiple ways:
+
+| Share with\: | Option |
+|-|-|
+| [Specific people, groups, or service principals](#create-a-private-gallery) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| [Subscriptions or tenants](#create-a-direct-shared-gallery) | Direct shared gallery (preview) lets you share to everyone in a subscription or tenant. |
+| [Everyone](#create-a-community-gallery) | Community gallery (preview) lets you share your entire gallery publicly, to all Azure users. |
+
+## Naming
+
+Allowed characters for the gallery name are uppercase and lowercase letters, digits, and periods. The gallery name can't contain dashes. Gallery names must be unique within your subscription.
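As a purely local illustration of the character rules above (it doesn't check subscription-level uniqueness), a candidate name can be validated like this:

```shell
# Illustrative only: check that a gallery name uses just letters,
# digits, and periods (no dashes, spaces, or other characters).
valid_gallery_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9.]+$'
}

valid_gallery_name "myGallery.images" && echo "valid"
valid_gallery_name "my-gallery" || echo "invalid: dashes are not allowed"
```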
+ ## Create a private gallery
-Allowed characters for gallery name are uppercase or lowercase letters, digits, dots, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
-Choose an option below for creating your gallery:
++ ### [Portal](#tab/portal)
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
-<a name=community></a>
+## Create a direct shared gallery
+
+> [!IMPORTANT]
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+>
+> You can't currently create a Flexible virtual machine scale set from an image shared to you by another tenant.
+
+### [Portal](#tab/portaldirect)
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
+1. In the **Azure Compute Gallery** page, click **Add**.
+1. On the **Create Azure Compute Gallery** page, select the correct subscription.
+1. Complete all of the details on the page.
+1. At the bottom of the page, select **Next: Sharing method**.
+ :::image type="content" source="media/create-gallery/create-gallery.png" alt-text="Screenshot showing where to select to go on to sharing methods.":::
+1. On the **Sharing** tab, select **RBAC + share directly**.
+
+ :::image type="content" source="media/create-gallery/share-direct.png" alt-text="Screenshot showing the option to share using both role-based access control and share directly.":::
+
+1. When you are done, select **Review + create**.
+1. After validation passes, select **Create**.
+1. When the deployment is finished, select **Go to resource**.
+
+To start sharing the gallery with a subscription or tenant, see [Share a gallery with a subscription or tenant](./share-gallery-direct.md).
+
+### [CLI](#tab/clidirect)
+
+To create a gallery that can be shared to a subscription or tenant using a direct shared gallery, you need to create the gallery with the `--permissions` parameter set to `groups`.
+
+```azurecli-interactive
+az sig create \
+ --gallery-name myGallery \
+ --permissions groups \
+ --resource-group myResourceGroup
+```
+
+
+To start sharing the gallery with a subscription or tenant, see [Share a gallery with a subscription or tenant](./share-gallery-direct.md).
+
+
+### [REST](#tab/restdirect)
+
+Create a gallery for subscription or tenant-level sharing using the Azure REST API.
-## Create a community gallery (preview)
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{gallery-name}?api-version=2020-09-30
+
+{
+ "properties": {
+ "sharingProfile": {
+ "permissions": "Groups"
+ }
+ },
+ "location": "{location}
+}
+```
+
+To start sharing the gallery with a subscription or tenant, see [Share a gallery with a subscription or tenant](./share-gallery-direct.md).
++
+Reset sharing to clear everything in the `sharingProfile`.
+
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2020-09-30
+
+{
+ "operationType" : "Reset",
+}
+```
+++
+To start sharing the gallery with a subscription or tenant, see [Share a gallery with a subscription or tenant](./share-gallery-direct.md).
+
+<a name=community></a>
+## Create a community gallery
A [community gallery](azure-compute-gallery.md#community) is shared publicly with everyone. To create a community gallery, you create the gallery first, then enable it for sharing. The name of the public instance of your gallery will be the prefix you provide, plus a unique GUID.
During the preview, make sure that you create your gallery, image definitions, a
> > To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
-When creating an image to share with the community, you will need to provide contact information. This information will be shown **publicly**, so be careful when providing:
+When creating an image to share with the community, you'll need to provide contact information. This information will be shown **publicly**, so be careful when providing:
- Community gallery prefix - Publisher support email - Publisher URL
az sig create \
The output of this command will give you the public name for your community gallery in the `sharingProfile` section, under `publicNames`.
-Once you are ready to make the gallery available to the public, enable the community gallery using [az sig share enable-community](/cli/azure/sig/share#az-sig-share-enable-community). Only a user in the `Owner` role definition can enable a gallery for community sharing.
-
-```azurecli-interactive
-az sig share enable-community \
- --gallery-name $galleryName \
- --resource-group $resourceGroup
-```
--
-> [!IMPORTANT]
-> If you are listed as the owner of your subscription, but you are having trouble sharing the gallery publicly, you may need to explicitly [add yourself as owner again](../role-based-access-control/role-assignments-portal-subscription-admin.md).
-
-To go back to only RBAC based sharing, use the [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) command.
-
-To delete a gallery shared to community, you must first run `az sig share reset` to stop sharing, then delete the gallery.
+To start sharing the gallery to all Azure users, see [Share images using a community gallery](share-gallery-community.md).
### [REST](#tab/rest2) To create a gallery, submit a PUT request:
Specify `permissions` as `Community` and information about your gallery in the r
} } ```
+To start sharing the gallery to all Azure users, see [Share images using a community gallery](share-gallery-community.md).
-To go live with community sharing, send the following POST request. As part of the request, include the property `operationType` with value `EnableCommunity`.
-
-```rest
-POST
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compu
-te/galleries/{galleryName}/share?api-version=2021-07-01
-{
-  "operationType" : "EnableCommunity"
-}
-```
### [Portal](#tab/portal2)
-Making a community gallery available to all Azure users is a two-step process. First you create the gallery with community sharing enabled, when you are ready to make it public, you share the gallery.
+Making a community gallery available to all Azure users is a two-step process. First you create the gallery with community sharing enabled, when you're ready to make it public, you share the gallery.
1. Sign in to the Azure portal at https://portal.azure.com. 1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
Making a community gallery available to all Azure users is a two-step process. F
1. For **Publisher email** type a valid e-mail address that can be used to communicate with you about the gallery. 1. For **Publisher URL**, type the URL for where users can get more information about the images in your community gallery. 1. For **Legal Agreement URL**, type the URL where end users can find legal terms for the image.
-1. When you are done, select **Review + create**.
+1. When you're done, select **Review + create**.
:::image type="content" source="media/create-gallery/rbac-community.png" alt-text="Screenshot showing the information that needs to be completed to create a community gallery.":::
Making a community gallery available to all Azure users is a two-step process. F
To see the public name of your gallery, select **Sharing** in the left menu.
-When you are ready to make the gallery public:
-
-1. On the page for the gallery, select **Sharing** from the left menu.
-1. Select **Share** from the top of the page.
- :::image type="content" source="media/create-gallery/share.png" alt-text="Screenshot showing the Share button for sharing your gallery to the community.":::
-1. When you are done, select **Save**.
+To start sharing the gallery to all Azure users, see [Share images using a community gallery](share-gallery-community.md).
-> [!IMPORTANT]
-> If you are listed as the owner of your subscription, but you are having trouble sharing the gallery publicly, you may need to explicitly [add yourself as owner again](../role-based-access-control/role-assignments-portal-subscription-admin.md).
-
When you are ready to make the gallery public:
- [Create a VM application](vm-applications-how-to.md) in your gallery. - You can also [create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/) using a template. - [Azure Image Builder](./image-builder-overview.md) can help automate image version creation; you can even use it to update and [create a new image version from an existing image version](./windows/image-builder-gallery-update-image-version.md).
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
When placing the extension JSON at the root of the template, the resource name i
## Azure CLI deployment
-The Azure CLI can be used to deploy the Log Analytics agent VM extension to an existing virtual machine. Replace the *myWorkspaceKey* value below with your workspace key and the *myWorkspaceId* value with your workspace ID. These values can be found in your Log Analytics workspace in the Azure portal under *Advanced Settings*.
+The Azure CLI can be used to deploy the Log Analytics agent VM extension to an existing virtual machine. Replace the *myWorkspaceKey* value below with your workspace key and the *myWorkspaceId* value with your workspace ID. These values can be found in your Log Analytics workspace in the Azure portal under *Advanced Settings*. Replace the *latestVersion* value with a version from [Log Analytics Linux VM extension version](oms-linux.md#agent-and-vm-extension-version).
```azurecli az vm extension set \
az vm extension set \
--name OmsAgentForLinux \ --publisher Microsoft.EnterpriseCloud.Monitoring \ --protected-settings '{"workspaceKey":"myWorkspaceKey"}' \
- --settings '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}'
+ --settings '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' \
+ --version latestVersion
+```
+
+## Azure PowerShell deployment
+
+The Azure PowerShell cmdlets can be used to deploy the Log Analytics agent VM extension to an existing virtual machine. Replace the *myWorkspaceKey* value below with your workspace key and the *myWorkspaceId* value with your workspace ID. These values can be found in your Log Analytics workspace in the Azure portal under *Advanced Settings*. Replace the *latestVersion* value with a version from [Log Analytics Linux VM extension version](oms-linux.md#agent-and-vm-extension-version).
+
+```powershell
+Set-AzVMExtension `
+    -ResourceGroupName myResourceGroup `
+    -VMName myVM `
+    -ExtensionName OmsAgentForLinux `
+    -ExtensionType OmsAgentForLinux `
+    -Publisher Microsoft.EnterpriseCloud.Monitoring `
+    -TypeHandlerVersion latestVersion `
+    -ProtectedSettingString '{"workspaceKey":"myWorkspaceKey"}' `
+    -SettingString '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}'
``` ## Troubleshoot and support ### Troubleshoot
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure CLI. To see the deployment state of extensions for a given VM, run the following command using the Azure CLI.
+Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure CLI or Azure PowerShell. To see the deployment state of extensions for a given VM, run the following command if you are using the Azure CLI.
```azurecli az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
Extension execution output is logged to the following file: ```
-/opt/microsoft/omsagent/bin/stdout
+/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/extension.log
+```
+
+To retrieve the OMS extension version installed on a VM, run the following command if you are using Azure CLI.
+
+```azurecli
+az vm extension show --resource-group myResourceGroup --vm-name myVM --name OmsAgentForLinux --instance-view
```
-To retrieve the OMS extension version installed on a VM, run the following command using Azure PowerShell.
+To retrieve the OMS extension version installed on a VM, run the following command if you are using Azure PowerShell.
```powershell Get-AzVMExtension -ResourceGroupName my_resource_group -VMName my_vm_name -Name OmsAgentForLinux -Status
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
For more information about the parameters you can specify for an image definitio
In this example, the image definition is named *myImageDefinition*, and is for a [specialized](shared-image-galleries.md#generalized-and-specialized-images) Linux OS image. To create a definition for images using a Windows OS, use `--os-type Windows`.
-```azurecli-interactive
+```azurecli-interactive
az sig image-definition create \ --resource-group myGalleryRG \ --gallery-name myGallery \
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
The Azure Linux Agent is already pre-installed on Azure Marketplace images and i
| [Red Hat Enterprise Linux by Red Hat](../workloads/redhat/overview.md) |7.x, 8.x |In kernel |Package: In repo under "WALinuxAgent" <br/>Source code: [GitHub](https://github.com/Azure/WALinuxAgent) |
| SUSE Linux Enterprise by SUSE |SLES/SLES for SAP 11.x, 12.x, 15.x <br/> [SUSE Public Cloud Image Lifecycle](https://www.suse.com/c/suse-public-cloud-image-life-cycle/) |In kernel |Package:<p> for 11 in [Cloud:Tools](https://build.opensuse.org/project/show/Cloud:Tools) repo<br>for 12 included in "Public Cloud" Module under "python-azure-agent"<br/>Source code: [GitHub](https://go.microsoft.com/fwlink/p/?LinkID=250998) |
| openSUSE by SUSE |openSUSE Leap 15.x |In kernel |Package: In [Cloud:Tools](https://build.opensuse.org/project/show/Cloud:Tools) repo under "python-azure-agent" <br/>Source code: [GitHub](https://github.com/Azure/WALinuxAgent) |
-| Ubuntu by Canonical |Ubuntu Server and Pro. 18.x, 20.x<p>Information about extended support for Ubuntu 14.04 pro and 16.04 pro can be found here: [Ubuntu Extended Security Maintenance](https://www.ubuntu.com/esm). |In kernel |Package: In repo under "walinuxagent" <br/>Source code: [GitHub](https://github.com/Azure/WALinuxAgent) |
+| Ubuntu by Canonical |Ubuntu Server and Pro. 18.x, 20.x, 22.x<p>Information about extended support for Ubuntu 14.04 pro and 16.04 pro can be found here: [Ubuntu Extended Security Maintenance](https://www.ubuntu.com/esm). |In kernel |Package: In repo under "walinuxagent" <br/>Source code: [GitHub](https://github.com/Azure/WALinuxAgent) |
## Image update cadence
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/share-images-across-tenants.md
- Title: Share gallery images across tenants
-description: Learn how to share VM images across Azure tenants using Azure Compute Galleries and the Azure CLI.
---- Previously updated : 05/04/2019-----
-# Share gallery VM images across Azure tenants using the Azure CLI
-
-Azure Compute Galleries let you share images using Azure RBAC. You can use Azure RBAC to share images within your tenant, and even to individuals outside of your tenant. For more information about this simple sharing option, see the [Share the gallery](./shared-images-portal.md#share-the-gallery).
--
-> [!IMPORTANT]
-> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the Azure CLI or [PowerShell](../windows/share-images-across-tenants.md).
-
-## Create a VM using Azure CLI
-
-Sign in the service principal for tenant 1 using the appID, the app key, and the ID of tenant 1. You can use `az account show --query "tenantId"` to get the tenant IDs if needed.
-
-```azurecli-interactive
-az account clear
-az login --service-principal -u '<app ID>' -p '<Secret>' --tenant '<tenant 1 ID>'
-az account get-access-token
-```
-
-Sign in the service principal for tenant 2 using the appID, the app key, and the ID of tenant 2:
-
-```azurecli-interactive
-az login --service-principal -u '<app ID>' -p '<Secret>' --tenant '<tenant 2 ID>'
-az account get-access-token
-```
-
-Create the VM. Replace the information in the example with your own.
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --image "/subscriptions/<Tenant 1 subscription>/resourceGroups/<Resource group>/providers/Microsoft.Compute/galleries/<Gallery>/images/<Image definition>/versions/<version>" \
- --admin-username azureuser \
- --generate-ssh-keys
-```
-
-## Next steps
-
-If you run into any issues, you can [troubleshoot galleries](../troubleshooting-shared-images.md).
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
+
+ Title: Share Azure Compute Gallery resources with a community gallery
+description: Learn how to use a community gallery to share VM images stored in an Azure Compute Gallery.
+++++ Last updated : 07/07/2022+++
+ms.devlang: azurecli
+++
+# Share images using a community gallery (preview)
+
+To share a gallery with all Azure users, you can create a community gallery (preview). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
+
+Sharing images to the community is a new capability in [Azure Compute Gallery](./azure-compute-gallery.md#community). In the preview, you can make your image galleries public, and share them to all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
++
+> [!IMPORTANT]
+> Azure Compute Gallery – community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+>
+> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery.
+>
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
++
+There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with:
+
+| Share with\: | Option |
+|-|-|
+| [Specific people, groups, or service principals](./share-gallery.md) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| [Subscriptions or tenants](./share-gallery-direct.md) | Direct shared gallery lets you share to everyone in a subscription or tenant. |
+| Everyone (described in this article) | Community gallery lets you share your entire gallery publicly, to all Azure users. |
+
+## Limitations for images shared to the community
+
+There are some limitations for sharing your gallery to the community:
+- Encrypted images aren't supported.
+- For the preview, image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available during the public preview.
+- For the preview, you can't share [VM Applications](vm-applications.md) to the community.
+- The gallery must be created as a community gallery. For the preview, there is no way to migrate an existing gallery to be a community gallery.
+- To find images shared to the community from the Azure portal, you need to go through the VM create or scale set creation pages. You can't search the portal or Azure Marketplace for the images.
+
+> [!IMPORTANT]
+> Microsoft does not provide support for images you share to the community.
+
+## How sharing with the community works
+
+You [create a gallery resource](create-gallery.md#create-a-community-gallery) under `Microsoft.Compute/Galleries` and choose `community` as a sharing option.
+
+When you are ready, you flag your gallery as ready to be shared publicly. Only the owner of a subscription, or a user or service principal with the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can enable a gallery to go public to the community. At this point, the Azure infrastructure creates proxy read-only regional resources, under `Microsoft.Compute/CommunityGalleries`, which are public.
+
+End users can only interact with the proxy resources; they never interact with your private resources. As the publisher of the private resource, consider the private resource your handle to the public proxy resources. The `prefix` you provide when you create the gallery is used, along with a unique GUID, to create the public-facing name for your gallery.
+
+Azure users can see the latest image versions shared to the community in the portal, or query for them using the CLI. Only the latest version of an image is listed in the community gallery.
+
+When creating a community gallery, you will need to provide contact information for your images. This information will be shown **publicly**, so be careful when providing it:
+- Community gallery prefix
+- Publisher support email
+- Publisher URL
+- Legal agreement URL
+
+Information from your image definitions will also be publicly available, like what you provide for **Publisher**, **Offer**, and **SKU**.
+
+> [!WARNING]
+> If you want to stop sharing a gallery publicly, you can update the gallery to stop sharing, but making the gallery private will prevent existing virtual machine scale set users from scaling their resources.
+>
+> If you stop sharing your gallery during the preview, you won't be able to re-share it.
+
+## Start sharing publicly
+
+To share a gallery publicly, it needs to be created as a community gallery. For more information, see [Create a community gallery](create-gallery.md#create-a-community-gallery).
+
+### [CLI](#tab/cli)
+
+Once you are ready to make the gallery available to the public, enable the community gallery using [az sig share enable-community](/cli/azure/sig/share#az-sig-share-enable-community). Only a user in the `Owner` role definition can enable a gallery for community sharing.
+
+```azurecli-interactive
+az sig share enable-community \
+ --gallery-name $galleryName \
+ --resource-group $resourceGroup
+```
++
+To go back to only RBAC based sharing, use the [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) command.
+
+To delete a gallery shared to community, you must first run `az sig share reset` to stop sharing, then delete the gallery.
+
+### [REST](#tab/rest)
+
+To go live with community sharing, send the following POST request. As part of the request, include the property `operationType` with value `EnableCommunity`.
+
+```rest
+POST
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2021-07-01
+
+{
+  "operationType": "EnableCommunity"
+}
+```
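Since this is a plain ARM `POST`, the URL and body can be assembled and sanity-checked before sending with any HTTP client. A small Python sketch (the `share_url` helper is hypothetical, mirroring the request above):

```python
def share_url(subscription_id: str, gallery_name: str,
              api_version: str = "2021-07-01") -> str:
    """Build the /share endpoint URL used to enable community sharing."""
    return ("https://management.azure.com"
            f"/subscriptions/{subscription_id}"
            f"/providers/Microsoft.Compute/galleries/{gallery_name}"
            f"/share?api-version={api_version}")

# The body for going public is just the operation type.
body = {"operationType": "EnableCommunity"}

print(share_url("{subscriptionId}", "{galleryName}"))
```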
+
+### [Portal](#tab/portal)
+
+When you're ready to make the gallery public:
+
+1. On the page for the gallery, select **Sharing** from the left menu.
+1. Select **Share** from the top of the page.
+ :::image type="content" source="media/create-gallery/share.png" alt-text="Screenshot showing the Share button for sharing your gallery to the community.":::
+1. When you are done, select **Save**.
++++
+> [!IMPORTANT]
+> If you are listed as the owner of your subscription, but you are having trouble sharing the gallery publicly, you may need to explicitly [add yourself as owner again](../role-based-access-control/role-assignments-portal-subscription-admin.md).
+
+To go back to only RBAC based sharing, use the [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) command.
+
+To delete a gallery shared to community, you must first run `az sig share reset` to stop sharing, then delete the gallery.
+
+## Next steps
+
+Create an [image definition and an image version](image-version.md).
+
+Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-a-community-gallery-image) or [specialized](vm-specialized-image-version.md#create-a-vm-from-a-community-gallery-image) image in a community gallery.
+++
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
+
+ Title: Share Azure Compute Gallery resources directly with subscriptions and tenants
+description: Learn how to share Azure Compute Gallery resources explicitly with subscriptions and tenants.
+++++ Last updated : 07/25/2022+++
+ms.devlang: azurecli
+++
+# Share a gallery with subscriptions or tenants (preview)
+
+This article covers how to share an Azure Compute Gallery with specific subscriptions or tenants using a direct shared gallery. Sharing a gallery with tenants and subscriptions gives them read-only access to your gallery.
++
+> [!IMPORTANT]
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
+>
+> During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+++
+There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with:
+
+| Share with\: | Option |
+|-|-|
+| [Specific people, groups, or service principals](./share-gallery.md) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| Subscriptions or tenants (described in this article) | Direct shared gallery lets you share to everyone in a subscription or tenant. |
+| [Everyone](./share-gallery-community.md) | Community gallery lets you share your entire gallery publicly, to all Azure users. |
++
+## Limitations
+
+During the preview:
+- You can only share to subscriptions that are also in the preview.
+- You can only share to 30 subscriptions and 5 tenants.
+- Only images can be shared. You can't directly share a [VM application](vm-applications.md) during the preview.
+- A direct shared gallery can't contain encrypted image versions. Encrypted images can't be created within a gallery that is directly shared.
+- Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level will be able to enable group-based sharing.
+- You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+- PowerShell, Ansible, and Terraform aren't supported at this time.
+- **Known issue**: When creating a VM from a direct shared image using the Azure portal, if you select a region, select an image, then change the region, you will get an error message: "You can only create VM in the replication regions of this image" even when the image is replicated to that region. To get rid of the error, select a different region, then switch back to the region you want. If the image is available, it should clear the error message.
+
+## Prerequisites
+
+You need to create a [new direct shared gallery](./create-gallery.md#create-a-direct-shared-gallery). A direct shared gallery has the `sharingProfile.permissions` property set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+
+## Share to subscriptions and tenants
+
+First you create a gallery under `Microsoft.Compute/Galleries` and choose `groups` as a sharing option.
+
+When you're ready, you share your gallery with subscriptions and tenants. Only the owner of a subscription, or a user or service principal with the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can share the gallery. At this point, the Azure infrastructure creates proxy read-only regional resources, under `Microsoft.Compute/SharedGalleries`. Only the subscriptions and tenants you've shared with can interact with the proxy resources; they never interact with your private resources. As the publisher of the private resource, consider the private resource your handle to the public proxy resources. The subscriptions and tenants you've shared your gallery with will see the gallery name as the subscription ID where the gallery was created, followed by the gallery name.
+
+### [Portal](#tab/portaldirect)
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
+1. In the **Azure Compute Gallery** page, click **Add**.
+1. On the **Create Azure Compute Gallery** page, select the correct subscription.
+1. Complete all of the details on the page.
+1. At the bottom of the page, select **Next: Sharing method**.
+ :::image type="content" source="media/create-gallery/create-gallery.png" alt-text="Screenshot showing where to select to go on to sharing methods.":::
+1. On the **Sharing** tab, select **RBAC + share directly**.
+
+ :::image type="content" source="media/create-gallery/share-direct.png" alt-text="Screenshot showing the option to share using both role-based access control and share directly.":::
+
+1. When you are done, select **Review + create**.
+1. After validation passes, select **Create**.
+1. When the deployment is finished, select **Go to resource**.
++
+To share the gallery:
+
+1. On the page for the gallery, select **Sharing** from the left menu.
+1. Under **Direct sharing settings**, select **Add**.
+
+ :::image type="content" source="media/create-gallery/direct-share-add.png" alt-text="Screenshot showing the option to share with a subscription or tenant.":::
+
+1. If you would like to share with someone within your organization, for **Type** select *Subscription* or *Tenant* and choose the appropriate item from the **Tenants and subscriptions** drop-down. If you want to share with someone outside of your organization, select either *Subscription outside of my organization* or *Tenant outside of my organization* and then paste or type the ID into the text box.
+1. When you are done adding items, select **Save**.
+
+### [CLI](#tab/clidirect)
+
+To create a direct shared gallery, you need to create the gallery with the `--permissions` parameter set to `groups`.
+
+```azurecli-interactive
+az sig create \
+ --gallery-name myGallery \
+ --permissions groups \
+ --resource-group myResourceGroup
+```
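The `--permissions groups` flag corresponds to the `sharingProfile.permissions` property shown in the REST tab. A minimal Python sketch (helper name hypothetical) that builds the equivalent PUT body:

```python
import json

def gallery_body(location: str, permissions: str = "Groups") -> dict:
    """Build the PUT body for creating a gallery with a sharing profile."""
    return {
        "location": location,
        "properties": {"sharingProfile": {"permissions": permissions}},
    }

print(json.dumps(gallery_body("westus"), indent=2))
```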
+
+
+To start sharing the gallery with a subscription or tenant, use [az sig share add](/cli/azure/sig#az-sig-share-add).
+
+```azurecli-interactive
+sub=<subscription-id>
+tenant=<tenant-id>
+gallery=<gallery-name>
+rg=<resource-group-name>
+az sig share add \
+ --subscription-ids $sub \
+ --tenant-ids $tenant \
+ --gallery-name $gallery \
+ --resource-group $rg
+```
+
+
+Remove access for a subscription or tenant using [az sig share remove](/cli/azure/sig#az-sig-share-remove).
+
+```azurecli-interactive
+sub=<subscription-id>
+tenant=<tenant-id>
+gallery=<gallery-name>
+rg=<resource-group-name>
+
+az sig share remove \
+ --subscription-ids $sub \
+ --tenant-ids $tenant \
+ --gallery-name $gallery \
+ --resource-group $rg
+```
+
++
+
+### [REST](#tab/restdirect)
+
+Create a gallery for subscription or tenant-level sharing using the Azure REST API.
+
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{gallery-name}?api-version=2020-09-30
+
+{
+ "properties": {
+ "sharingProfile": {
+ "permissions": "Groups"
+ }
+ },
+    "location": "{location}"
+}
+
+```
++
+Share a gallery to subscription or tenant.
++
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2020-09-30
+
+{
+ "operationType": "Add",
+ "groups": [
+ {
+ "type": "Subscriptions",
+ "ids": [
+ "{SubscriptionID}"
+ ]
+ },
+ {
+ "type": "AADTenants",
+ "ids": [
+ "{tenantID}"
+ ]
+ }
+ ]
+}
+
+```
+
+
+Remove access for a subscription or tenant.
+
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2020-09-30
+
+{
+ "operationType": "Remove",
+ "groups":[
+ {
+ "type": "Subscriptions",
+ "ids": [
+ "{subscriptionId1}",
+ "{subscriptionId2}"
+ ],
+},
+{
+ "type": "AADTenants",
+ "ids": [
+ "{tenantId1}",
+ "{tenantId2}"
+ ]
+ }
+ ]
+}
+
+```
++
+Reset sharing to clear everything in the `sharingProfile`.
+
+```rest
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2020-09-30
+
+{
+    "operationType" : "Reset"
+}
+```
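The Add, Remove, and Reset requests above differ only in `operationType` and the optional `groups` list, so a single helper can build all three bodies. A sketch (helper name hypothetical, structure taken from the examples above):

```python
import json

def share_body(operation: str, subscriptions=None, tenants=None) -> str:
    """Build the JSON body for the gallery /share endpoint.

    operation: "Add", "Remove", or "Reset", per the examples above.
    Subscriptions and tenants become "Subscriptions" / "AADTenants" groups.
    """
    payload = {"operationType": operation}
    groups = []
    if subscriptions:
        groups.append({"type": "Subscriptions", "ids": list(subscriptions)})
    if tenants:
        groups.append({"type": "AADTenants", "ids": list(tenants)})
    if groups:
        payload["groups"] = groups
    return json.dumps(payload, indent=2)

print(share_body("Add", subscriptions=["{SubscriptionID}"],
                 tenants=["{tenantID}"]))
```

A Reset request carries no `groups` at all, which the helper reproduces by omitting the key.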
++++
+## Next steps
+- Create an [image definition and an image version](image-version.md).
+- Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-a-community-gallery-image) or [specialized](vm-specialized-image-version.md#create-a-vm-from-a-community-gallery-image) image in a direct shared gallery.
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
Previously updated : 04/24/2022 Last updated : 06/14/2022
ms.devlang: azurecli
# Share gallery resources
-There are two main ways to share images in an Azure Compute Gallery:
+As the Azure Compute Gallery, image definitions, and image versions are all resources, they can be shared using the built-in Azure role-based access control (RBAC) roles. Using Azure RBAC roles, you can share these resources with other users, service principals, and groups. You can even share access with individuals outside of the tenant they were created in. Once a user has access, they can use the gallery resources to deploy a VM or a Virtual Machine Scale Set. Here's a sharing matrix that helps you understand what the user gets access to:
-- Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level.-- Community gallery lets you share your entire gallery publicly, to all Azure users.
+| Shared with User | Azure Compute Gallery | Image Definition | Image version |
+|-|-|--|-|
+| Azure Compute Gallery | Yes | Yes | Yes |
| Image Definition | No | Yes | Yes |
| Image version | No | No | Yes |
-> [!IMPORTANT]
-> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+We recommend sharing at the gallery level for the best experience. We don't recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
-## RBAC
+There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with:
-The Azure Compute Gallery, definitions, and versions are all resources, they can be shared using the built-in native Azure RBAC controls. Using Azure RBAC you can share these resources to other users, service principals, and groups. You can even share access to individuals outside of the tenant they were created within. Once a user has access to the image or application version, they can deploy a VM or a Virtual Machine Scale Set.
+| Share with\: | Option |
+|-|-|
+| Specific people, groups, or service principals (described in this article) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| [Subscriptions or tenants](./share-gallery-direct.md) | A direct shared gallery lets you share to everyone in a subscription or tenant. |
+| [Everyone](./share-gallery-community.md) | Community gallery lets you share your entire gallery publicly, to all Azure users. |
-We recommend sharing at the gallery level for the best experience and prevent management overhead. We do not recommend sharing individual image or application versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
-If the user is outside of your organization, they will get an email invitation to join the organization. The user needs to accept the invitation, then they will be able to see the gallery and all of the image definitions and versions in their list of resources.
+## Share using RBAC
### [Portal](#tab/portal)
-If the user is outside of your organization, they will get an email invitation to join the organization. The user needs to accept the invitation, then they will be able to see the gallery and all of the definitions and versions in their list of resources.
1. On the page for your gallery, in the menu on the left, select **Access control (IAM)**.
1. Under **Add a role assignment**, select **Add**. The **Add a role assignment** pane will open.
1. Under **Role**, select **Reader**.
1. Under **assign access to**, leave the default of **Azure AD user, group, or service principal**.
1. Under **Select**, type in the email address of the person that you would like to invite.
-1. If the user is outside of your organization, you will see the message **This user will be sent an email that enables them to collaborate with Microsoft.** Select the user with the email address and then click **Save**.
+1. If the user is outside of your organization, you'll see the message **This user will be sent an email that enables them to collaborate with Microsoft.** Select the user with the email address and then click **Save**.
### [CLI](#tab/cli)
New-AzRoleAssignment `
-<a name=community></a>
-## Community gallery (preview)
-
-To share a gallery with all Azure users, you can [create a community gallery (preview)](create-gallery.md#community). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
-
-> [!IMPORTANT]
-> Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
->
-> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter) you currently can't migrate a regular gallery to a community gallery.
-
-To learn more, see [Community gallery (preview) overview](azure-compute-gallery.md#community) and [Create a community gallery](create-gallery.md#community).
-- ## Next steps
-Create an [image definition and an image version](image-version.md).
+- Create an [image definition and an image version](image-version.md).
+- Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-your-gallery) or [specialized](vm-specialized-image-version.md#create-a-vm-from-your-gallery) private gallery.
-You can also create Azure Compute Gallery resources using templates. There are several quickstart templates available:
-- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
Previously updated : 04/24/2022 Last updated : 07/18/2022 #Customer intent: As an IT administrator, I want to learn about how to create shared VM images to minimize the number of post-deployment configuration tasks.
Image version:
## Sharing
-> [!IMPORTANT]
-> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+There are three main ways to share an Azure Compute Gallery, depending on who you want to share with:
-You can [share images](share-gallery.md) to users and groups using the standard role-based access control (RBAC) or you can share an entire gallery of images to the public, using a [community gallery (preview)](azure-compute-gallery.md#community).
+| Share with\: | Option |
+|-|-|
+|[Specific people, groups, or service principals](./share-gallery.md) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
+| [Subscriptions or tenants](./share-gallery-direct.md) | A direct shared gallery (preview) lets you share to everyone in a subscription or tenant. |
+| [Everyone](./share-gallery-community.md) | Community gallery (preview) lets you share your entire gallery publicly, to all Azure users. |
-> [!IMPORTANT]
-> Azure Compute Gallery ΓÇô community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To share images in the community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs and scale sets from images shared the community gallery is open to all Azure users.
## Shallow replication
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
-# List, update, and delete gallery resources
+# List, update, and delete gallery resources
You can manage your Azure Compute Gallery (formerly known as Shared Image Gallery) resources using the Azure CLI or Azure PowerShell.
+## List galleries shared with you
+
+### [CLI](#tab/cli)
+
+List Galleries shared with your subscription.
+
+```azurecli-interactive
+region=westus
+az sig list-shared --location $region
+```
+
+List Galleries shared with your tenant.
+
+```azurecli-interactive
+region=westus
+az sig list-shared --location $region --shared-to tenant
+```
+
+The output will contain the public `name` and `uniqueId` of the gallery that's shared with you. You can use the name of the gallery to query for images available through the gallery.
+
+Here is example output:
+
+```output
+[
+ {
+ "location": "westus",
+ "name": "1231b567-8a99-1a2b-1a23-123456789abc-MYDIRECTSHARED",
+ "uniqueId": "/SharedGalleries/1231b567-8a99-1a2b-1a23-123456789abc-MYDIRECTSHARED"
+ }
+]
+```
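In this output, the public gallery `name` is also the last path segment of `uniqueId`, which is the identifier you use when referencing the shared gallery. A small Python sketch over the sample output above:

```python
import json

# Sample listing taken from the example output above.
listing = json.loads("""
[
  {
    "location": "westus",
    "name": "1231b567-8a99-1a2b-1a23-123456789abc-MYDIRECTSHARED",
    "uniqueId": "/SharedGalleries/1231b567-8a99-1a2b-1a23-123456789abc-MYDIRECTSHARED"
  }
]
""")

for gallery in listing:
    # The public name is the last path segment of uniqueId.
    assert gallery["uniqueId"].rsplit("/", 1)[-1] == gallery["name"]
    print(gallery["name"])
```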
+
+### [REST](#tab/rest)
+
+List galleries shared with a subscription.
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/Locations/{location}/SharedGalleries?api-version=2020-09-30
+```
+
+
+The response should look similar to this:
+
+```rest
+{
+"value": [
+{
+"identifier": {
+"uniqueId": "/SharedGalleries/{SharedGalleryUniqueName}"
+},
+"name": "galleryuniquename1",
+ "type": "Microsoft.Compute/sharedGalleries",
+ "location": "location"
+ },
+ {
+"identifier": {
+"uniqueName": "/SharedGalleries/{SharedGalleryUniqueName}"
+},
+"name": "galleryuniquename2",
+"type": "Microsoft.Compute/sharedGalleries",
+"location": "location"
+ }
+]
+}
+```
+
+
+List the galleries shared with a tenant.
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/Locations/{location}/SharedGalleries?api-version=2020-09-30&sharedTo=tenant
+```
+
+
+The response should look similar to this:
+
+```rest
+{
+  "value": [
+    {
+      "identifier": {
+        "uniqueId": "/SharedGalleries/{SharedGalleryUniqueName}"
+      },
+      "name": "galleryuniquename1",
+      "type": "Microsoft.Compute/sharedGalleries",
+      "location": "location"
+    },
+    {
+      "identifier": {
+        "uniqueId": "/SharedGalleries/{SharedGalleryUniqueName}"
+      },
+      "name": "galleryuniquename2",
+      "type": "Microsoft.Compute/sharedGalleries",
+      "location": "location"
+    }
+  ]
+}
+```
+
+---
+
## List your gallery information

### [CLI](#tab/cli)
az sig list -o table
```

---

**List the image definitions**

List the image definitions in your gallery, including information about OS type and status, using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list).
az sig image-definition list --resource-group myGalleryRG --gallery-name myGalle
```

**List image versions**

List image versions in your gallery using [az sig image-version list](/cli/azure/sig/image-version#az-sig-image-version-list):
Remove-AzGalleryImageVersion `
## Update resources
-### [CLI](#tab/cli)
There are some limitations on what can be updated. The following items can be updated: Azure Compute Gallery:
Image version:
If you plan on adding replica regions, don't delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
+### [CLI](#tab/cli2)
+
Update the description of a gallery using [az sig update](/cli/azure/sig#az-sig-update).

```azurecli-interactive
az sig image-version update \
--set publishingProfile.excludeFromLatest=false
```
-### [PowerShell](#tab/powershell)
--
-There are some limitations on what can be updated. The following items can be updated:
-
-Azure Compute Gallery:
-- Description-
-Image definition:
-- Recommended vCPUs
-- Recommended memory
-- Description
-- End of life date
+### [PowerShell](#tab/powershell2)
-Image version:
-- Regional replica count
-- Target regions
-- Exclusion from latest
-- End of life date
-If you plan on adding replica regions, don`t delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
To update the description of a gallery, use [Update-AzGallery](/powershell/module/az.compute/update-azgallery).
Update-AzGalleryImageVersion `
- ## Delete resources
-You have to delete resources in reverse order, by deleting the image version first. After you delete all of the image versions, you can delete the image definition. After you delete all image definitions, you can delete the gallery.
+You have to delete resources in reverse order, by deleting the image version first. After you delete all of the image versions, you can delete the image definition. After you delete all image definitions, you can delete the gallery.
-Before you can delete a community shared gallery, you need to use [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) to stop sharing the gallery publicly.
-### [CLI](#tab/cli)
+### [CLI](#tab/cli4)
+
+Before you can delete a community shared gallery, you need to use [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) to stop sharing the gallery publicly.
Delete an image version using [az sig image-version delete](/cli/azure/sig/image-version#az-sig-image-version-delete).
az sig delete \
--gallery-name myGallery ```
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/powershell4)
Remove-AzGallery `
Remove-AzResourceGroup -Name $resourceGroup
```

## Community galleries

> [!IMPORTANT]
az sig image-version list-community \
-o table
```
+
+## Direct shared galleries
++
+> [!IMPORTANT]
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
+>
+> During the preview, you need to create a new gallery with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+++
+To find the `uniqueID` of a gallery that is shared with you, use [az sig list-shared](/cli/azure/sig#az-sig-list-shared). In this example, we're looking for galleries in the West US region.
+
+```azurecli-interactive
+region=westus
+az sig list-shared --location $region --query "[].uniqueId" -o tsv
+```
+
+To list all of the image definitions that are shared directly with you, use [az sig image-definition list-shared](/cli/azure/sig/image-definition#az-sig-image-definition-list-shared).
+
+In this example, we list all of the images in the gallery in *West US*, showing the name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+name="1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-myDirectShared"
+az sig image-definition list-shared \
+  --gallery-unique-name $name \
+  --location $region \
+  --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+List image versions directly shared to you using [az sig image-version list-shared](/cli/azure/sig/image-version#az-sig-image-version-list-shared):
+
+```azurecli-interactive
+imgDef="myImageDefinition"
+az sig image-version list-shared \
+ --location $region \
+ --public-gallery-name $name \
+ --gallery-image-definition $imgDef \
+ --query [*]."{Name:name,UniqueId:uniqueId}" \
+ -o table
+```
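The names returned by these commands are the pieces of the image reference that `az vm create --image` expects. A sketch of assembling that reference (the gallery and image names here are placeholders, not real resources):

```shell
# Placeholder values; substitute the gallery unique name, image definition,
# and version name returned by the list-shared commands above.
galleryUniqueName="1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-myDirectShared"
imgDef="myImageDefinition"
version="1.0.0"

# Pin to a specific image version...
pinnedId="/SharedGalleries/${galleryUniqueName}/Images/${imgDef}/Versions/${version}"
# ...or use the literal token 'latest' to always track the newest version.
latestId="/SharedGalleries/${galleryUniqueName}/Images/${imgDef}/Versions/latest"

echo "$pinnedId"
echo "$latestId"
```

Pinning a version makes deployments repeatable; `latest` keeps them current but can change between deployments.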
+ ## Next steps
-[Azure Image Builder (preview)](./image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](./linux/image-builder-gallery-update-image-version.md).
+- Create an [image definition and an image version](image-version.md).
+- Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-a-community-gallery-image) or [specialized](vm-specialized-image-version.md#create-a-vm-from-a-community-gallery-image) image in a direct shared gallery.
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Previously updated : 04/26/2022 Last updated : 07/18/2022
# Create a VM from a generalized image version
-Create a VM from a [generalized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If you want to create a VM using a specialized image, see [Create a VM from a specialized image](vm-specialized-image-version.md).
+Create a VM from a [generalized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If you want to create a VM using a specialized image, see [Create a VM from a specialized image](vm-specialized-image-version.md).
+
+This article shows how to create a VM from a generalized image:
+- [In your own gallery](#create-a-vm-from-your-gallery)
+- Shared to a [community gallery](#create-a-vm-from-a-community-gallery-image)
+- [Directly shared to your subscription or tenant](#create-a-vm-from-a-gallery-shared-with-your-subscription-or-tenant)
## Create a VM from your gallery
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/
+## Create a VM from a gallery shared with your subscription or tenant
+
+> [!IMPORTANT]
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
+>
+> During the preview, you need to create a new gallery with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
++++
+### [CLI](#tab/cli2)
+
+To create a VM using an image shared to your subscription or tenant, you need the unique ID of the image in the following format:
+
+```
+/SharedGalleries/<uniqueID>/Images/<image name>/Versions/latest
+```
+
+To find the `uniqueID` of a gallery that is shared with you, use [az sig list-shared](/cli/azure/sig#az-sig-list-shared). In this example, we're looking for galleries in the West US region.
+
+```azurecli-interactive
+region=westus
+az sig list-shared --location $region --query "[].name" -o tsv
+```
+
+Use the gallery name to find the images that are available. In this example, we list all of the images in *West US*, showing the name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+galleryName="1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-myDirectShared"
+az sig image-definition list-shared \
+  --gallery-unique-name $galleryName \
+  --location $region \
+  --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+Make sure the state of the image is `Generalized`. If you want to use an image with the `Specialized` state, see [Create a VM from a specialized image version](vm-specialized-image-version.md).
+
+Use the `Id` from the output, appended with `/Versions/latest` to use the latest version, as the value for `--image` to create a VM. In this example, we're creating a VM from a Linux image that is directly shared to us, and generating SSH keys for authentication.
+
+```azurecli-interactive
+imgDef="/SharedGalleries/1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-MYDIRECTSHARED/Images/myDirectDefinition/Versions/latest"
+vmResourceGroup=myResourceGroup
+location=westus
+vmName=myVM
+adminUsername=azureuser
+
+az group create --name $vmResourceGroup --location $location
+
+az vm create \
+ --resource-group $vmResourceGroup \
+ --name $vmName \
+ --image $imgDef \
+ --admin-username $adminUsername \
+ --generate-ssh-keys
+```
++
+### [Portal](#tab/portal2)
+
+> [!NOTE]
+> **Known issue**: In the Azure portal, if you select a region, select an image, then change the region, you will get an error message: "You can only create VM in the replication regions of this image" even when the image is replicated to that region. To get rid of the error, select a different region, then switch back to the region you want. If the image is available, it should clear the error message.
+>
+> You can also use the Azure CLI to check what images are shared with you. For example, you can use `az sig list-shared --location westus` to see what images are shared with you in the West US region.
+
+1. Type **virtual machines** in the search.
+1. Under **Services**, select **Virtual machines**.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group or select one from the drop-down.
+1. Under **Instance details**, type a name for the **Virtual machine name**.
+1. For **Security type**, make sure *Standard* is selected.
+1. For your **Image**, select **See all images**. The **Select an image** page will open.
+1. In the left menu, under **Other Items**, select **Direct Shared Images (PREVIEW)**. The **Other Items | Direct Shared Images (PREVIEW)** page will open.
+1. Select an image from the list. Make sure that the **OS state** is *Generalized*. If you want to use a specialized image, see [Create a VM using a specialized image version](vm-specialized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in will change to match the image.
+1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
++
+### [REST](#tab/rest2)
+
+Get the ID of the image version. The value will be used in the VM deployment request.
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/sharedGalleries/{galleryUniqueName}/images/{galleryImageName}/versions/{versionName}?api-version=2021-07-01
+
+```
+
+Response:
+
+```json
+"location": "West US",
+ "identifier": {
+      "uniqueId": "/SharedGalleries/{galleryUniqueName}/Images/{imageName}/Versions/{versionName}"
+ },
+ "name": "1.0.0"
+```
+
++
+Now you can deploy the VM. The example requires API version 2021-07-01 or later.
+
+```rest
+PUT
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{VMName}?api-version=2021-07-01
+{
+ "location": "{location}",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D1_v2"
+ },
+ "storageProfile": {
+ "imageReference": {
+        "sharedGalleryImageId": "/SharedGalleries/{galleryUniqueName}/Images/{galleryImageName}/Versions/1.0.0"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "myVMosdisk",
+ "createOption": "FromImage"
+ }
+ },
+ "osProfile": {
+ "adminUsername": "azureuser",
+ "computerName": "myVM",
+        "adminPassword": "{password}"
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+        "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/{rg}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}",
+ "properties": {
+ "primary": true
+ }
+ }
+ ]
+ }
+ }
+}
+
+```
+
+---
+
**Next steps**
-[Azure Image Builder (preview)](./image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](./linux/image-builder-gallery-update-image-version.md).
+- [Create an Azure Compute Gallery](create-gallery.md)
+- [Create an image in an Azure Compute Gallery](image-version.md)
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
Previously updated : 04/26/2022 Last updated : 07/18/2022
Create a VM from a [specialized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If you want to create a VM using a generalized image version, see [Create a VM from a generalized image version](vm-generalized-image-version.md).
+This article shows how to create a VM from a specialized image:
+- [In your own gallery](#create-a-vm-from-your-gallery)
+- Shared to a [community gallery](#create-a-vm-from-a-community-gallery-image)
+- [Directly shared to your subscription or tenant](#create-a-vm-from-a-gallery-shared-with-your-subscription-or-tenant)
+
> [!IMPORTANT]
>
> When you create a new VM from a specialized image, the new VM retains the computer name of the original VM. Other computer-specific information, like the CMID, is also kept. This duplicate information can cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
-Replace resource names as needed in these examples.
## Create a VM from your gallery
-### [Portal](#tab/portal)
-
-Now you can create one or more new VMs. This example creates a VM named *myVM*, in the *myResourceGroup*, in the *East US* datacenter.
-
-1. Go to your image definition. You can use the resource filter to show all image definitions available.
-1. On the page for your image definition, select **Create VM** from the menu at the top of the page.
-1. For **Resource group**, select **Create new** and type *myResourceGroup* for the name.
-1. In **Virtual machine name**, type *myVM*.
-1. For **Region**, select *East US*.
-1. For **Availability options**, leave the default of *No infrastructure redundancy required*.
-1. The value for **Image** is automatically filled with the `latest` image version if you started from the page for the image definition.
-1. For **Size**, choose a VM size from the list of available sizes and then choose **Select**.
-1. Under **Administrator account**, the username will be greyed out because the username and credentials from the source VM are used.
-1. If you want to allow remote access to the VM, under **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** or **RDP (3389)** from the drop-down. If you don't want to allow remote access to the VM, leave **None** selected for **Public inbound ports**.
-1. When you are finished, select the **Review + create** button at the bottom of the page.
-1. After the VM passes validation, select **Create** at the bottom of the page to start the deployment.
### [CLI](#tab/cli)
Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --speci
Use the image definition ID for `--image` to create the VM from the latest version of the image that is available. You can also create the VM from a specific version by supplying the image version ID for `--image`.
-In this example, we are creating a VM from the latest version of the *myImageDefinition* image.
+In this example, we're creating a VM from the latest version of the *myImageDefinition* image.
```azurecli az group create --name myResourceGroup --location eastus
az vm create --resource-group myResourceGroup \
Once you have a specialized image version, you can create one or more new VMs using the [New-AzVM](/powershell/module/az.compute/new-azvm) cmdlet.
-In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by using the image version ID for `Set-AzVMSourceImage -Id`. For example, to use image version *1.0.0* type: `Set-AzVMSourceImage -Id "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"`.
+In this example, we're using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by using the image version ID for `Set-AzVMSourceImage -Id`. For example, to use image version *1.0.0* type: `Set-AzVMSourceImage -Id "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"`.
Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
New-AzVM `
-VM $vmConfig
```
+
+### [Portal](#tab/portal)
+
+Now you can create one or more new VMs. This example creates a VM named *myVM*, in the *myResourceGroup*, in the *East US* datacenter.
+
+1. Go to your image definition. You can use the resource filter to show all image definitions available.
+1. On the page for your image definition, select **Create VM** from the menu at the top of the page.
+1. For **Resource group**, select **Create new** and type *myResourceGroup* for the name.
+1. In **Virtual machine name**, type *myVM*.
+1. For **Region**, select *East US*.
+1. For **Availability options**, leave the default of *No infrastructure redundancy required*.
+1. The value for **Image** is automatically filled with the `latest` image version if you started from the page for the image definition.
+1. For **Size**, choose a VM size from the list of available sizes and then choose **Select**.
+1. Under **Administrator account**, the username will be greyed out because the username and credentials from the source VM are used.
+1. If you want to allow remote access to the VM, under **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** or **RDP (3389)** from the drop-down. If you don't want to allow remote access to the VM, leave **None** selected for **Public inbound ports**.
+1. When you're finished, select the **Review + create** button at the bottom of the page.
+1. After the VM passes validation, select **Create** at the bottom of the page to start the deployment.
+ ## Create a VM from a community gallery image
To create a VM from a generalized image in a community gallery, see [Create a VM
Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the `--specialized` parameter to indicate that the image is a specialized image.
-In this example, we are creating a VM from the latest version of the *myImageDefinition* image.
+In this example, we're creating a VM from the latest version of the *myImageDefinition* image.
```azurecli az group create --name myResourceGroup --location eastus
To create the VM from community gallery image, you must accept the license agree
:::image type="content" source="media/shared-image-galleries/community.png" alt-text="Screenshot showing where to select community gallery images.":::
1. Select an image from the list. Make sure that the **OS state** is *Specialized*. If you want to use a generalized image, see [Create a VM using a generalized image version](vm-generalized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in will change to match the image.
1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
-1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
+1. On the **Create a virtual machine** page, you can see the details about the VM you're about to create. When you're ready, select **Create**.
+## Create a VM from a gallery shared with your subscription or tenant
+
+> [!IMPORTANT]
+> Azure Compute Gallery – direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Creating VMs from a direct shared gallery is open to all Azure users.
+>
+> During the preview, you need to create a new gallery with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
+++
+### [CLI](#tab/cli2)
+
+To create a VM using the latest version of an image shared to your subscription or tenant, you need the ID of the image in the following format:
+
+```
+/SharedGalleries/<uniqueID>/Images/<image name>/Versions/latest
+```
+
+To find the `uniqueID` of a gallery that is shared with you, use [az sig list-shared](/cli/azure/sig#az-sig-list-shared). In this example, we're looking for galleries in the West US region.
+
+```azurecli-interactive
+region=westus
+az sig list-shared --location $region --query "[].name" -o tsv
+```
+
+Use the gallery name to find all of the images that are available. In this example, we list all of the images in *West US*, showing the name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+galleryName="1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-myDirectShared"
+az sig image-definition list-shared \
+  --gallery-unique-name $galleryName \
+  --location $region \
+  --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+Make sure the state of the image is `Specialized`. If you want to use an image with the `Generalized` state, see [Create a VM from a generalized image version](vm-generalized-image-version.md).
+
+Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the `--specialized` parameter to indicate that the image is a specialized image.
+
+Use the `Id`, appended with `/Versions/latest` to use the latest version, as the value for `--image` to create a VM.
+
+In this example, we're creating a VM from the latest version of the *myImageDefinition* image.
+
+```azurecli
+imgDef="/SharedGalleries/1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f-MYDIRECTSHARED/Images/myDirectDefinition/Versions/latest"
+vmResourceGroup=myResourceGroup
+location=westus
+vmName=myVM
+
+az group create --name $vmResourceGroup --location $location
+
+az vm create \
+ --resource-group $vmResourceGroup \
+ --name $vmName \
+ --image $imgDef \
+ --specialized
+```
+
+### [Portal](#tab/portal2)
+
+> [!NOTE]
+> **Known issue**: In the Azure portal, if you select a region, select an image, then change the region, you will get an error message: "You can only create VM in the replication regions of this image" even when the image is replicated to that region. To get rid of the error, select a different region, then switch back to the region you want. If the image is available, it should clear the error message.
+>
+> You can also use the Azure CLI to check what images are shared with you. For example, you can use `az sig list-shared --location westus` to see what images are shared with you in the West US region.
+
+1. Type **virtual machines** in the search.
+1. Under **Services**, select **Virtual machines**.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group or select one from the drop-down.
+1. Under **Instance details**, type a name for the **Virtual machine name**.
+1. For **Security type**, make sure *Standard* is selected.
+1. For your **Image**, select **See all images**. The **Select an image** page will open.
+1. In the left menu, under **Other Items**, select **Direct Shared Images (PREVIEW)**. The **Other Items | Direct Shared Images (PREVIEW)** page will open.
+1. Select an image from the list. Make sure that the **OS state** is *Specialized*. If you want to use a generalized image, see [Create a VM using a generalized image version](vm-generalized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in will change to match the image.
+1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
+1. On the **Create a virtual machine** page, you can see the details about the VM you're about to create. When you're ready, select **Create**.
+
+---
+
**Next steps**
-You can also create Azure Compute Gallery resource using templates. There are several quickstart templates available:
+- [Create an Azure Compute Gallery](create-gallery.md)
+- [Create an image in an Azure Compute Gallery](image-version.md)
-- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)
-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/share-images-across-tenants.md
- Title: Share gallery images across tenants in Azure
-description: Learn how to share VM images across Azure tenants using Azure Compute Galleries and PowerShell.
---- Previously updated : 07/15/2019-----
-# Share gallery VM images across Azure tenants using PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Azure Compute Galleries let you share images using Azure RBAC. You can use Azure RBAC to share images within your tenant, and even to individuals outside of your tenant. For more information about this simple sharing option, see the [Share the gallery](../share-gallery.md).
---
-> [!IMPORTANT]
-> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the [Azure CLI](../linux/share-images-across-tenants.md) or PowerShell.
-
-## Create a VM using PowerShell
-
-Log into both tenants using the application ID, secret and tenant ID.
-
-```azurepowershell-interactive
-$applicationId = '<App ID>'
-$secret = <Secret> | ConvertTo-SecureString -AsPlainText -Force
-$tenant1 = "<Tenant 1 ID>"
-$tenant2 = "<Tenant 2 ID>"
-$cred = New-Object -TypeName PSCredential -ArgumentList $applicationId, $secret
-Clear-AzContext
-Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant "<Tenant 1 ID>"
-Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant "<Tenant 2 ID>"
-```
-
-Create the VM in the resource group that has permission on the app registration. Replace the information in this example with your own.
---
-```azurepowershell-interactive
-$resourceGroup = "myResourceGroup"
-$location = "South Central US"
-$vmName = "myVMfromImage"
-
-# Set a variable for the image version in Tenant 1 using the full image ID of the image version
-$image = "/subscriptions/<Tenant 1 subscription>/resourceGroups/<Resource group>/providers/Microsoft.Compute/galleries/<Gallery>/images/<Image definition>/versions/<version>"
-
-# Create user object
-$cred = Get-Credential -Message "Enter a username and password for the virtual machine."
-
-# Create a resource group
-New-AzResourceGroup -Name $resourceGroup -Location $location
-
-# Networking pieces
-$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name mySubnet -AddressPrefix 192.168.1.0/24
-$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $location `
- -Name MYvNET -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig
-$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $location `
- -Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4
-$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp `
- -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
-$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $location `
- -Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP
-$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
- -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
-
-# Create a virtual machine configuration using the $image variable to specify the image
-$vmConfig = New-AzVMConfig -VMName $vmName -VMSize Standard_D1_v2 | `
-Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
-Set-AzVMSourceImage -Id $image | `
-Add-AzVMNetworkInterface -Id $nic.Id
-
-# Create a virtual machine
-New-AzVM -ResourceGroupName $resourceGroup -Location $location -VM $vmConfig
-```
-
-## Next steps
-
-Create [Azure Compute Gallery resources](../image-version.md).
virtual-machines Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-certification.md
The SAP HANA on Azure (Large Instances) types, referred to in SAP HANA certified
- BW/4HANA - Other SAP HANA workloads in Azure.
-The solution is based on the SAP-HANA certified dedicated hardware stamp ([SAP HANA tailored data center integration ΓÇô TDI](https://scn.sap.com/docs/DOC-63140)). If you run an SAP HANA TDI-configured solution, all the above SAP HANA-based applications work on the hardware infrastructure.
+The solution is based on the SAP-HANA certified dedicated hardware stamp ([SAP HANA tailored data center integration ΓÇô TDI](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html)). If you run an SAP HANA TDI-configured solution, all the above SAP HANA-based applications work on the hardware infrastructure.
Compared to running SAP HANA in VMs, this solution offers the benefit of much larger memory volumes.
virtual-machines Hana Storage Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-storage-architecture.md
In this article, we'll look at the storage architecture for deploying SAP HANA on Azure Large Instances (also known as BareMetal Infrastructure).
-The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on the classic deployment model per SAP recommended guidelines. For more information on the guidelines, see [SAP HANA storage requirements](https://archive.sap.com/kmuuid2/70c8e423-c8aa-3210-3fae-e043f5c1ca92/SAP%20HANA%20TDI%20-%20Storage%20Requirements.pdf).
+The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on the classic deployment model per SAP recommended guidelines.
The Type I class of HANA Large Instances comes with four times the memory volume as storage volume, whereas the Type II class comes with a volume intended for storing HANA transaction log backups. For more information, see [Install and configure SAP HANA (Large Instances) on Azure](hana-installation.md).
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
Other advantages of Ultra disk can be the better read latency in comparison to p
> Ultra disk is not yet present in all the Azure regions and is also not yet supporting all VM types listed below. For detailed information where Ultra disk is available and which VM families are supported, check the article [What disk types are available in Azure?](../../disks-types.md#ultra-disks).

### Production recommended storage solution with pure Ultra disk configuration
-In this configuration, you keep the **/hana/data** and **/hana/log** volumes separately. The suggested values are derived out of the KPIs that SAP has to certify VM types for SAP HANA and storage configurations as recommended in the [SAP TDI Storage Whitepaper](https://archive.sap.com/kmuuid2/70c8e423-c8aa-3210-3fae-e043f5c1ca92/SAP%20HANA%20TDI%20-%20Storage%20Requirements.pdf).
+In this configuration, you keep the **/hana/data** and **/hana/log** volumes separately. The suggested values are derived out of the KPIs that SAP has to certify VM types for SAP HANA and storage configurations as recommended in the [SAP TDI Storage Whitepaper](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html).
The recommendations are often exceeding the SAP minimum requirements as stated earlier in this article. The listed recommendations are a compromise between the size recommendations by SAP and the maximum storage throughput the different VM types provide.
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
The basic configuration of a VM node for SAP HANA scale-out looks like:
- All other disk volumes aren't shared among the different nodes and aren't based on NFS. Installation configurations and steps for scale-out HANA installations with non-shared **/han).
-Sizing the volumes or disks, you need to check the document [SAP HANA TDI Storage Requirements](https://archive.sap.com/kmuuid2/70c8e423-c8aa-3210-3fae-e043f5c1ca92/SAP%20HANA%20TDI%20-%20Storage%20Requirements.pdf), for the size required dependent on the number of worker nodes. The document releases a formula you need to apply to get the required capacity of the volume
+When sizing the volumes or disks, check the document [SAP HANA TDI Storage Requirements](https://www.sap.com/documents/2017/09/e6519450-d47c-0010-82c7-eda71af511fa.html) for the size required depending on the number of worker nodes. The document provides a formula you need to apply to get the required capacity of the volume.
The other design criterion displayed in the graphic of the single-node configuration for a scale-out SAP HANA VM is the virtual network, or more precisely the subnet configuration. SAP highly recommends separating the client/application-facing traffic from the communication between the HANA nodes. As shown in the graphic, this goal is achieved by attaching two different vNICs to the VM. Both vNICs are in different subnets and have two different IP addresses. You then control the flow of traffic with routing rules using NSGs or user-defined routes.
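As a rough sketch of this dual-vNIC layout, the following Azure PowerShell fragment attaches two NICs in separate subnets to one VM configuration. All names (`hanaVnet`, `client-subnet`, `internode-subnet`, and so on) and address ranges are hypothetical, and `$resourceGroup`, `$location`, and `$vmConfig` are assumed to exist, as in the earlier VM-creation script:

```powershell
# Hypothetical virtual network with one subnet per traffic class.
$clientSubnet = New-AzVirtualNetworkSubnetConfig -Name "client-subnet" -AddressPrefix 10.0.1.0/24
$nodeSubnet   = New-AzVirtualNetworkSubnetConfig -Name "internode-subnet" -AddressPrefix 10.0.2.0/24
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $location `
    -Name "hanaVnet" -AddressPrefix 10.0.0.0/16 -Subnet $clientSubnet, $nodeSubnet

# One vNIC per subnet: client/application traffic vs. HANA internode traffic.
$clientNic = New-AzNetworkInterface -Name "hana-client-nic" -ResourceGroupName $resourceGroup `
    -Location $location -SubnetId $vnet.Subnets[0].Id
$nodeNic = New-AzNetworkInterface -Name "hana-internode-nic" -ResourceGroupName $resourceGroup `
    -Location $location -SubnetId $vnet.Subnets[1].Id

# Attach both vNICs to the VM configuration; mark the client-facing one as primary.
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $clientNic.Id -Primary
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nodeNic.Id
```

NSG rules or user-defined routes would then be applied per subnet to keep the two traffic classes apart.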
Get familiar with the articles as listed
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md)
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md)
- [High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server](./sap-hana-high-availability.md)
-- [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](./sap-hana-high-availability-rhel.md)
+- [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](./sap-hana-high-availability-rhel.md)
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
In the table below typical SAP communication ports are listed. Basically it is s
**) sid = SAP-System-ID For more information, see [TCP/IP Ports Used by SAP Applications]
-(https://scn.sap.com/docs/DOC-17124). Using this document, you can open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
+(https://help.sap.com/docs/Security/575a9f0e56f34c6e8138439eefc32b16/616a3c0b1cc748238de9c0341b15c63c.html). Using this document, you can open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
Other security measures when deploying VMs in such a scenario could be to create a [Network Security Group][virtual-networks-nsg] to define access rules.
We can separate the discussion about SAP high availability in Azure into two par
and how it can be combined with Azure infrastructure HA.
-SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises physical or virtual environment. The following paper from SAP describes [standard SAP High Availability configurations in virtualized environments on Windows](https://scn.sap.com/docs/DOC-44415). There is no sapinst-integrated SAP-HA configuration for Linux. For more information about SAP HA on-premises for Linux, see [SAP High Availability Partner Information](https://scn.sap.com/docs/DOC-8541).
+SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises physical or virtual environment. The following paper from SAP describes [standard SAP High Availability configurations in virtualized environments on Windows](https://help.sap.com/docs/SAP_NETWEAVER_703/a2cf03bc73a44b2a87d535cdb35e529e/45237d7e9f9b4002e10000000a155369.html). There is no sapinst-integrated SAP-HA configuration for Linux. For more information about SAP HA on-premises for Linux, see [SAP High Availability Partner Information](https://scn.sap.com/docs/DOC-8541).
### Azure Infrastructure High Availability
Read the articles:
- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_general.md)
-- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
+- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
virtual-machines Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-across-regions.md
If you are using the scenario of sharing the DR target with a QA system in one V
- There are two [operation modes](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/627bd11e86c84ec2b9fcdf585d24011c.html) with delta_datashipping and logreplay, which are available for such a scenario
- Both operation modes have different memory requirements without preloading data
-- Delta_datashipping might require drastically less memory without the preload option than logreplay could require. See chapter 4.3 of the SAP document [How To Perform System Replication for SAP HANA](https://archive.sap.com/kmuuid2/9049e009-b717-3110-ccbd-e14c277d84a3/How%20to%20Perform%20System%20Replication%20for%20SAP%20HANA.pdf)
+- Delta_datashipping might require drastically less memory without the preload option than logreplay could require. See chapter 4.3 of the SAP document [How To Perform System Replication for SAP HANA](https://www.sap.com/documents/2017/07/606a676e-c97c-0010-82c7-eda71af511fa.html)
- The memory requirement of the logreplay operation mode without preload is not deterministic and depends on the columnstore structures loaded. In extreme cases, you might require 50% of the memory of the primary instance. The memory for the logreplay operation mode is independent of whether you chose to have data preload set or not.
For step-by-step guidance on setting up these configurations in Azure, see:
- [Set up SAP HANA system replication in Azure VMs](sap-hana-high-availability.md)
- [High availability for SAP HANA by using system replication](https://blogs.sap.com/2018/01/08/your-sap-on-azure-part-4-high-availability-for-sap-hana-using-system-replication/)
virtual-machines Sap Hana Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-overview.md
These articles provide a good overview of using SAP HANA in Azure:
It's also a good idea to be familiar with these articles about SAP HANA:
- [High availability for SAP HANA](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/6d252db7cdd044d19ad85b46e6c294a4.html)
-- [FAQ: High availability for SAP HANA](https://archive.sap.com/documents/docs/DOC-66702)
-- [Perform system replication for SAP HANA](https://archive.sap.com/documents/docs/DOC-47702)
+- [FAQ: High availability for SAP HANA](https://www.sap.com/documents/2016/05/c6f37cb5-737c-0010-82c7-eda71af511fa.html)
+- [Perform system replication for SAP HANA](https://www.sap.com/documents/2017/07/606a676e-c97c-0010-82c7-eda71af511fa.html)
- [SAP HANA 2.0 SPS 01 What's new: High availability](https://blogs.sap.com/2017/05/15/sap-hana-2.0-sps-01-whats-new-high-availability-by-the-sap-hana-academy/)
- [Network recommendations for SAP HANA system replication](https://www.sap.com/documents/2016/06/18079a1c-767c-0010-82c7-eda71af511fa.html)
- [SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html)
Measure your availability requirement against the SLAs that Azure components can
## Next steps

- Learn about [SAP HANA availability within one Azure region](./sap-hana-availability-one-region.md).
-- Learn about [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md).
+- Learn about [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md).
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Register subscription for public preview
-
-1. Go to the [Preview features](https://portal.azure.com/#blade/Microsoft_Azure_Resources/PreviewFeaturesBlade) page.
-
-1. Search for **AllowAzureNetworkManager**.
-
-1. Select the checkbox next to *AllowAzureNetworkManager* and then select **+ Register**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/registration.png" alt-text="Screenshot of preview feature registration page.":::
-
## Create Virtual Network Manager

1. Select **+ Create a resource** and search for **Network Manager**. Then select **Create** to begin setting up Azure Virtual Network Manager.
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Resource|Scenario|Steps|
- You can't delete a prefix if any addresses within it are assigned to public IP address resources associated to a resource. Dissociate all public IP address resources that are assigned IP addresses from the prefix first. For more information on disassociating public IP addresses, see [Manage public IP addresses](virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).
- IPv6 is supported on basic public IPs with **dynamic** allocation only. Dynamic allocation means the IPv6 address will change if you delete and redeploy your resource in Azure.
- Standard IPv6 public IPs support static (reserved) allocation.
-- Standard internal load balancers support dynamic allocation from within the subnet to which they're assigned.
+- Standard internal load balancers support dynamic allocation from within the subnet to which they're assigned.
+- Routing preference Internet IPs are not supported in a public IP address prefix.
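To make the prefix/address relationship above concrete, here is an illustrative Azure PowerShell sketch only (resource names, region, and prefix length are hypothetical) of creating a prefix and allocating an address from it:

```powershell
# Create a /28 public IP address prefix (16 contiguous addresses).
$prefix = New-AzPublicIpPrefix -Name "myPrefix" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -PrefixLength 28 -Sku Standard

# Allocate a standard static public IP address from that prefix.
New-AzPublicIpAddress -Name "myPublicIP" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Sku Standard -AllocationMethod Static -PublicIpPrefix $prefix
```

Per the limitation noted above, a routing preference Internet IP cannot be drawn from such a prefix.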
## Pricing
virtual-network Routing Preference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-overview.md
The price difference between both options is reflected in the internet egress da
* Internet routing preference is only compatible with zone-redundant standard SKU of public IP address. Basic SKU of public IP address is not supported.
* Internet routing preference currently supports only IPv4 public IP addresses. IPv6 public IP addresses are not supported.
+* Internet routing preference IPs are not supported in a public IP address prefix.
### Regional Unavailability Internet routing preference is available in all regions except:
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
A couple important notes about the NAT gateway and Azure App Services integratio
* Virtual network integration does not provide inbound private access to your app from the virtual network.
* Because of the nature of how virtual network integration operates, the traffic from virtual network integration does not show up in Azure Network Watcher or NSG flow logs.
-### Port 25 cannot be used for regional VNet integration with NAT gateway
-
-Port 25 is an SMTP port that is used to send email. Azure App Services regional virtual network integration cannot use port 25 by design. Even if the block on port 25 is removed from your subscription, you still cannot use port 25 with Azure App Services.
-
-If NAT gateway is enabled on the integration subnet with your Azure App services, NAT gateway can still be used to connect outbound to the internet on other ports except port 25.
-
-**Work around solution:**
-* Set up port forwarding to a Windows VM to route traffic to Port 25.
## NAT gateway public IP not being used for outbound traffic

### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a VNet
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
description: Learn about virtual network peering in Azure, including how it enables you to connect networks in Azure Virtual Network. documentationcenter: na-+ Previously updated : 11/15/2019 Last updated : 07/10/2022 -
+#customer intent: As a cloud architect, I need to know how to use virtual network peering for connecting virtual networks. This will allow me to design connectivity correctly, understand future scalability options, and limitations.
# Virtual network peering
Network traffic between peered virtual networks is private. Traffic between the
For peered virtual networks, resources in either virtual network can directly connect with resources in the peered virtual network.
-The network latency between virtual machines in peered virtual networks in the same region is the same as the latency within a single virtual network. The network throughput is based on the bandwidth that's allowed for the virtual machine, proportionate to its size. There isn't any additional restriction on bandwidth within the peering.
+The network latency between virtual machines in peered virtual networks in the same region is the same as the latency within a single virtual network. The network throughput is based on the bandwidth that's allowed for the virtual machine, proportionate to its size. There isn't any extra restriction on bandwidth within the peering.
The traffic between virtual machines in peered virtual networks is routed directly through the Microsoft backbone infrastructure, not through a gateway or over the public Internet. You can apply network security groups in either virtual network to block access to other virtual networks or subnets.
-When configuring virtual network peering, either open or close the network security group rules between the virtual networks. If you open full connectivity between peered virtual networks, you can apply network security groups to block or deny specific access. Full connectivity is the default option. To learn more about network security groups, see [Security groups](./network-security-groups-overview.md).
+When you configure virtual network peering, either open or close the network security group rules between the virtual networks. If you open full connectivity between peered virtual networks, you can apply network security groups to block or deny specific access. Full connectivity is the default option. To learn more about network security groups, see [Security groups](./network-security-groups-overview.md).
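As a sketch of blocking specific access between peered virtual networks (the NSG name, resource group, and the peer's 10.1.0.0/16 address range are hypothetical), an NSG rule can deny traffic arriving from the peered virtual network's address space, using the same cmdlets shown in the VM-creation script earlier:

```powershell
# Deny all inbound traffic originating from the peered virtual network's address space.
$denyPeer = New-AzNetworkSecurityRuleConfig -Name "deny-peered-vnet" -Protocol * `
    -Direction Inbound -Priority 200 -SourceAddressPrefix 10.1.0.0/16 -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange * -Access Deny

# Create an NSG carrying the rule; associate it with the subnets to protect.
New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Location "eastus" `
    -Name "peering-nsg" -SecurityRules $denyPeer
```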
-## Resize the address of Azure virtual networks that are peered
+## Resize the address space of Azure virtual networks that are peered
-You can resize the address of Azure virtual networks that are peered without incurring any downtime. This feature is useful when you need to grow or resize the virtual networks in Azure after scaling your workloads. With this feature, existing peerings on the virtual network do not need to be deleted before adding or deleting an address prefix on the virtual network. This feature can work for both IPv4 and IPv6 address spaces.
+You can resize the address space of Azure virtual networks that are peered without incurring any downtime on the currently peered address space. This feature is useful when you need to resize the virtual network's address space after scaling your workloads. After resizing the address space, all that is required is for peers to be synced with the new address space changes. Resizing works for both IPv4 and IPv6 address spaces.
-Note:
+Addresses can be resized in the following ways:
-This feature does not support the following scenarios if the virtual network to be updated is peered with:
+- Modifying the address range prefix of an existing address range (for example, changing 10.1.0.0/16 to 10.1.0.0/18)
+- Adding address ranges to a virtual network
+- Deleting address ranges from a virtual network
-* A classic virtual network
-* A managed virtual network such as the Azure VWAN hub
+Syncing of virtual network peers can be performed through the Azure portal or with Azure PowerShell.
+> [!IMPORTANT]
+> This feature doesn't support scenarios where the virtual network to be updated is peered with:
+> * A classic virtual network
+> * A managed virtual network such as the Azure VWAN hub
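A minimal Azure PowerShell sketch of the resize-then-sync flow described above (the virtual network names, peering name, resource group, and the added 10.5.0.0/16 range are all hypothetical):

```powershell
# Add an address range to vnet-a and persist the change.
$vnetA = Get-AzVirtualNetwork -Name "vnet-a" -ResourceGroupName "myResourceGroup"
$vnetA.AddressSpace.AddressPrefixes.Add("10.5.0.0/16")
Set-AzVirtualNetwork -VirtualNetwork $vnetA

# Sync the peering on the remote side (vnet-b) so it learns the new address space.
Sync-AzVirtualNetworkPeering -Name "vnet-b-to-vnet-a" `
    -VirtualNetworkName "vnet-b" -ResourceGroupName "myResourceGroup"
```

Until the sync runs, the peer continues to route based on the old address space.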
## Service chaining
vpn-gateway Point To Site Vpn Client Cert Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-mac.md
Configure authentication settings. There are two sets of instructions. Choose th
:::image type="content" source="./media/point-to-site-vpn-client-cert-mac/connected.png" alt-text="Screenshot shows Connected." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/connected.png":::
-## <a name="openvpn-macOS"></a>OpenVPN - macOS steps
+## <a name="openvpn-macOS"></a>OpenVPN Client - macOS steps
>[!INCLUDE [OpenVPN Mac](../../includes/vpn-gateway-vwan-config-openvpn-mac.md)]
-## <a name="OpenVPN-iOS"></a>OpenVPN - iOS steps
+## <a name="OpenVPN-iOS"></a>OpenVPN Client - iOS steps
+
+The following steps use **OpenVPN Connect** from the App Store.
>[!INCLUDE [OpenVPN iOS](../../includes/vpn-gateway-vwan-config-openvpn-ios.md)]
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
DRS 2.0 includes 17 rule groups, as shown in the following table. Each group con
|**[APPLICATION-ATTACK-RFI](#drs931-10)**|Protection against remote file inclusion attacks|
|**[APPLICATION-ATTACK-RCE](#drs932-10)**|Protection against remote command execution|
|**[APPLICATION-ATTACK-PHP](#drs933-10)**|Protect against PHP-injection attacks|
-|**[CROSS-SITE-SCRIPTING](#drs941-10)**|XSS - Cross-site Scripting|
+|**[APPLICATION-ATTACK-XSS](#drs941-10)**|Protect against cross-site scripting attacks|
|**[APPLICATION-ATTACK-SQLI](#drs942-10)**|Protect against SQL-injection attacks|
|**[APPLICATION-ATTACK-SESSION-FIXATION](#drs943-10)**|Protect against session-fixation attacks|
|**[APPLICATION-ATTACK-SESSION-JAVA](#drs944-10)**|Protect against JAVA attacks|
Front Door.
|933110|PHP Injection Attack: PHP Script File Upload Found|
|933120|PHP Injection Attack: Configuration Directive Found|
|933130|PHP Injection Attack: Variables Found|
-|933131|PHP Injection Attack: Variables Found|
|933140|PHP Injection Attack: I/O Stream Found|
|933150|PHP Injection Attack: High-Risk PHP Function Name Found|
|933151|PHP Injection Attack: Medium-Risk PHP Function Name Found|
|933160|PHP Injection Attack: High-Risk PHP Function Call Found|
-|933161|PHP Injection Attack: Low-Value PHP Function Call Found|
|933170|PHP Injection Attack: Serialized Object Injection|
|933180|PHP Injection Attack: Variable Function Call Found|
|933200|PHP Injection Attack: Wrapper scheme detected|
Front Door.
|933110|PHP Injection Attack: PHP Script File Upload Found|
|933120|PHP Injection Attack: Configuration Directive Found|
|933130|PHP Injection Attack: Variables Found|
-|933131|PHP Injection Attack: Variables Found|
|933140|PHP Injection Attack: I/O Stream Found|
|933150|PHP Injection Attack: High-Risk PHP Function Name Found|
|933151|PHP Injection Attack: Medium-Risk PHP Function Name Found|
|933160|PHP Injection Attack: High-Risk PHP Function Call Found|
-|933161|PHP Injection Attack: Low-Value PHP Function Call Found|
|933170|PHP Injection Attack: Serialized Object Injection|
|933180|PHP Injection Attack: Variable Function Call Found|