Updates from: 04/18/2023 01:09:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
In this quickstart, you use an ASP.NET application to sign in using a social ide
## Run the application in Visual Studio 1. In the sample application project folder, open the **B2C-WebAPI-DotNet.sln** solution in Visual Studio.
-1. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Set StartUp Projects**.
+1. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Configure StartUp Projects...**.
1. Select **Multiple startup projects** and change the **Action** for both projects to **Start**. 1. Select **OK**. 1. Press **F5** to debug both applications. Each application opens in its own browser tab:
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 04/14/2023 Last updated : 04/17/2023
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 10/20/2022 Last updated : 04/17/2023 # Plan cloud HR application to Azure Active Directory user provisioning
-Historically, IT staff have relied on manual methods to create, update, and delete employees. They've used methods such as uploading CSV files or custom scripts to sync employee data. These provisioning processes are error prone, insecure, and hard to manage.
+Historically, IT staff has relied on manual methods to create, update, and delete employees. They've used methods such as uploading CSV files or custom scripts to sync employee data. These provisioning processes are error prone, insecure, and hard to manage.
To manage the identity lifecycles of employees, vendors, or contingent workers, [Azure Active Directory (Azure AD) user provisioning service](../app-provisioning/user-provisioning.md) offers integration with cloud-based human resources (HR) applications. Examples of applications include Workday or SuccessFactors.
The following video provides guidance on planning your HR-driven provisioning in
The Azure AD user provisioning service enables automation of the following HR-based identity lifecycle management scenarios: -- **New employee hiring:** When a new employee is added to the cloud HR app, a user account is automatically created in Active Directory and Azure AD with the option to write back the email address and username attributes to the cloud HR app.
+- **New employee hiring:** Adding an employee to the cloud HR app automatically creates a user in Active Directory and Azure AD. Adding a user account includes the option to write back the email address and username attributes to the cloud HR app.
- **Employee attribute and profile updates:** When an employee record such as name, title, or manager is updated in the cloud HR app, their user account is automatically updated in Active Directory and Azure AD. - **Employee terminations:** When an employee is terminated in the cloud HR app, their user account is automatically disabled in Active Directory and Azure AD. - **Employee rehires:** When an employee is rehired in the cloud HR app, their old account can be automatically reactivated or reprovisioned to Active Directory and Azure AD.
The cloud HR app integration with Azure AD user provisioning is ideally suited f
- Want a prebuilt, cloud-based solution for cloud HR user provisioning. - Require direct user provisioning from the cloud HR app to Active Directory or Azure AD. - Require users to be provisioned by using data obtained from the cloud HR app.-- Require joining, moving, and leaving users to be synced to one or more Active Directory forests, domains, and OUs based only on change information detected in the cloud HR app.
+- Syncing users who are joining, moving, and leaving. The sync happens between one or more Active Directory forests, domains, and OUs based only on change information detected in the cloud HR app.
- Use Microsoft 365 for email. ## Learn
This article uses the following terms:
This capability of HR-driven IT provisioning offers the following significant business benefits: - **Increase productivity:** You can now automate the assignment of user accounts and Microsoft 365 licenses and provide access to key groups. Automating assignments gives new hires immediate access to their job tools and increases productivity.-- **Manage risk:** You can increase security by automating changes based on employee status or group memberships with data flowing in from the cloud HR app. Automating changes ensures that user identities and access to key apps update automatically when users transition or leave the organization.
+- **Manage risk:** Automate changes based on employee status or group membership to increase security. This automation ensures that user identities and access to key apps update automatically. For example, an update in the HR app when a user transitions or leaves the organization flows in automatically.
- **Address compliance and governance:** Azure AD supports native audit logs for user provisioning requests performed by apps of both source and target systems. With auditing, you can track who has access to the apps from a single screen. - **Manage cost:** Automatic provisioning reduces costs by avoiding inefficiencies and human error associated with manual provisioning. It reduces the need for custom-developed user provisioning solutions built over time by using legacy and outdated platforms.
This capability of HR-driven IT provisioning offers the following significant bu
To configure the cloud HR app to Azure AD user provisioning integration, you require a valid [Azure AD Premium license](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) and a license for the cloud HR app, such as Workday or SuccessFactors.
-You also need a valid Azure AD Premium P1 or higher subscription license for every user that will be sourced from the cloud HR app and provisioned to either Active Directory or Azure AD. Any improper number of licenses owned in the cloud HR app might lead to errors during user provisioning.
+You also need a valid Azure AD Premium P1 or higher subscription license for every user that is sourced from the cloud HR app and provisioned to either Active Directory or Azure AD. Any improper number of licenses owned in the cloud HR app might lead to errors during user provisioning.
### Prerequisites
Include a representative from the HR organization who can provide inputs on exis
### Plan communications
-Communication is critical to the success of any new service. Proactively communicate with your users about when and how their experience will change. Let them know how to gain support if they experience issues.
+Communication is critical to the success of any new service. Proactively communicate with your users about when and how their experience is changing. Let them know how to gain support if they experience issues.
### Plan a pilot
This is the most common deployment topology. Use this topology, if you need to p
### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
-This topology supports business requirements where attribute mapping and provisioning logic differs based on user type (employee/contractor), user location or user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning based on division or country.
+This topology supports business requirements where attribute mapping and provisioning logic differ based on user type (employee/contractor), user location or user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning based on division or country.
:::image type="content" source="media/plan-cloud-hr-provision/topology-2-separate-apps-with-single-ad-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to single AD domain" lightbox="media/plan-cloud-hr-provision/topology-2-separate-apps-with-single-ad-domain.png":::
For example, if you want to create users in OU based on the HR attribute **Munic
```
Switch([Municipality], "OU=Default,OU=Users,DC=contoso,DC=com", "Dallas", "OU=Dallas,OU=Users,DC=contoso,DC=com", "Austin", "OU=Austin,OU=Users,DC=contoso,DC=com", "Seattle", "OU=Seattle,OU=Users,DC=contoso,DC=com", "London", "OU=London,OU=Users,DC=contoso,DC=com")
```
-With this expression, if the Municipality value is Dallas, Austin, Seattle, or London, the user account will be created in the corresponding OU. If there's no match, then the account is created in the default OU.
+With this expression, if the Municipality value is Dallas, Austin, Seattle, or London, the user account is created in the corresponding OU. If there's no match, then the account is created in the default OU.
## Plan for password delivery of new user accounts When you initiate the Joiners process, you need to set and deliver a temporary password of new user accounts. With cloud HR to Azure AD user provisioning, you can roll out the Azure AD [self-service password reset](../authentication/tutorial-enable-sspr.md) (SSPR) capability for the user on day one.
-SSPR is a simple means for IT administrators to enable users to reset their passwords or unlock their accounts. You can provision the **Mobile Number** attribute from the cloud HR app to Active Directory and sync it with Azure AD. After the **Mobile Number** attribute is in Azure AD, you can enable SSPR for the user's account. Then on day one, the new user can use the registered and verified mobile number for authentication. Refer to the [SSPR documentation](../authentication/howto-sspr-authenticationdata.md) for details on how to pre-populate authentication contact information.
+SSPR is a simple means for IT administrators to enable users to reset their passwords or unlock their accounts. You can provision the **Mobile Number** attribute from the cloud HR app to Active Directory and sync it with Azure AD. After the **Mobile Number** attribute is in Azure AD, you can enable SSPR for the user's account. Then on day one, the new user can use the registered and verified mobile number for authentication. Refer to the [SSPR documentation](../authentication/howto-sspr-authenticationdata.md) for details on how to prepopulate authentication contact information.
## Plan for initial cycle
After you configure the cloud HR app to Azure AD user provisioning, run test cas
|User is terminated in the cloud HR app.|- The user account is disabled in Active Directory.</br>- The user can't log into any enterprise apps protected by Active Directory. |User supervisory organization is updated in the cloud HR app.|Based on the attribute mapping, the user account moves from one OU to another in Active Directory.| |HR updates the user's manager in the cloud HR app.|The manager field in Active Directory is updated to reflect the new manager's name.|
-|HR rehires an employee into a new role.|Behavior depends on how the cloud HR app is configured to generate employee IDs:</br>- If the old employee ID is reused for a rehire, the connector enables the existing Active Directory account for the user.</br>- If the rehire gets a new employee ID, the connector creates a new Active Directory account for the user.|
+|HR rehires an employee into a new role.|Behavior depends on how the cloud HR app is configured to generate employee IDs:</br>- If the old employee ID is used for a rehired employee, the connector enables the existing Active Directory account for the user.</br>- If the rehired employee gets a new employee ID, the connector creates a new Active Directory account for the user.|
|HR converts the employee to a contract worker or vice versa.|A new Active Directory account is created for the new persona and the old account gets disabled on the conversion effective date.| Use the previous results to determine how to transition your automatic user provisioning implementation into production based on your established timelines.
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
One method for achieving ABAC with Azure Active Directory is using [dynamic grou
Authorization logic is often implemented within the applications or solutions where access control is required. In many cases, application development platforms offer middleware or other API solutions that simplify the implementation of authorization. Examples include use of the [AuthorizeAttribute](/aspnet/core/security/authorization/simple?view=aspnetcore-5.0&preserve-view=true) in ASP.NET or [Route Guards](./scenario-spa-sign-in.md?tabs=angular2#sign-in-with-a-pop-up-window) in Angular.
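As a brief illustration of that middleware approach, here is a minimal sketch (the controller, route, and role name are hypothetical) of `AuthorizeAttribute` guarding an ASP.NET Core action:
```csharp
// Minimal sketch, assuming an ASP.NET Core web API with authentication already configured.
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // Only authenticated callers in the "Admin" role reach this action;
    // the authorization middleware rejects everyone else with a 401 or 403.
    [Authorize(Roles = "Admin")]
    [HttpGet]
    public IActionResult Get() => Ok(new[] { "order-001", "order-002" });
}
```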
-For authorization approaches that rely on information about the authenticated entity, an application evaluates information exchanged during authentication. For example, by using the information that was provided within a [security token](./security-tokens.md)). For information not contained in a security token, an application might make extra calls to external resources.
+For authorization approaches that rely on information about the authenticated entity, an application evaluates information exchanged during authentication. For example, by using the information that was provided within a [security token](./security-tokens.md). For information not contained in a security token, an application might make extra calls to external resources.
It's not strictly necessary for developers to embed authorization logic entirely within their applications. Instead, dedicated authorization services can be used to centralize authorization implementation and management.
It's not strictly necessary for developers to embed authorization logic entirely
- To learn about custom role-based access control implementation in applications, see [Role-based access control for application developers](./custom-rbac-for-developers.md). - To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](./application-model.md).-- For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md).
+- For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md).
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-glossary.md
An identity used by a software workload like an application, service, script, or
## Workload identity federation
-Allows you to securely access Azure AD protected resources from external apps and services without needing to manage secrets (for supported scenarios). For more information, see [workload identity federation](workload-identity-federation.md).)
+Allows you to securely access Azure AD protected resources from external apps and services without needing to manage secrets (for supported scenarios). For more information, see [workload identity federation](workload-identity-federation.md).
## Next steps
active-directory Howto Build Services Resilient To Metadata Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-build-services-resilient-to-metadata-refresh.md
services.Configure<JwtBearerOptions>(AzureADDefaults.JwtBearerAuthenticationSche
// shouldn’t be necessary as it’s true by default options.RefreshOnIssuerKeyNotFound = true; …
-};
+});
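// Pieced together, the registration this excerpt comes from looks roughly like the
// following sketch (assuming ASP.NET Core with the AzureADDefaults scheme named above):
services.Configure<JwtBearerOptions>(AzureADDefaults.JwtBearerAuthenticationScheme, options =>
{
    // Refetch the signing keys from the metadata endpoint when a token arrives
    // with an unknown key ID; shouldn't be necessary, as it's true by default.
    options.RefreshOnIssuerKeyNotFound = true;
});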
``` ## ASP.NET/ OWIN
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
> > > |Where:| Description | > > |||
-> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`) |
+> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`)) |
> > #### acquireTokenSilent: Get an access token silently >
> > > |Where: | Description | > > |||
-> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`) |
+> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`)) |
> > | `account` | The account a token is being requested for. This quickstart is about a single account application. If you want to build a multi-account app you'll need to define logic to identify which account to use for token requests using `accountsFromDeviceForParameters:completionBlock:` and passing correct `accountIdentifier` | > > [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
To satisfy either requirement, one of these operations must have been completed:
- You as the application developer have selected **Grant** in the Azure portal for yourself. - A tenant admin has selected **Grant/revoke admin consent for {tenant domain}** in the **API permissions** tab of the app registration in the Azure portal; see [Add permissions to access your web API](quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api). - You've provided a way for users to consent to the application; see [User consent](../manage-apps/user-admin-consent-overview.md#user-consent).-- You've provided a way for the tenant admin to consent for the application; see [Administrator consent]../manage-apps/user-admin-consent-overview.md#administrator-consent).
+- You've provided a way for the tenant admin to consent for the application; see [Administrator consent](../manage-apps/user-admin-consent-overview.md#admin-consent).
For more information on consent, see [Permissions and consent](v2-permissions-and-consent.md#consent).
active-directory Msal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-configuration.md
The list of authorities that are known and trusted by you. In addition to the au
| Property | Data Type | Required | Notes | |--|-||-| | `type` | String | Yes | Specifies the audience your app wants to target. Possible values: `AzureADandPersonalMicrosoftAccount`, `PersonalMicrosoftAccount`, `AzureADMultipleOrgs`, `AzureADMyOrg` |
-| `tenant_id` | String | Yes | Required only when `"type":"AzureADMyOrg"`. Optional for other `type` values. This can be a tenant domain such as `contoso.com`, or a tenant ID such as `72f988bf-86f1-41af-91ab-2d7cd011db46`) |
+| `tenant_id` | String | Yes | Required only when `"type":"AzureADMyOrg"`. Optional for other `type` values. This can be a tenant domain such as `contoso.com`, or a tenant ID such as `72f988bf-86f1-41af-91ab-2d7cd011db46` |
### authorization_user_agent
active-directory Msal Logging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-python.md
from opencensus.ext.azure.log_exporter import AzureLogHandler
APP_INSIGHTS_KEY = os.getenv('APP_INSIGHTS_KEY')
-logging.getLogger("msal").addHandler(AzureLogHandler(connection_string='InstrumentationKey={0}'.format(APP_INSIGHTS_KEY))
+logging.getLogger("msal").addHandler(AzureLogHandler(connection_string='InstrumentationKey={0}'.format(APP_INSIGHTS_KEY)))
``` ### Personal and organizational data in Python
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
> > > |Where:| Description | > > |||
-> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`) |
+> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`)) |
> > #### acquireTokenSilent: Get an access token silently >
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
If you have several subdomains and your scenario requires that, upon successful
In this approach: 1. Create a "shared" redirect URI per application to process the security tokens you receive from the authorization endpoint.
-1. Your application can send application-specific parameters (such as subdomain URL where the user originated or anything like branding information) in the state parameter. When using a state parameter, guard against CSRF protection as specified in [section 10.12 of RFC 6749](https://tools.ietf.org/html/rfc6749#section-10.12)).
+1. Your application can send application-specific parameters (such as subdomain URL where the user originated or anything like branding information) in the state parameter. When using a state parameter, guard against CSRF protection as specified in [section 10.12 of RFC 6749](https://tools.ietf.org/html/rfc6749#section-10.12).
1. The application-specific parameters will include all the information needed for the application to render the correct experience for the user, that is, construct the appropriate application state. The Azure AD authorization endpoint strips HTML from the state parameter so make sure you are not passing HTML content in this parameter. 1. When Azure AD sends a response to the "shared" redirect URI, it will send the state parameter back to the application. 1. The application can then use the value in the state parameter to determine which URL to further send the user to. Make sure you validate for CSRF protection.
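For illustration, the state-parameter handling described in these steps might look like the following sketch (the payload shape, property names, and Base64 encoding are assumptions, not part of the article):
```csharp
using System;
using System.Text;
using System.Text.Json;

public static class StateParameter
{
    // Pack a CSRF nonce plus app-specific routing data (for example, the originating
    // subdomain) into the OAuth "state" value sent to the authorization endpoint.
    public static string Create(string returnSubdomain, out string nonce)
    {
        nonce = Guid.NewGuid().ToString("N"); // persist per session/cookie for later validation
        var payload = JsonSerializer.Serialize(new { nonce, returnSubdomain });
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(payload));
    }

    // Validate the returned state and recover the subdomain to send the user to.
    public static bool TryGetSubdomain(string state, string expectedNonce, out string subdomain)
    {
        using var doc = JsonDocument.Parse(Encoding.UTF8.GetString(Convert.FromBase64String(state)));
        subdomain = doc.RootElement.GetProperty("returnSubdomain").GetString() ?? string.Empty;
        // Reject the response unless the nonce matches what this app issued (CSRF check).
        return doc.RootElement.GetProperty("nonce").GetString() == expectedNonce;
    }
}
```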
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
Like on action, you can also declare these required scopes in the configuration,
using Microsoft.Identity.Web [Authorize]
-[RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")
+[RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
public class TodoListController : Controller { // GET: api/values
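Pieced together, the controller-level declaration this excerpt refers to would look roughly like the following sketch (the action name and its body are illustrative placeholders):
```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web;

[Authorize]
// Reads the accepted scopes from configuration, for example an "AzureAd:Scopes" value in appsettings.json.
[RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
public class TodoListController : Controller
{
    // GET: api/values
    [HttpGet]
    public IEnumerable<string> Get() => new[] { "todo-1", "todo-2" };
}
```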
active-directory Scenario Spa Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-call-api.md
Use the acquired access token as a bearer in an HTTP request to call any web API
fetch(graphEndpoint, options) .then(function (response) { //do something with response
- }
+ })
``` # [Angular](#tab/angular)
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
using Microsoft.Identity.Web;
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(Configuration, "AzureAd") .EnableTokenAcquisitionToCallDownstreamApi()
- .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
+ .AddDownstreamWebApi("MyApi", Configuration.GetSection("MyApiScope"))
.AddInMemoryTokenCaches(); // ... ```
+If the web app needs to call another API resource, repeat the `.AddDownstreamWebApi()` method with the relevant scope as shown in the following snippet:
+
+```csharp
+using Microsoft.Identity.Web;
+
+// ...
+builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, "AzureAd")
+ .EnableTokenAcquisitionToCallDownstreamApi()
+ .AddDownstreamWebApi("MyApi", Configuration.GetSection("MyApiScope"))
+ .AddDownstreamWebApi("MyApi2", Configuration.GetSection("MyApi2Scope"))
+ .AddInMemoryTokenCaches();
+// ...
+```
+
+Note that `.EnableTokenAcquisitionToCallDownstreamApi` is called without any parameter, which means that the access token will be acquired just in time as the controller requests the token by specifying the scope.
+
+The scope can also be passed in when calling `.EnableTokenAcquisitionToCallDownstreamApi`, which would make the web app acquire the token during the initial user login itself. The token will then be pulled from the cache when the controller requests it.
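For illustration, a controller could then trigger that just-in-time token acquisition by injecting `IDownstreamWebApi` and calling the service registered as "MyApi" (a sketch; the controller and action names are placeholders):
```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web;

public class TodoListController : Controller
{
    private readonly IDownstreamWebApi _downstreamWebApi;

    public TodoListController(IDownstreamWebApi downstreamWebApi) =>
        _downstreamWebApi = downstreamWebApi;

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        // The access token for "MyApi" is acquired (or read from the cache) here,
        // on behalf of the caller, using the scope configured for that section.
        var response = await _downstreamWebApi.CallWebApiForUserAsync("MyApi");
        return Ok(await response.Content.ReadAsStringAsync());
    }
}
```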
Similar to web apps, various token cache implementations can be chosen. For details, see [Microsoft identity web - Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) on GitHub.
active-directory Tutorial V2 Shared Device Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md
PublicClientApplication.create(this.getApplicationContext(),
loadAccount(); } @Override
- public void onError(MsalException exception{
+ public void onError(MsalException exception){
/*Fail to initialize PublicClientApplication */ } });
The `loadAccount` method retrieves the account of the signed in user. The `onAcc
```java private void loadAccount() {
- mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback()
+ mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback())
{ @Override public void onAccountLoaded(@Nullable IAccount activeAccount)
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 01/05/2023 Last updated : 04/17/2023
The OAuth 2.0 authorization code grant type, or _auth code flow_, enables a client application to obtain authorized access to protected resources like web APIs. The auth code flow requires a user-agent that supports redirection from the authorization server (the Microsoft identity platform) back to your application. For example, a web browser, desktop, or mobile application operated by a user to sign in to your app and access their data.
-This article describes low-level protocol details usually required only when manually crafting and issuing raw HTTP requests to execute the flow, which we do **not** recommend. Instead, use a [Microsoft-built and supported authentication library](reference-v2-libraries.md) to get security tokens and call protected web APIs in your apps.
+This article describes low-level protocol details required only when manually crafting and issuing raw HTTP requests to execute the flow, which we do **not** recommend. Instead, use a [Microsoft-built and supported authentication library](reference-v2-libraries.md) to get security tokens and call protected web APIs in your apps.
## Applications that support the auth code flow
Redirect URIs for SPAs that use the auth code flow require special configuration
The `spa` redirect type is backward-compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.
-If you attempt to use the authorization code flow without setting up CORS for your redirect URI, you will see this error in the console:
+If you attempt to use the authorization code flow without setting up CORS for your redirect URI, you'll see this error in the console:
```http access to XMLHttpRequest at 'https://login.microsoftonline.com/common/v2.0/oauth2/token' from origin 'yourApp.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&code_challenge_method=S256 ```
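As a rough sketch of how an app might compose the authorization request shown above (hand-building the URL and PKCE challenge; in practice an MSAL library does this for you):
```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static string BuildAuthorizeUrl(string codeVerifier)
{
    // code_challenge = BASE64URL(SHA256(code_verifier)), per RFC 7636.
    using var sha = SHA256.Create();
    string challenge = Convert.ToBase64String(sha.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier)))
        .TrimEnd('=').Replace('+', '-').Replace('/', '_');

    return "https://login.microsoftonline.com/common/oauth2/v2.0/authorize" +
           "?client_id=6731de76-14a6-49ae-97bc-6eba6914391e" +
           "&response_type=code" +
           "&redirect_uri=" + Uri.EscapeDataString("http://localhost/myapp/") +
           "&response_mode=query" +
           "&scope=" + Uri.EscapeDataString("openid offline_access https://graph.microsoft.com/mail.read") +
           "&state=12345" +
           "&code_challenge=" + challenge +
           "&code_challenge_method=S256";
}
```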
-> [!TIP]
-> Select the link below to execute this request! After signing in, your browser should be redirected to `http://localhost/myapp/` with a `code` in the address bar.
-> <a href="https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&response_mode=query&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read&state=12345" target="_blank">https://login.microsoftonline.com/common/oauth2/v2.0/authorize...</a>
- | Parameter | Required/optional | Description | |--|-|--| | `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. Valid values are `common`, `organizations`, `consumers`, and tenant identifiers. For guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to sign them into the resource tenant. For more information, see [Endpoints](active-directory-v2-protocols.md#endpoints). |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow). | | `redirect_uri` | required | The `redirect_uri` of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. For native and mobile apps, use one of the recommended values: `https://login.microsoftonline.com/common/oauth2/nativeclient` for apps using embedded browsers or `http://localhost` for apps that use system browsers. | | `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this parameter can cover multiple resources. This value allows your app to get consent for multiple web APIs you want to call. |
-| `response_mode` | recommended | Specifies how the identity platform should return the requested token to your app. <br/><br/>Supported values:<br/><br/>- `query`: Default when requesting an access token. Provides the code as a query string parameter on your redirect URI. The `query` parameter is not supported when requesting an ID token by using the implicit flow. <br/>- `fragment`: Default when requesting an ID token by using the implicit flow. Also supported if requesting *only* a code.<br/>- `form_post`: Executes a POST containing the code to your redirect URI. Supported when requesting a code.<br/><br/> |
+| `response_mode` | recommended | Specifies how the identity platform should return the requested token to your app. <br/><br/>Supported values:<br/><br/>- `query`: Default when requesting an access token. Provides the code as a query string parameter on your redirect URI. The `query` parameter isn't supported when requesting an ID token by using the implicit flow. <br/>- `fragment`: Default when requesting an ID token by using the implicit flow. Also supported if requesting *only* a code.<br/>- `form_post`: Executes a POST containing the code to your redirect URI. Supported when requesting a code.<br/><br/> |
| `state` | recommended | A value included in the request that is also returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred. For instance, it could encode the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. Valid values are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` forces the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite. It ensures that the user isn't presented with any interactive prompt. If the request can't be completed silently by using single-sign on, the Microsoft identity platform returns an `interaction_required` error.<br/>- `prompt=consent` triggers the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` interrupts single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> | | `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user. Apps can use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
error=access_denied
| Parameter | Description | |-||
-| `error` | An error code string that can be used to classify types of errors, and to react to errors. This part of the error is provided so that the app can react appropriately to the error, but does not explain in depth why an error occurred. |
+| `error` | An error code string that can be used to classify types of errors, and to react to errors. This part of the error is provided so that the app can react appropriately to the error, but doesn't explain in depth why an error occurred. |
| `error_description` | A specific error message that can help a developer identify the cause of an authentication error. This part of the error contains most of the useful information about _why_ the error occurred. | #### Error codes for authorization endpoint errors
The following table describes the various error codes that can be returned in th
| `unsupported_response_type` | The authorization server doesn't support the response type in the request. | Fix and resubmit the request. This error is a development error typically caught during initial testing. In the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow), this error signals that you must enable the ID token implicit grant setting on the client app registration. | | `server_error` | The server encountered an unexpected error.| Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed to a temporary error. | | `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
-| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This error indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This error indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
| `login_required` | Too many or no users found. | The client requested silent authentication (`prompt=none`), but a single user couldn't be found. This error may mean there are multiple users active in the session, or no users. This error takes into account the tenant chosen. For example, if there are two Azure AD accounts active and one Microsoft account, and `consumers` is chosen, silent authentication works. | | `interaction_required` | The request requires user interaction. | Another authentication step or consent is required. Retry the request without `prompt=none`. |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `redirect_uri` | required | The same `redirect_uri` value that was used to acquire the `authorization_code`. | | `grant_type` | required | Must be `authorization_code` for the authorization code flow. | | `code_verifier` | recommended | The same `code_verifier` that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
-| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. Don't use the application secret in a native app or single page app because a `client_secret` can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which can store the `client_secret` securely on the server side. Like all parameters here, the client secret must be URL-encoded before being sent. This step is usually done by the SDK. For more information on URI encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
+| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. Don't use the application secret in a native app or single page app because a `client_secret` can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which can store the `client_secret` securely on the server side. Like all parameters here, the client secret must be URL-encoded before being sent. This step is done by the SDK. For more information on URI encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
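To make the redemption step concrete, a hand-rolled sketch of the token request described by these parameters might look like the following (the code, verifier, and secret values are placeholders; an MSAL library normally builds and sends this request, including the URL encoding):
```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> RedeemAuthorizationCodeAsync(
    string authorizationCode, string codeVerifier, string clientSecret)
{
    using var http = new HttpClient();

    // FormUrlEncodedContent URL-encodes every value before sending.
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["client_id"] = "6731de76-14a6-49ae-97bc-6eba6914391e",
        ["grant_type"] = "authorization_code",
        ["code"] = authorizationCode,
        ["redirect_uri"] = "http://localhost/myapp/",
        ["scope"] = "https://graph.microsoft.com/mail.read",
        ["code_verifier"] = codeVerifier,   // required if PKCE was used in the authorize request
        ["client_secret"] = clientSecret    // confidential web apps only; never ship in a native or SPA client
    });

    var response = await http.PostAsync(
        "https://login.microsoftonline.com/common/oauth2/v2.0/token", form);
    return await response.Content.ReadAsStringAsync(); // JSON with access_token, refresh_token, etc.
}
```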
### Request an access token with a certificate credential
This example is an Error response:
## Use the access token
-Now that you've successfully acquired an `access_token`, you can use the token in requests to web APIs by including it in the `Authorization` header:
+Now that you have successfully acquired an `access_token`, you can use the token in requests to web APIs by including it in the `Authorization` header:
```http GET /v1.0/me/messages
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
Last updated 04/12/2023
# Portal quickstart for React SPA > In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.-
-> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
-> ## Prerequisites
->
-> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-> * [Node.js](https://nodejs.org/en/download/)
-> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
->
-> ## Run the sample
>
-> 1. Unzip the downloaded file.
->
-> 1. In your terminal, locate the folder that contains the `package.json` file, then run the following command:
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> 1. Make sure you've installed [Node.js](https://nodejs.org/en/download/).
>
+> 1. Unzip the sample, `cd` into the folder that contains `package.json`, then run the following commands:
> ```console > npm install && npm start > ```
->
-> 1. Open your browser and visit `http://locahost:3000`.
->
-> 1. Select the **Sign-in** link on the navigation bar, then follow the prompts.
+> 1. Open your browser, visit `http://localhost:3000`, select the **Sign-in** link, then follow the prompts.
>
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/07/2023 Last updated : 04/17/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on April 7th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on April 17th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) | | Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | Microsoft Teams Essentials | Teams_Ess | fde42873-30b6-436b-b361-21af5a6b84ae | TeamsEss (f4f2f6de-6830-442b-a433-e92249faebe2) | Microsoft Teams Essentials (f4f2f6de-6830-442b-a433-e92249faebe2) |
+| Microsoft Teams Essentials (AAD Identity) | TEAMS_ESSENTIALS_AAD | 3ab6abff-666f-4424-bfb7-f0bc274ec7bc | EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>ONEDRIVE_BASIC_P2 (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf) | Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OneDrive for Business (Basic 2) (4495894f-534f-41ca-9d3b-0ebf1220a423)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf) |
| Microsoft Teams Exploratory | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 | | Microsoft Teams Phone Standard | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | Microsoft Teams Phone Standard for DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
active-directory Allow Deny List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md
Previously updated : 08/31/2022 Last updated : 04/17/2023
To add a blocklist:
2. Select **Azure Active Directory** > **Users** > **User settings**. 3. Under **External users**, select **Manage external collaboration settings**. 4. Under **Collaboration restrictions**, select **Deny invitations to the specified domains**.
-5. Under **TARGET DOMAINS**, enter the name of one of the domains that you want to block. For multiple domains, enter each domain on a new line. For example:
+5. Under **Target domains**, enter the name of one of the domains that you want to block. For multiple domains, enter each domain on a new line. For example:
- ![Screenshot showing the deny option with added domains.](./media/allow-deny-list/DenyListSettings.png)
+ :::image type="content" source="media/allow-deny-list/DenyListSettings.PNG" alt-text="Screenshot showing the deny option with added domains.":::
6. When you're done, select **Save**.
To add an allowlist:
2. Select **Azure Active Directory** > **Users** > **User settings**. 3. Under **External users**, select **Manage external collaboration settings**. 4. Under **Collaboration restrictions**, select **Allow invitations only to the specified domains (most restrictive)**.
-5. Under **TARGET DOMAINS**, enter the name of one of the domains that you want to allow. For multiple domains, enter each domain on a new line. For example:
+5. Under **Target domains**, enter the name of one of the domains that you want to allow. For multiple domains, enter each domain on a new line. For example:
- ![Screenshot showing the allow option with added domains.](./media/allow-deny-list/AllowListSettings.png)
+ :::image type="content" source="media/allow-deny-list/AllowlistSettings.PNG" alt-text="Screenshot showing the allow option with added domains.":::
6. When you're done, select **Save**.
Remove-AzureADPolicy -Id $currentpolicy.Id
## Next steps -- For an overview of Azure AD B2B, see [What is Azure AD B2B collaboration?](what-is-b2b.md)-- To learn more about managing B2B collaboration in your organization, see [External collaboration settings](external-collaboration-settings-configure.md).--- For information about Conditional Access and B2B collaboration, see [Conditional Access for B2B collaboration users](authentication-conditional-access.md).
+- [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md)
+- [External collaboration settings](external-collaboration-settings-configure.md).
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
Previously updated : 04/03/2023 Last updated : 04/17/2023
The following diagram illustrates the flow when email one-time passcode authenti
| Step | Description | |--|--| | **1** |The user requests access to a resource in another tenant. The resource redirects the user to its resource tenant, a trusted IdP.|
-| **2** | The resource tenant identifies the user as an [external email one-time passcode (OTP) user](./one-time-passcode.md) and sends an email with the OTP to the user.|
+| **2** | The resource tenant identifies the user as an external email one-time passcode (OTP) user and sends an email with the OTP to the user.|
| **3** | The user retrieves the OTP and submits the code. The resource tenant evaluates the user against its Conditional Access policies. | **4** | Once all Conditional Access policies are satisfied, the resource tenant issues a token and redirects the user to its resource. | ## Conditional Access for external users
-Organizations can enforce [Conditional Access](../conditional-access/overview.md) policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. With the introduction of cross-tenant access settings, you can also trust MFA and device claims from external Azure AD organizations. This section describes important considerations for applying Conditional Access to users outside of your organization.
+Organizations can enforce Conditional Access policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. With the introduction of cross-tenant access settings, you can also trust MFA and device claims from external Azure AD organizations. This section describes important considerations for applying Conditional Access to users outside of your organization.
### Assigning Conditional Access policies to external user types
When configuring a Conditional Access policy, you have granular control over the
- **B2B direct connect users** - External users who are able to access your resources via B2B direct connect, which is a mutual, two-way connection with another Azure AD organization that allows single sign-on access to certain Microsoft applications (currently, Microsoft Teams Connect shared channels). B2B direct connect users don't have a presence in your Azure AD organization, but are instead managed from within the application (for example, by the Teams shared channel owner). - **Local guest users** - Local guest users have credentials that are managed in your directory. Before Azure AD B2B collaboration was available, it was common to collaborate with distributors, suppliers, vendors, and others by setting up internal credentials for them and designating them as guests by setting the user object UserType to Guest. - **Service provider users** - Organizations that serve as cloud service providers for your organization (the isServiceProvider property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is true).-- **Other external users** - Applies to any users who don't fall into the categories above, but who are not considered internal members of your organization, meaning they don't authenticate internally via Azure AD, and the user object created in the resource Azure AD directory does not have a UserType of Member.
+- **Other external users** - Applies to any users who don't fall into the categories above, but who aren't considered internal members of your organization, meaning they don't authenticate internally via Azure AD, and the user object created in the resource Azure AD directory doesn't have a UserType of Member.
>[!NOTE] > The "All guest and external users" selection has now been replaced with "Guest and external users" and all its sub types. Customers who previously had a Conditional Access policy with "All guest and external users" selected will now see "Guest and external users" along with all sub types being selected. This change in UX does not have any functional impact on how policy is evaluated by the Conditional Access backend. The new selection provides customers the needed granularity to choose specific types of guest and external users to include/exclude from user scope when creating their Conditional Access policy.
The following PowerShell cmdlets are available to *proof up* or request MFA regi
### Authentication strength policies for external users
-[Authentication strength](https://aka.ms/b2b-auth-strengths) is a Conditional Access control that lets you define a specific combination of multifactor authentication (MFA) methods that an external user must complete to access your resources. This control is especially useful for restricting external access to sensitive apps in your organization because you can enforce specific authentication methods, such as a phishing-resistant method, for external users.
+Authentication strength is a Conditional Access control that lets you define a specific combination of multifactor authentication (MFA) methods that an external user must complete to access your resources. This control is especially useful for restricting external access to sensitive apps in your organization because you can enforce specific authentication methods, such as a phishing-resistant method, for external users.
You also have the ability to apply authentication strength to the different types of [guest or external users](#assigning-conditional-access-policies-to-external-user-types) that you collaborate or connect with. This means you can enforce authentication strength requirements that are unique to your B2B collaboration, B2B direct connect, and other external access scenarios.
When device trust settings are enabled, Azure AD checks a user's authentication
### Device filters
-When creating Conditional Access policies for external users, you can evaluate a policy based on the device attributes of a registered device in Azure AD. By using the *filter for devices* condition, you can target specific devices using the [supported operators and properties](../conditional-access/concept-condition-filters-for-devices.md#supported-operators-and-device-properties-for-filters) and the other available assignment conditions in your Conditional Access policies.
+When creating Conditional Access policies for external users, you can evaluate a policy based on the device attributes of a registered device in Azure AD. By using the *filter for devices* condition, you can target specific devices using the supported operators and properties and the other available assignment conditions in your Conditional Access policies.
Device filters can be used together with cross-tenant access settings to base policies on devices that are managed in other organizations. For example, suppose you want to block devices from an external Azure AD tenant based on a specific device attribute. You can set up a device attribute-based policy by doing the following:
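For example, a *filter for devices* rule that matches a hypothetical attribute stamped on those devices could look like the following (the attribute and value are illustrative only, not taken from this article):

```
device.extensionAttribute1 -eq "ExternalTenant-Blocked"
```

A policy that pairs a rule like this with a block control then applies only to external devices carrying that attribute value.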
For more information, see [Identity Protection and B2B users](../identity-protec
For more information, see the following articles: - [Zero Trust policies for allowing guest access and B2B external user access](/microsoft-365/security/office-365-security/identity-access-policies-guest-access?view=o365-worldwide&preserve-view=true)-- [What is Azure AD B2B collaboration?](./what-is-b2b.md) - [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md)-- [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) - [Frequently Asked Questions (FAQs)](./faq.yml)
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Last updated 3/28/2023 -+ zone_pivot_groups: enterprise-apps-all
zone_pivot_groups: enterprise-apps-all
In this article, you learn how to configure permissions classifications in Azure Active Directory (Azure AD). Permission classifications allow you to identify the impact that different permissions have according to your organization's policies and risk evaluations. For example, you can use permission classifications in consent policies to identify the set of permissions that users are allowed to consent to.
-Currently, only the "Low impact" permission classification is supported. Only delegated permissions that don't require admin consent can be classified as "Low impact".
+Three permission classifications are supported: "Low", "Medium" (preview), and "High" (preview). Currently, only delegated permissions that don't require admin consent can be classified.
The minimum permissions needed to do basic sign-in are `openid`, `profile`, `email`, and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read details of the signed-in user's profile, and can maintain this access even when the user is no longer using the app.
The minimum permissions needed to do basic sign-in are `openid`, `profile`, `ema
To configure permission classifications, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: A global administrator, or owner of the service principal.
+- One of the following roles: Global Administrator, Application Administrator, or Cloud Application Administrator.
## Manage permission classifications
Follow these steps to classify permissions using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator), [Application Administrator](../roles/permissions-reference.md#application-administrator), or [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) 1. Select **Azure Active Directory** > **Enterprise applications** > **Consent and permissions** > **Permission classifications**.
-1. Choose **Add permissions** to classify another permission as "Low impact".
+1. Choose the tab for the permission classification you'd like to update.
+1. Choose **Add permissions** to classify another permission.
1. Select the API and then select the delegated permission(s). In this example, we've classified the minimum set of permissions required for single sign-on:
You can use the latest [Azure AD PowerShell](/powershell/module/azuread/?preserv
Run the following command to connect to Azure AD PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article. ```powershell
-Connect-AzureAD -Scopes
+Connect-AzureAD
``` ### List the current permission classifications
Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant".
```powershell $params = @{ -
- PermissionId = $delegatedPermission.Id
-
- PermissionName = $delegatedPermission.Value
-
- Classification = "Low"
-
+ PermissionId = $delegatedPermission.Id
+ PermissionName = $delegatedPermission.Value
+ Classification = "Low"
} New-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id -BodyParameter $params
Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant".
1. Find the delegated permission classification you wish to remove: ```powershell
- $classifications= Get-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id
+ $classifications = Get-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id
$classificationToRemove = $classifications | Where-Object {$_.PermissionName -eq "openid"} ```
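   Once you've located it, a sketch of the removal call (assuming the `$api` and `$classificationToRemove` variables from the step above):

   ```powershell
   # Remove the classification from the selected delegated permission.
   Remove-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id -DelegatedPermissionClassificationId $classificationToRemove.Id
   ```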
DELETE https://graph.microsoft.com/v1.0/servicePrincipals(appId='00000003-0000-0
## Next steps - [Manage app consent policies](manage-app-consent-policies.md)-- [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
+- [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
$smssignin = Get-MgUserAuthenticationPhoneMethod -UserId $userId
##### End the script ```
+#### Symptom - Users fail to provision with error "AzureActiveDirectoryForbidden"
+
+Users in scope fail to provision. The provisioning logs details include the following error message:
+
+```
+The provisioning service was forbidden from performing an operation on Azure Active Directory, which is unusual.
+A simultaneous change to the target object may have occurred, in which case, the operation might succeed when it is retried.
+Alternatively, the target of the operation, or one of its properties, may be mastered on-premises, in which case,
+the provisioning service is not permitted to update it, and the corresponding source entry should be removed from the provisioning service's scope.
+Otherwise, authorizations may have been customized in such a way as to prevent the provisioning service from modifying the target object or one of its properties;
+if so, then, again, the corresponding source entry should be removed from scope.
+This operation was retried 0 times.
+```
+
+**Cause**
+
+This error indicates the Guest invite settings in the target tenant are configured with the most restrictive setting: "No one in the organization can invite guest users including admins (most restrictive)".
+
+**Solution**
+
+Change the Guest invite settings in the target tenant to a less restrictive setting. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
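If you prefer to script the check and the change, here is a minimal sketch using Microsoft Graph PowerShell. It assumes the Graph PowerShell SDK is installed and that you hold the `Policy.ReadWrite.Authorization` permission; the target value shown is only an example of a less restrictive setting.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# Inspect the current guest invite setting; "none" corresponds to the most restrictive option.
(Get-MgPolicyAuthorizationPolicy).AllowInvitesFrom

# Relax the setting so the provisioning service can invite B2B users.
Update-MgPolicyAuthorizationPolicy -AllowInvitesFrom "adminsAndGuestInviters"
```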
+ ## Next steps - [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
Follow the steps below to send logs from Azure Active Directory to Azure Monitor
* `ADFSSignInLogs` Active Directory Federation Services (ADFS) * `RiskyUsers` * `UserRiskEvents`
-
+ * `RiskyServicePrincipals`
+ * `ServicePrincipalRiskEvents`
- The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
-
- * `AADServicePrincipalRiskEvents`
+1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
* `EnrichedOffice365AuditLogs` * `MicrosoftGraphActivityLogs` * `NetworkAccessTrafficLogs`
- * `RiskyServicePrincipals`
1. Select the **Destination details** for where you'd like to send the logs. Choose any or all of the following destinations. Additional fields appear, depending on your selection.
If you do not see logs appearing in the selected destination after 15 minutes, s
* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) * [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) * [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)++
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/17/2023
Use the information below to make a decision between using the AWS Single Sign-O
| SAML certificate| Single certificate| Separate certificates per app / account | ## AWS Single-Account Access architecture
-![Diagram of Azure AD and AWS relationship](./media/amazon-web-service-tutorial/tutorial_amazonwebservices_image.png)
+![Screenshot showing Azure AD and AWS relationship.](./media/amazon-web-service-tutorial/tutorial_amazonwebservices_image.png)
You can configure multiple identifiers for multiple instances. For example:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot showing Edit Basic SAML Configuration.](common/edit-urls.png)
1. In the **Basic SAML Configuration** section, update both **Identifier (Entity ID)** and **Reply URL** with the same default value: `https://signin.aws.amazon.com/saml`. You must select **Save** to save the configuration changes.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. AWS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot showing default attributes.](common/default-attributes.png)
1. In addition to the above, the AWS application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them according to your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** (Step 3) dialog box, select **Add a certificate**.
- ![Create new SAML Certificate](common/add-saml-certificate.png)
+ ![Screenshot showing Create new SAML Certificate.](common/add-saml-certificate.png)
1. Generate a new SAML signing certificate, and then select **New Certificate**. Enter an email address for certificate notifications.
- ![New SAML Certificate](common/new-saml-certificate.png)
+ ![Screenshot showing New SAML Certificate.](common/new-saml-certificate.png)
1. In the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](./media/amazon-web-service-tutorial/certificate.png)
+ ![Screenshot showing the Certificate download link.](./media/amazon-web-service-tutorial/certificate.png)
1. In the **Set up AWS Single-Account Access** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot showing Copy configuration URLs.](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different browser window, sign on to your AWS company site as an administrator.
-2. Select **AWS Home**.
+1. On the AWS home page, search for **IAM** and select it.
- ![Screenshot of AWS company site, with AWS Home icon highlighted][11]
+ ![Screenshot of AWS services page, with IAM highlighted.](./media/amazon-web-service-tutorial/identity-access-management.png)
-3. Select **Identity and Access Management**.
+1. Go to **Access management** > **Identity Providers** and select the **Add provider** button.
- ![Screenshot of AWS services page, with IAM highlighted][12]
+ ![Screenshot of IAM page, with Identity Providers and Create Provider highlighted.](./media/amazon-web-service-tutorial/add-provider.png)
-4. Select **Identity Providers** > **Create Provider**.
+1. On the **Add an Identity provider** page, perform the following steps:
- ![Screenshot of IAM page, with Identity Providers and Create Provider highlighted][13]
+ ![Screenshot of Configure Provider.](./media/amazon-web-service-tutorial/adding-provider.png)
-5. On the **Configure Provider** page, perform the following steps:
+ a. For **Provider type**, select **SAML**.
- ![Screenshot of Configure Provider][14]
+ b. For **Provider name**, type a provider name (for example: *WAAD*).
- a. For **Provider Type**, select **SAML**.
+ c. To upload your downloaded **metadata file** from the Azure portal, select **Choose file**.
- b. For **Provider Name**, type a provider name (for example: *WAAD*).
+ d. Click **Add provider**.
- c. To upload your downloaded **metadata file** from the Azure portal, select **Choose File**.
+1. Select **Roles** > **Create role**.
- d. Select **Next Step**.
+ ![Screenshot of Roles page.](./media/amazon-web-service-tutorial/create-role.png)
-6. On the **Verify Provider Information** page, select **Create**.
+1. On the **Create role** page, perform the following steps:
- ![Screenshot of Verify Provider Information, with Create highlighted][15]
+ ![Screenshot of Create role page.](./media/amazon-web-service-tutorial/creating-role.png)
-7. Select **Roles** > **Create role**.
- ![Screenshot of Roles page][16]
+ a. For **Trusted entity type**, select **SAML 2.0 federation**.
-8. On the **Create role** page, perform the following steps:
-
- ![Screenshot of Create role page][19]
-
- a. Under **Select type of trusted entity**, select **SAML 2.0 federation**.
-
- b. Under **Choose a SAML 2.0 Provider**, select the **SAML provider** you created previously (for example: *WAAD*).
+ b. Under **SAML 2.0 based provider**, select the **SAML provider** you created previously (for example: *WAAD*).
c. Select **Allow programmatic and AWS Management Console access**.
- d. Select **Next: Permissions**.
+ d. Select **Next**.
-9. On the **Attach permissions policies** dialog box, attach the appropriate policy, per your organization. Then select **Next: Review**.
+1. On the **Permissions policies** dialog box, attach the appropriate policy, per your organization. Then select **Next**.
- ![Screenshot of Attach permissions policy dialog box][33]
+ ![Screenshot of Attach permissions policy dialog box.](./media/amazon-web-service-tutorial/permissions-to-role.png)
-10. On the **Review** dialog box, perform the following steps:
+1. On the **Review** dialog box, perform the following steps:
- ![Screenshot of Review dialog box][34]
+ ![Screenshot of Review dialog box.](./media/amazon-web-service-tutorial/review-role.png)
a. In **Role name**, enter your role name.
- b. In **Role description**, enter the description.
+ b. In **Description**, enter the role description.
c. Select **Create role**.
- d. Create as many roles as needed, and map them to the identity provider.
-
-11. Use AWS service account credentials for fetching the roles from the AWS account in Azure AD user provisioning. For this, open the AWS console home.
+ d. Create as many roles as needed and map them to the identity provider.
-12. Select **Services**. Under **Security, Identity & Compliance**, select **IAM**.
+1. Use AWS service account credentials to fetch the roles from the AWS account in Azure AD user provisioning. To do this, open the AWS console home.
- ![Screenshot of AWS console home, with Services and IAM highlighted](./media/amazon-web-service-tutorial/fetchingrole1.png)
+1. In the IAM section, select **Policies** and click **Create policy**.
-13. In the IAM section, select **Policies**.
+ ![Screenshot of IAM section, with Policies highlighted.](./media/amazon-web-service-tutorial/create-policy.png)
- ![Screenshot of IAM section, with Policies highlighted](./media/amazon-web-service-tutorial/fetchingrole2.png)
+1. Create your own policy to fetch all the roles from AWS accounts. A minimal example policy is sketched after these steps.
-14. Create a new policy by selecting **Create policy** for fetching the roles from the AWS account in Azure AD user provisioning.
-
- ![Screenshot of Create role page, with Create policy highlighted](./media/amazon-web-service-tutorial/fetchingrole3.png)
-
-15. Create your own policy to fetch all the roles from AWS accounts.
-
- ![Screenshot of Create policy page, with JSON highlighted](./media/amazon-web-service-tutorial/policy1.png)
+ ![Screenshot of Create policy page, with JSON highlighted.](./media/amazon-web-service-tutorial/creating-policy.png)
a. In **Create policy**, select the **JSON** tab.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
} ```
- c. Select **Review policy** to validate the policy.
+ c. Click **Next: Tags**.
+
+1. You can also add the required tags on the following page, and then select **Next: Review**.
- ![Screenshot of Create policy page](./media/amazon-web-service-tutorial/policy5.png)
+ ![Screenshot of Create policy tag page.](./media/amazon-web-service-tutorial/tag-policy.png)
-16. Define the new policy.
+1. Define the new policy.
- ![Screenshot of Create policy page, with Name and Description fields highlighted](./media/amazon-web-service-tutorial/policy2.png)
+ ![Screenshot of Create policy page, with Name and Description fields highlighted.](./media/amazon-web-service-tutorial/review-policy.png)
a. For **Name**, enter **AzureAD_SSOUserRole_Policy**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Select **Create policy**.
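For orientation, a minimal policy that grants only the role-listing access this scenario needs might look like the following. This is an illustrative sketch, not necessarily the exact policy shown in the tutorial; adjust it to your organization's requirements.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:ListRoles"],
      "Resource": "*"
    }
  ]
}
```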
-17. Create a new user account in the AWS IAM service.
-
- a. In the AWS IAM console, select **Users**.
-
- ![Screenshot of AWS IAM console, with Users highlighted](./media/amazon-web-service-tutorial/policy3.png)
-
- b. To create a new user, select **Add user**.
-
- ![Screenshot of Add user button](./media/amazon-web-service-tutorial/policy4.png)
-
- c. In the **Add user** section:
-
- ![Screenshot of Add user page, with User name and Access type highlighted](./media/amazon-web-service-tutorial/adduser1.png)
-
- * Enter the user name as **AzureADRoleManager**.
-
- * For the access type, select **Programmatic access**. This way, the user can invoke the APIs and fetch the roles from the AWS account.
+1. Create a new user account in the AWS IAM service.
- * Select **Next Permissions**.
+ a. In the AWS IAM console, select **Users** and click **Add users**.
-18. Create a new policy for this user.
+ ![Screenshot of AWS IAM console, with Users highlighted.](./media/amazon-web-service-tutorial/create-user.png)
- ![Screenshot shows the Add user page where you can create a policy for the user.](./media/amazon-web-service-tutorial/adduser2.png)
+ b. In the **Specify user details** section, enter the user name as **AzureADRoleManager** and select **Next**.
- a. Select **Attach existing policies directly**.
+ ![Screenshot of Add user page, with User name and Access type highlighted.](./media/amazon-web-service-tutorial/user-details.png)
- b. Search for the newly created policy in the filter section **AzureAD_SSOUserRole_Policy**.
+ c. Create a new policy for this user.
- c. Select the policy, and then select **Next: Review**.
+ ![Screenshot shows the Add user page where you can create a policy for the user.](./media/amazon-web-service-tutorial/permissions-to-user.png)
-19. Review the policy to the attached user.
+ d. Select **Attach existing policies directly**.
- ![Screenshot of Add user page, with Create user highlighted](./media/amazon-web-service-tutorial/adduser3.png)
+ e. Search for the newly created policy in the filter section **AzureAD_SSOUserRole_Policy**.
- a. Review the user name, access type, and policy mapped to the user.
+ f. Select the policy, and then select **Next**.
- b. Select **Create user**.
+1. Review your choices and select **Create user**.
-20. Download the user credentials of a user.
+1. To download the user credentials, enable console access on the **Security credentials** tab.
- ![Screenshot shows the Add user page with a Download c s v button to get user credentials.](./media/amazon-web-service-tutorial/adduser4.png)
+ ![Screenshot shows the Security credentials.](./media/amazon-web-service-tutorial/enable-console-access.png)
- a. Copy the user **Access key ID** and **Secret access key**.
+1. Enter these credentials into the Azure AD user provisioning section to fetch the roles from the AWS console.
- b. Enter these credentials into the Azure AD user provisioning section to fetch the roles from the AWS console.
+ ![Screenshot shows the download the user credentials.](./media/amazon-web-service-tutorial/download-password.png)
- c. Select **Close**.
> [!NOTE] > AWS has a set of permissions/limits that are required to configure AWS SSO. For more information on AWS limits, see [this](https://docs.aws.amazon.com/singlesignon/latest/userguide/limits.html) page.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure AD management portal, in the AWS app, go to **Provisioning**.
- ![Screenshot of AWS app, with Provisioning highlighted](./media/amazon-web-service-tutorial/provisioning.png)
+ ![Screenshot of AWS app, with Provisioning highlighted.](./media/amazon-web-service-tutorial/provisioning.png)
2. Enter the access key and secret in the **clientsecret** and **Secret Token** fields, respectively.
- ![Screenshot of Admin Credentials dialog box](./media/amazon-web-service-tutorial/provisioning1.png)
+ ![Screenshot of Admin Credentials dialog box.](./media/amazon-web-service-tutorial/provisioning1.png)
a. Enter the AWS user access key in the **clientsecret** field.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
3. In the **Settings** section, for **Provisioning Status**, select **On**. Then select **Save**.
- ![Screenshot of Settings section, with On highlighted](./media/amazon-web-service-tutorial/provisioning2.png)
+ ![Screenshot of Settings section, with On highlighted.](./media/amazon-web-service-tutorial/provisioning2.png)
> [!NOTE] > The provisioning service imports roles only from AWS to Azure AD. The service does not provision users and groups from Azure AD to AWS.
You can also use Microsoft My Apps to test the application in any mode. When you
## Known issues
-* AWS Single-Account Access provisioning integration cannot be used in the the AWS China regions.
+* AWS Single-Account Access provisioning integration cannot be used in the AWS China regions.
* In the **Provisioning** section, the **Mappings** subsection shows a "Loading..." message, and never displays the attribute mappings. The only provisioning workflow supported today is the import of roles from AWS into Azure AD for selection during a user or group assignment. The attribute mappings for this are predetermined, and aren't configurable.
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/29/2023 Last updated : 04/13/2023
To have a storage volume persist for your workload, you can use a StatefulSet. T
volumeClaimTemplates: - metadata: name: persistent-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
spec:
+ storageClassName: azureblob-nfs-premium
accessModes: ["ReadWriteMany"] resources: requests:
To have a storage volume persist for your workload, you can use a StatefulSet. T
volumeClaimTemplates: - metadata: name: persistent-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azureblob-fuse-premium
spec:
+ storageClassName: azureblob-fuse-premium
accessModes: ["ReadWriteMany"] resources: requests:
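The `spec.storageClassName` field shown above replaces the deprecated `volume.beta.kubernetes.io/storage-class` annotation. The same pattern applies to a standalone claim; here is a minimal sketch (the claim name and size below are assumptions, not values from this article):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-share
spec:
  storageClassName: azureblob-fuse-premium
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 100Gi
```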
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
+ Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. - Previously updated : 03/21/2023+ Last updated : 04/17/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod, either from a pre-reserved set of IPs on every node, or from a separate subnet reserved for pods. This approach requires planning IP addresses and could lead to address exhaustion, which introduces difficulties scaling your clusters as your application demands grow.
-With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an Overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
-## Overview of overlay networking
+## Overview of Overlay networking
-In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
-A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
-Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
+Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (Overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
-Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
+Outbound (egress) connectivity to the internet for Overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md).
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking | | Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency | | Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |
-| OS platforms supported | Linux and Windows Server 2022 | Linux only |
+| OS platforms supported | Linux and Windows Server 2022 (Preview) | Linux only |
## IP address planning -- **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. Cluster can't scale to another subnet but you can add new nodepools in another subnet within the same VNet for expansion. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. Note that a `/24` subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks.
-- **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
+- **Pods**: The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
The following are additional factors to consider when planning pods IP address space:
You can configure the maximum number of pods per node at the time of cluster cre
## Choosing a network model to use
-Azure CNI offers two IP addressing options for pods - the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+Azure CNI offers two IP addressing options for pods - the traditional configuration that assigns VNet IPs to pods, and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
-Use overlay networking when:
+Use Overlay networking when:
- You would like to scale to a large number of pods, but have limited IP address space in your VNet. - Most of the pod communication is within the cluster.
Use the traditional VNet option when:
Azure CNI Overlay has the following limitations: -- You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.-- Windows Server 2019 node pools are not supported for overlay.-- Traffic from host network pods is not able to reach Windows overlay pods.
+- You can't use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
+- Windows support is still in Preview.
+  - Windows Server 2019 node pools are **not** supported for Overlay.
+  - Traffic from host network pods is not able to reach Windows Overlay pods.
+- Sovereign clouds are not supported.
+- Virtual Machine Availability Sets (VMAS) are not supported for Overlay.
+- Dual-stack networking is not supported in Overlay.
+- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
-## Install the aks-preview Azure CLI extension
+## Set up Overlay clusters
+
+>[!NOTE]
+> You must have CLI version 2.47.0 or later to use the `--network-plugin-mode` argument. For Windows, you must have the latest aks-preview Azure CLI extension installed; follow the instructions below.
+
+Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an Overlay cluster. If the pod CIDR isn't specified, AKS assigns a default space of 10.244.0.0/16. Replace the values for the variables `clusterName`, `resourceGroup`, and `location`.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
+```
+
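After the cluster is created, you can confirm it's running in Overlay mode by querying the network profile. This is a sketch that assumes the `networkPluginMode` property is exposed by `az aks show`; it should return `overlay`.

```azurecli-interactive
az aks show -n $clusterName -g $resourceGroup --query networkProfile.networkPluginMode -o tsv
```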
+## Install the aks-preview Azure CLI extension - Windows only
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
When the status reflects *Registered*, refresh the registration of the *Microsof
az provider register --namespace Microsoft.ContainerService ```
-## Set up overlay clusters
+## Upgrade an existing cluster to CNI Overlay - Preview
-Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified then AKS assigns a default space, viz. 10.244.0.0/16. Replace the values for the variables `clusterName`, `resourceGroup`, and `location`.
-
-```azurecli-interactive
-clusterName="myOverlayCluster"
-resourceGroup="myResourceGroup"
-location="westcentralus"
-
-az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
-```
-
-## Upgrade an existing cluster to CNI Overlay
+> [!NOTE]
+> The upgrade capability is still in preview and requires the preview AKS Azure CLI extension.
You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:
You can update an existing Azure CNI cluster to Overlay if the cluster meets cer
- **not** have network policies enabled - **not** be using any Windows node pools with docker as the container runtime
-The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
+The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to Overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
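Assuming the preview extension accepts the same networking flags at update time as at creation time, the conversion might look like the following sketch (the pod CIDR value is only an example):

```azurecli-interactive
az aks update -n $clusterName -g $resourceGroup --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
```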
> [!WARNING]
-> Due to the limitation around Windows overlay pods incorrectly SNATing packets from host network pods, this has a more detrimental effect for clusters upgrading to overlay.
+> Due to the limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, this has a more detrimental effect for clusters upgrading to Overlay.
-While nodes are being upgraded to use the CNI Overlay feature, pods that are on nodes which haven't been upgraded yet will not be able to communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, overlay Windows pods will not be able to reply to any traffic from pods still running with an IP from the node subnet.
+While nodes are being upgraded to use the CNI Overlay feature, pods that are on nodes which haven't been upgraded yet will not be able to communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, Overlay Windows pods will not be able to reply to any traffic from pods still running with an IP from the node subnet.
-This network disruption will only occur during the upgrade. Once the migration to overlay has completed for all node pools, all overlay pods will be able to communicate successfully with the Windows pods.
+This network disruption will only occur during the upgrade. Once the migration to Overlay has completed for all node pools, all Overlay pods will be able to communicate successfully with the Windows pods.
> [!NOTE]
-> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows overlay pods.
+> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows Overlay pods.
## Next steps
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
`CiliumNetworkPolicy` custom resources aren't officially supported. We recommend that customers use Kubernetes `NetworkPolicy` resources to configure network policies.
+- *Does AKS configure CPU or memory limits on the Cilium daemonset?*
+
+ No, AKS does not configure CPU or memory limits on the Cilium daemonset because Cilium is a critical system component for pod networking and network policy enforcement.
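To verify this on your own cluster, you can inspect the agent's container resources. This is a sketch that assumes the agent runs as the `cilium` daemonset in the `kube-system` namespace; an empty result means no requests or limits are set on that container.

```bash
kubectl -n kube-system get daemonset cilium -o jsonpath='{.spec.template.spec.containers[0].resources}'
```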
+ ## Next steps Learn more about networking in AKS in the following articles:
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 03/23/2023 Last updated : 04/11/2023 # Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)
For more information on Kubernetes volumes, see [Storage options for application
* You need an Azure [storage account][azure-storage-account]. * Make sure you have Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
+* The Azure Disk CSI driver has a per-node volume limit. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
```console kubectl get CSINode <nodename> -o yaml
This section provides guidance for cluster administrators who want to provision
|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`| |fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-|location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster| |DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS Capability (minimum: 2 IOPS/GiB ) | 100~160000 | No | `500`| |DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`|
This section provides guidance for cluster administrators who want to provision
|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``| |enableBursting | [Enable on-demand bursting][on-demand-bursting] beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`| |useragent | User agent used for [customer usage attribution][customer-usage-attribution] | | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
-|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limit when there are large number of volume attachments. | `true`, `false` | No | `false`|
|subscriptionID | Specify Azure subscription ID where the Azure Disks is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.| | | **Following parameters are only for v2** | | | |
-| enableAsyncAttach | The v2 driver uses a different strategy to manage Azure API throttling and ignores this parameter. | | No | |
| maxShares | The total number of shared disk mounts allowed for the disk. Setting the value to 2 or more enables attachment replicas. | Supported values depend on the disk size. See [Share an Azure managed disk][share-azure-managed-disk] for supported values. | No | 1 | | maxMountReplicaCount | The number of replicas attachments to maintain. | This value must be in the range `[0..(maxShares - 1)]` | No | If `accessMode` is `ReadWriteMany`, the default is `0`. Otherwise, the default is `maxShares - 1` |
Each AKS cluster includes four pre-created storage classes, two of them configur
1. The *default* storage class provisions a standard SSD Azure Disk. * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance. 1. The *managed-csi-premium* storage class provisions a premium Azure Disk.
- * Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. When you use the Azure Disks CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS).
+ * Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. When you use the Azure Disk CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS).
It's not supported to reduce the size of a PVC (to prevent data loss). You can edit an existing storage class using the `kubectl edit sc` command, or you can create your own custom storage class. For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting]. For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
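For example, a custom class along those lines might look like the following minimal sketch (the class name and SKU are assumptions, not values from this article):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-premium-nocache
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  cachingMode: None    # caching isn't supported for disks of 4 TiB and larger
reclaimPolicy: Delete
allowVolumeExpansion: true
```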
Once the persistent volume claim has been created and the disk successfully prov
1. Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest:
- ```yaml
- kind: Pod
- apiVersion: v1
- metadata:
- name: mypod
- spec:
- containers:
- - name: mypod
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 100m
+ memory: 128Mi
limits: cpu: 250m memory: 256Mi volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-managed-disk
- ```
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: azure-managed-disk
+ ```
2. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
Once the persistent volume claim has been created and the disk successfully prov
To use Azure ultra disk, see [Use ultra disks on Azure Kubernetes Service (AKS)][use-ultra-disks].
-### Back up a persistent volume
-
-To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.
-
-1. Get the volume name with the [kubectl get][kubectl-get] command, such as for the PVC named *azure-managed-disk*:
-
- ```bash
- kubectl get pvc azure-managed-disk
- ```
-
- The output of the command resembles the following example:
-
- ```console
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-premium 3m
- ```
-
-2. This volume name forms the underlying Azure disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
-
- ```azurecli
- az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
-
- /subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/MicrosoftCompute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
- ```
-
-3. Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster *MC_myResourceGroup_myAKSCluster_eastus*. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to. Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
-
- ```azurecli
- az snapshot create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name pvcSnapshot \
- --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/MicrosoftCompute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
- ```
-
-### Restore and use a snapshot
-
-1. To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
-
- ```azurecli
- az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
- ```
-
-2. To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
-
- ```azurecli
- az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
- ```
-
-3. Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
-
- ```yaml
- kind: Pod
- apiVersion: v1
- metadata:
- name: mypodrestored
- spec:
- containers:
- - name: mypodrestored
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- azureDisk:
- kind: Managed
- diskName: pvcRestored
- diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
- ```
-
-4. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
- ```bash
- kubectl apply -f azure-restored.yaml
- ```
-
- The output of the command resembles the following example:
-
- ```console
- pod/mypodrestored created
- ```
-
-5. You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
-
- ```bash
- kubectl describe pod mypodrestored
- ```
-
- The output of the command resembles the following example:
-
- ```console
- [...]
- Volumes:
- volume:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: pvcRestored
- DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
- [...]
- ```
- ### Using Azure tags For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/16/2023 Last updated : 04/12/2023 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
The Azure Disks Container Storage Interface (CSI) driver is a [CSI specification
The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
-To create an AKS cluster with CSI driver support, see [Enable CSI driver on AKS](csi-storage-drivers.md). This article describes how to use the Azure Disks CSI driver version 1.
+To create an AKS cluster with CSI driver support, see [Enable CSI driver on AKS](csi-storage-drivers.md). This article describes how to use the Azure Disk CSI driver version 1.
> [!NOTE]
-> Azure Disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure Disk CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disk CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> [!NOTE] > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Azure Disks CSI driver features
+## Azure Disk CSI driver features
-In addition to in-tree driver features, Azure Disks CSI driver supports the following features:
+In addition to in-tree driver features, Azure Disk CSI driver supports the following features:
- Performance improvements during concurrent disk attach and detach - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batch. There's significant improvement when there are multiple disks attaching to one node.
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
- [Resize disk PV without downtime](#resize-a-persistent-volume-without-downtime) > [!NOTE]
-> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
+> Depending on the VM SKU that's being used, the Azure Disk CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
## Storage class driver dynamic disks parameters
For more information on Kubernetes volumes, see [Storage options for application
A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes].
-When you use the Azure Disks CSI driver on AKS, there are two more built-in `StorageClasses` that use the Azure Disks CSI storage driver. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use the Azure Disk CSI driver on AKS, there are two more built-in `StorageClasses` that use the Azure Disk CSI storage driver. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `managed-csi`: Uses Azure Standard SSD locally redundant storage (LRS) to create a managed disk. - `managed-csi-premium`: Uses Azure Premium LRS to create a managed disk.
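As a minimal sketch of how these built-in classes are consumed (the claim name and requested size are placeholders), a persistent volume claim that dynamically provisions a Standard SSD managed disk might look like the following:

```bash
# Illustrative sketch: dynamically provision a managed disk with the built-in managed-csi class
# (claim name and requested size are placeholders)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-managed-csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
EOF
```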
storageclass.storage.k8s.io/azuredisk-csi-waitforfirstconsumer created
## Volume snapshots
-The Azure Disks CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html). As part of this capability, the driver can perform either *full* or [*incremental* snapshots](../virtual-machines/disks-incremental-snapshots.md) depending on the value set in the `incremental` parameter (by default, it's true).
+The Azure Disk CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html). As part of this capability, the driver can perform either *full* or [*incremental* snapshots](../virtual-machines/disks-incremental-snapshots.md) depending on the value set in the `incremental` parameter (by default, it's true).
The following table provides details for all of the parameters.
### Create a volume snapshot
+> [!NOTE]
+> Before proceeding, ensure that the application is not writing data to the source disk.
+ For an example of this capability, create a [volume snapshot class](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/snapshot/storageclass-azuredisk-snapshot.yaml) with the [kubectl apply][kubectl-apply] command: ```bash
allowVolumeExpansion: true
## Windows containers
-The Azure Disks CSI driver supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
+The Azure Disk CSI driver supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
After you have a Windows node pool, you can now use the built-in storage classes like `managed-csi`. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into the file `data.txt` by running the following [kubectl apply][kubectl-apply] command:
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 01/18/2023 Last updated : 04/11/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
To create an AKS cluster with CSI drivers support, see [Enable CSI drivers on AK
> [!NOTE] > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Azure Files CSI driver new features
+## Azure File CSI driver new features
-In addition to the original in-tree driver features, Azure Files CSI driver supports the following new features:
+In addition to the original in-tree driver features, Azure File CSI driver supports the following new features:
- Network File System (NFS) version 4.1 - [Private endpoint][private-endpoint-overview]
A storage class is used to define how an Azure file share is created. A storage
> [!NOTE] > Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
-When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that uses the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure File CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `azurefile-csi`: Uses Azure Standard Storage to create an Azure Files share. - `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure Files share.
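As a minimal sketch (the claim name and requested size are placeholders), a persistent volume claim that provisions an Azure Files share with the built-in `azurefile-csi` class might look like the following. Azure Files supports `ReadWriteMany`, so the share can be mounted by multiple pods:

```bash
# Illustrative sketch: dynamically provision an Azure Files share with the built-in azurefile-csi class
# (claim name and requested size are placeholders)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
EOF
```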
The output of the command resembles the following example:
storageclass.storage.k8s.io/my-azurefile created ```
-The Azure Files CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) and the underlying file shares.
+The Azure File CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) and the underlying file shares.
> [!NOTE] > This driver only supports snapshot creation, restore from snapshot is not supported by this driver. Snapshots can be restored from Azure portal or CLI. For more information about creating and restoring a snapshot, see [Overview of share snapshots for Azure Files][share-snapshots-overview].
spec:
volumeClaimTemplates: - metadata: name: persistent-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azurefile-csi-nfs
spec:
+ storageClassName: azurefile-csi-nfs
accessModes: ["ReadWriteMany"] resources: requests:
accountname.file.core.windows.net:/accountname/pvc-fa72ec43-ae64-42e4-a8a2-55660
## Windows containers
-The Azure Files CSI driver also supports Windows nodes and containers. To use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
+The Azure File CSI driver also supports Windows nodes and containers. To use Windows containers, follow the [Windows containers quickstart](./learn/quick-windows-container-deploy-cli.md) to add a Windows node pool.
After you have a Windows node pool, use the built-in storage classes like `azurefile-csi` or create a custom one. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into a file `data.txt` by running the [kubectl apply][kubectl-apply] command:
The output of the commands resembles the following example:
[access-tiers-overview]: ../storage/blobs/access-tiers-overview.md [tag-resources]: ../azure-resource-manager/management/tag-resources.md [statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume
-[azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration
+[azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration
aks Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-cost.md
+
+ Title: Optimize Costs in Azure Kubernetes Service (AKS)
+
+description: Recommendations for optimizing costs in Azure Kubernetes Service (AKS).
+ Last updated : 04/13/2023+++
+# Optimize costs in Azure Kubernetes Service (AKS)
+
+Cost optimization is about understanding your different configuration options and recommended best practices to reduce unnecessary expenses and improve operational efficiencies. Before you use this article, you should see the [cost optimization section](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service#cost-optimization) in the Azure Well-Architected Framework.
+
+When discussing cost optimization with Azure Kubernetes Service, it's important to distinguish between *cost of cluster resources* and *cost of workload resources*. Cluster resources are a shared responsibility between the cluster admin and their resource provider, while workload resources are the domain of a developer. Azure Kubernetes Service has considerations and recommendations for both of these roles.
+
+## Design checklist
+
+> [!div class="checklist"]
+> - **Cluster architecture:** Use appropriate VM SKU per node pool and reserved instances where long-term capacity is expected.
+> - **Cluster and workload architectures:** Use appropriate managed disk tier and size.
+> - **Cluster architecture:** Review performance metrics, starting with CPU, memory, storage, and network, to identify cost optimization opportunities by cluster, nodes, and namespace.
+> - **Cluster and workload architecture:** Use autoscale features to scale in when workloads are less active.
+
+## Recommendations
+
+Explore the following table of recommendations to optimize your AKS configuration for cost.
+
+| Recommendation | Benefit |
+|-|--|
+|**Cluster architecture**: Utilize AKS cluster preset configurations. |From the Azure portal, the **cluster preset configurations** option helps offload the initial configuration effort by providing a set of recommended configurations that are cost-conscious and performant regardless of environment. Mission-critical applications may require more sophisticated VM instances, while small development and test clusters may benefit from the lighter-weight presets where availability, Azure Monitor, Azure Policy, and other features are turned off by default. The **Dev/Test** and **Cost-optimized** presets help remove unnecessary added costs.|
+|**Cluster architecture:** Consider using [ephemeral OS disks](cluster-configuration.md#ephemeral-os).|Ephemeral OS disks provide lower read/write latency, along with faster node scaling and cluster upgrades. Containers aren't designed to have local state persisted to the managed OS disk, and this behavior offers limited value to AKS. AKS defaults to an ephemeral OS disk if you chose the right VM series and the OS disk can fit in the VM cache or temporary storage SSD.|
+|**Cluster and workload architectures:** Use the [Start and Stop feature](start-stop-cluster.md) in Azure Kubernetes Services (AKS).|The AKS Stop and Start cluster feature allows AKS customers to pause an AKS cluster, saving time and cost. The stop and start feature keeps cluster configurations in place and customers can pick up where they left off without reconfiguring the clusters.|
+|**Workload architecture:** Consider using [Azure Spot VMs](spot-node-pool.md) for workloads that can handle interruptions, early terminations, and evictions.|For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates for you to schedule on a spot node pool. Using spot VMs for nodes with your AKS cluster allows you to take advantage of unused Azure capacity at significant cost savings.|
+|**Cluster architecture:** Enforce [resource quotas](operator-best-practices-scheduler.md) at the namespace level (see the sketch following this table).|Resource quotas provide a way to reserve and limit resources across a development team or project. These quotas are defined on a namespace and can be used to set quotas on compute resources, storage resources, and object counts. When you define resource quotas, all pods created in the namespace must provide limits or requests in their pod specifications.|
+|**Cluster architecture:** Sign up for [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md). | If you've properly planned for capacity, and your workload is predictable and will exist for an extended period of time, sign up for [Azure Reserved Instances](../virtual-machines/prepay-reserved-vm-instances.md) to further reduce your resource costs.|
+|**Cluster architecture:** Use Kubernetes [Resource Quotas](operator-best-practices-scheduler.md#enforce-resource-quotas). | Resource quotas can be used to limit resource consumption for each namespace in your cluster, and by extension resource utilization for the Azure service.|
+|**Cluster and workload architectures:** Cost management using monitoring and observability tools. | OpenCost on AKS introduces a new community-driven [specification](https://github.com/opencost/opencost/blob/develop/spec/opencost-specv01.md) and implementation to bring greater visibility into current and historic Kubernetes spend and resource allocation. OpenCost, born out of [Kubecost](https://www.kubecost.com/), is an open-source, vendor-neutral [CNCF sandbox project](https://www.cncf.io/sandbox-projects/) that recently became a [FinOps Certified Solution](https://www.finops.org/certifications/finops-certified-solution/). Customer specific prices are now included using the [Azure Consumption Price Sheet API](/rest/api/consumption/price-sheet), ensuring accurate cost reporting that accounts for consumption and savings plan discounts. For out-of-cluster analysis or to ingest allocation data into an existing BI pipeline, you can export a CSV with daily infrastructure cost breakdown by Kubernetes constructs (namespace, controller, service, pod, job and more) to your Azure Storage Account or local storage with minimal configuration. CSV also includes resource utilization metrics for CPU, GPU, memory, load balancers, and persistent volumes. For in-cluster visualization, OpenCost UI enables real-time cost drill down by Kubernetes constructs. Alternatively, directly query the OpenCost API to access cost allocation data. For more information on Azure specific integration, see [OpenCost docs](https://www.opencost.io/docs).|
+|**Cluster architecture:** Improve cluster operations efficiency.|Managing multiple clusters increases operational overhead for engineers. [AKS auto upgrade](auto-upgrade-cluster.md) and [AKS Node Auto-Repair](node-auto-repair.md) helps improve day-2 operations. Learn more about [best practices for AKS Operators](operator-best-practices-cluster-isolation.md).|
+
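A minimal sketch of the namespace-level resource quota recommended in the table above; the namespace name and the limits are assumptions, not prescribed values:

```bash
# Illustrative sketch: reserve and cap compute resources for a team namespace
# (namespace name and limits are assumptions)
kubectl create namespace team-a
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF
```

With this quota in place, every pod created in `team-a` must declare resource requests and limits, which keeps the namespace's consumption, and therefore its cost, bounded.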
+## Next steps
+
+- Explore and analyze costs with [Cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md).
+- [Azure Advisor recommendations](../advisor/advisor-cost-recommendations.md) for cost can highlight the over-provisioned services and ways to lower cost.
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Last updated 12/15/2021
-# Use dual-stack kubenet networking in Azure Kubernetes Service (AKS) (Preview)
+# Use dual-stack kubenet networking in Azure Kubernetes Service (AKS)
AKS clusters can now be deployed in a dual-stack (using both IPv4 and IPv6 addresses) mode when using [kubenet][kubenet] networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6). This article shows you how to use dual-stack networking with an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts]. ## Limitations
-> [!NOTE]
-> Dual-stack kubenet networking is currently not available in sovereign clouds. This note will be removed when rollout is complete.
* Azure Route Tables have a hard limit of 400 routes per table. Because each node in a dual-stack cluster requires two routes, one for each IP address family, dual-stack clusters are limited to 200 nodes.
-* During preview, service objects are only supported with `externalTrafficPolicy: Local`.
+* In Mariner node pools, service objects are only supported with `externalTrafficPolicy: Local`.
* Dual-stack networking is required for the Azure Virtual Network and the pod CIDR - single stack IPv6-only isn't supported for node or pod IP addresses. Services can be provisioned on IPv4 or IPv6. * Features **not supported on dual-stack kubenet** include: * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy)
This article shows you how to use dual-stack networking with an AKS cluster. For
* All prerequisites from [configure kubenet networking](configure-kubenet.md) apply. * AKS dual-stack clusters require Kubernetes version v1.21.2 or greater. v1.22.2 or greater is recommended to take advantage of the [out-of-tree cloud controller manager][aks-out-of-tree], which is the default on v1.22 and up.
-* Azure CLI with the `aks-preview` extension 0.5.48 or newer.
* If using Azure Resource Manager templates, schema version 2021-10-01 is required.
-## Install the aks-preview Azure CLI extension
--
-To install the aks-preview extension, run the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-## Register the 'AKS-EnableDualStack' feature flag
-
-Register the `AKS-EnableDualStack` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
-```
-
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Overview of dual-stack networking in Kubernetes Kubernetes v1.23 brings stable upstream support for [IPv4/IPv6 dual-stack][kubernetes-dual-stack] clusters, including pod and service networking. Nodes and pods are always assigned both an IPv4 and an IPv6 address, while services can be single-stack on either address family or dual-stack.
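As a minimal sketch of creating such a cluster (the resource group, cluster name, and node count are placeholders), both address families are passed to `--ip-families`:

```bash
# Illustrative sketch: create a dual-stack kubenet AKS cluster
# (resource group and cluster names are placeholders)
az aks create \
  --resource-group myResourceGroup \
  --name myDualStackCluster \
  --network-plugin kubenet \
  --ip-families ipv4,ipv6 \
  --node-count 3
```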
nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-145084
> [!IMPORTANT] > There are currently two limitations pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them.
-> * Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. This traffic cannot be routed to a pod and thus traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` will fail. During preview, IPv6 services MUST be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node, in order to function.
+> * Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Mariner node pools, this traffic cannot be routed to a pod and thus traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` will fail. IPv6 services MUST be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node, in order to function.
> * Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service will only receive a public IP for its first listed IP family. In order to provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6. IPv6 services in Kubernetes can be exposed publicly similarly to an IPv4 service.
IPv6 services in Kubernetes can be exposed publicly similarly to an IPv4 service
# [`kubectl expose`](#tab/kubectl) ```bash-interactive
-kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local"}}'
-kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local", "ipFamilies": ["IPv6"]}}'
+kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer
+kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}'
``` ```
metadata:
app: nginx name: nginx-ipv4 spec:
- externalTrafficPolicy: Local
+ externalTrafficPolicy: Cluster
ports: - port: 80 protocol: TCP
metadata:
app: nginx name: nginx-ipv6 spec:
- externalTrafficPolicy: Local
+ externalTrafficPolicy: Cluster
ipFamilies: - IPv6 ports:
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 02/20/2023 Last updated : 04/12/2023 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
The following limitations apply when you integrate KMS etcd encryption with AKS:
* Bring your own (BYO) Azure Key Vault from another tenant isn't supported. * With KMS enabled, you can't change the associated Azure Key Vault mode (public, private). To [change the associated key vault mode][changing-associated-key-vault-mode], you need to disable and re-enable KMS. * If a cluster has KMS enabled with a private key vault and isn't using the `API Server VNet integration` tunnel, then stopping and starting the cluster isn't allowed.
+* Using the virtual machine scale set (VMSS) API to scale down the nodes in the cluster to zero will deallocate the nodes, causing the cluster to go down and become unrecoverable.
+ KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private key vault][Enable-KMS-with-private-key-vault].
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Title: Windows Server node pools FAQ
description: See the frequently asked questions when you run Windows Server node pools and application workloads in Azure Kubernetes Service (AKS). Previously updated : 10/12/2020 Last updated : 04/13/2023 #Customer intent: As a cluster operator, I want to see frequently asked questions when running Windows node pools and application workloads.
Azure Disks and Azure Files are the supported volume types, and are accessed as
The master nodes (the control plane) in an AKS cluster are hosted by the AKS service. You won't be exposed to the operating system of the nodes hosting the master components. All AKS clusters are created with a default first node pool, which is Linux-based. This node pool contains system services that are needed for the cluster to function. We recommend that you run at least two nodes in the first node pool to ensure the reliability of your cluster and the ability to do cluster operations. The first Linux-based node pool can't be deleted unless the AKS cluster itself is deleted.
+In some cases, if you are planning to run Windows-based workloads on an AKS cluster, you should consider deploying a Linux node pool for the following reasons:
+- If you are planning to run Windows and Linux workloads, you can deploy a Windows and Linux node pool on the same AKS cluster to run the workloads side by side.
+- Infrastructure-related components based on Linux, such as NGINX and others, require a Linux node pool alongside your Windows node pools. For development and test scenarios, you can use the default system node pool. For production workloads, we recommend deploying separate Linux node pools for performance and reliability.
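As a minimal sketch of adding a Windows node pool alongside the default Linux node pool (the resource group, cluster, and pool names are placeholders):

```bash
# Illustrative sketch: add a Windows node pool next to the default Linux node pool
# (resource group, cluster, and pool names are placeholders)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npwin \
  --os-type Windows \
  --node-count 2
```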
+ ## How do I patch my Windows nodes? To get the latest patches for Windows nodes, you can either [upgrade the node pool][nodepool-upgrade] or [upgrade the node image][upgrade-node-image]. Windows Updates are not enabled on nodes in AKS. AKS releases new node pool images as soon as patches are available, and it's the user's responsibility to upgrade node pools to stay current on patches and hotfixes. This patch process is also true for the Kubernetes version being used. [AKS release notes][aks-release-notes] indicate when new versions are available. For more information on upgrading the Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. If you're only interested in updating the node image, see [AKS node image upgrades][upgrade-node-image].
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
Previously updated : 06/10/2021- Last updated : 04/17/2023+ # Integrate API Management in an internal virtual network with Application Gateway
To follow the steps described in this article, you must have:
## Scenario
-In this article, you learn how to use a single API Management instance for internal and external consumers and make it act as a single front end for both on-premises and cloud APIs. You'll also understand how to expose only a subset of your APIs for external consumption by using routing functionality available in Application Gateway. In the example, the APIs are highlighted in green.
+In this article, you learn how to use a single API Management instance for internal and external consumers and make it act as a single front end for both on-premises and cloud APIs. You'll create an API Management instance of the newer single-tenant version 2 (stv2) type. You'll also understand how to expose only a subset of your APIs for external consumption by using routing functionality available in Application Gateway. In the example, the APIs are highlighted in green.
In the first setup example, all your APIs are managed only from within your virtual network. Internal consumers can access all your internal and external APIs. Traffic never goes out to the internet. High-performance connectivity can be delivered via Azure ExpressRoute circuits. In the example, the internal consumers are highlighted in orange.
Resource Manager requires that all resource groups specify a location. This loca
The following example shows how to create a virtual network by using Resource Manager. The virtual network in this example consists of separate subnets for Application Gateway and API Management.
-1. Create network security groups (NSGs) and NSG rules for the Application Gateway and API Management subnets.
+1. Create a network security group (NSG) and NSG rules for the Application Gateway subnet.
```powershell $appGwRule1 = New-AzNetworkSecurityRuleConfig -Name appgw-in -Description "AppGw inbound" `
The following example shows how to create a virtual network by using Resource Ma
$appGwRule2 = New-AzNetworkSecurityRuleConfig -Name appgw-in-internet -Description "AppGw inbound Internet" ` -Access Allow -Protocol "TCP" -Direction Inbound -Priority 110 -SourceAddressPrefix ` Internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 443
+
$appGwNsg = New-AzNetworkSecurityGroup -ResourceGroupName $resGroupName -Location $location -Name ` "NSG-APPGW" -SecurityRules $appGwRule1, $appGwRule2
+ ```
+
+1. Create a network security group (NSG) and NSG rules for the API Management subnet. [API Management stv2 requires several specific NSG rules](api-management-using-with-internal-vnet.md#enable-vnet-connection).
+
+ ```powershell
+ $apimRule1 = New-AzNetworkSecurityRuleConfig -Name APIM-Management -Description "APIM inbound" `
+ -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix ApiManagement `
+ -SourcePortRange * -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 3443
+ $apimRule2 = New-AzNetworkSecurityRuleConfig -Name AllowAppGatewayToAPIM -Description "Allows inbound App Gateway traffic to APIM" `
+ -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 -SourceAddressPrefix "10.0.0.0/24" `
+ -SourcePortRange * -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 443
+ $apimRule3 = New-AzNetworkSecurityRuleConfig -Name AllowAzureLoadBalancer -Description "Allows inbound Azure Infrastructure Load Balancer traffic to APIM" `
+ -Access Allow -Protocol Tcp -Direction Inbound -Priority 120 -SourceAddressPrefix AzureLoadBalancer `
+ -SourcePortRange * -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 6390
+ $apimRule4 = New-AzNetworkSecurityRuleConfig -Name AllowKeyVault -Description "Allows outbound traffic to Azure Key Vault" `
+ -Access Allow -Protocol Tcp -Direction Outbound -Priority 100 -SourceAddressPrefix "10.0.1.0/24" `
+ -SourcePortRange * -DestinationAddressPrefix AzureKeyVault -DestinationPortRange 443
- $apimRule1 = New-AzNetworkSecurityRuleConfig -Name apim-in -Description "APIM inbound" `
- -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix `
- ApiManagement -SourcePortRange * -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 3443
$apimNsg = New-AzNetworkSecurityGroup -ResourceGroupName $resGroupName -Location $location -Name `
- "NSG-APIM" -SecurityRules $apimRule1
+ "NSG-APIM" -SecurityRules $apimRule1, $apimRule2, $apimRule3, $apimRule4
``` 1. Assign the address range 10.0.0.0/24 to the subnet variable to be used for Application Gateway while you create a virtual network.
The following example shows how to create a virtual network by using Resource Ma
The following example shows how to create an API Management instance in a virtual network configured for internal access only.
+1. API Management stv2 requires a public IP with a `DomainNameLabel`:
+
+ ```powershell
+ $apimPublicIpAddressId = New-AzPublicIpAddress -ResourceGroupName $resGroupName -name "pip-apim" -location $location `
+ -AllocationMethod Static -Sku Standard -Force -DomainNameLabel "apim-contoso"
+ ```
+ 1. Create an API Management virtual network object by using the subnet `$apimSubnetData` you created. ```powershell
The following example shows how to create an API Management instance in a virtua
1. Create an API Management instance inside the virtual network. This example creates the service in the Developer service tier. Substitute a unique name for your API Management instance. ```powershell
+ $domain = "contoso.net"
$apimServiceName = "ContosoApi" # API Management service instance name, must be globally unique $apimOrganization = "Contoso" # Organization name
- $apimAdminEmail = "admin@contoso.com" # Administrator's email address
- $apimService = New-AzApiManagement -ResourceGroupName $resGroupName -Location $location -Name $apimServiceName -Organization $apimOrganization -AdminEmail $apimAdminEmail -VirtualNetwork $apimVirtualNetwork -VpnType "Internal" -Sku "Developer"
+ $apimAdminEmail = "admin@contoso.net" # Administrator's email address
+
+ $apimService = New-AzApiManagement -ResourceGroupName $resGroupName -Location $location -Name $apimServiceName -Organization $apimOrganization `
+ -AdminEmail $apimAdminEmail -VirtualNetwork $apimVirtualNetwork -VpnType "Internal" -Sku "Developer" -PublicIpAddressId $apimPublicIpAddressId.Id
``` It can take between 30 and 40 minutes to create and activate an API Management instance in this tier. After the previous command succeeds, see [DNS configuration required to access internal virtual network API Management service](api-management-using-with-internal-vnet.md#dns-configuration) to confirm access to it.
To set up custom domain names in API Management:
1. Initialize the following variables with the details of the certificates with private keys for the domains and the trusted root certificate. In this example, we use `api.contoso.net`, `portal.contoso.net`, and `management.contoso.net`. ```powershell
- $gatewayHostname = "api.contoso.net" # API gateway host
- $portalHostname = "portal.contoso.net" # API developer portal host
- $managementHostname = "management.contoso.net" # API management endpoint host
+ $gatewayHostname = "api.$domain" # API gateway host
+ $portalHostname = "portal.$domain" # API developer portal host
+ $managementHostname = "management.$domain" # API management endpoint host
$gatewayCertPfxPath = "C:\Users\Contoso\gateway.pfx" # Full path to api.contoso.net .pfx file $portalCertPfxPath = "C:\Users\Contoso\portal.pfx" # Full path to portal.contoso.net .pfx file $managementCertPfxPath = "C:\Users\Contoso\management.pfx" # Full path to management.contoso.net .pfx file
To configure a private DNS zone for DNS resolution in the virtual network:
1. Create a private DNS zone and link the virtual network. ```powershell
- $myZone = New-AzPrivateDnsZone -Name "contoso.net" -ResourceGroupName $resGroupName
- $link = New-AzPrivateDnsVirtualNetworkLink -ZoneName contoso.net `
+ $myZone = New-AzPrivateDnsZone -Name $domain -ResourceGroupName $resGroupName
+ $link = New-AzPrivateDnsVirtualNetworkLink -ZoneName $domain `
-ResourceGroupName $resGroupName -Name "mylink" ` -VirtualNetworkId $vnet.id ```
To configure a private DNS zone for DNS resolution in the virtual network:
```powershell $apimIP = $apimService.PrivateIPAddresses[0]
- New-AzPrivateDnsRecordSet -Name api -RecordType A -ZoneName contoso.net `
+ New-AzPrivateDnsRecordSet -Name api -RecordType A -ZoneName $domain `
-ResourceGroupName $resGroupName -Ttl 3600 ` -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address $apimIP)
- New-AzPrivateDnsRecordSet -Name portal -RecordType A -ZoneName contoso.net `
+ New-AzPrivateDnsRecordSet -Name portal -RecordType A -ZoneName $domain `
-ResourceGroupName $resGroupName -Ttl 3600 ` -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address $apimIP)
- New-AzPrivateDnsRecordSet -Name management -RecordType A -ZoneName contoso.net `
+ New-AzPrivateDnsRecordSet -Name management -RecordType A -ZoneName $domain `
-ResourceGroupName $resGroupName -Ttl 3600 ` -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address $apimIP) ```
Create a Standard public IP resource **publicIP01** in the resource group.
```powershell $publicip = New-AzPublicIpAddress -ResourceGroupName $resGroupName `
- -name "publicIP01" -location $location -AllocationMethod Static -Sku Standard
+ -name "pip-appgateway" -location $location -AllocationMethod Static -Sku Standard
``` An IP address is assigned to the application gateway when the service starts.
All configuration items must be set up before you create the application gateway
```powershell $gatewayRule = New-AzApplicationGatewayRequestRoutingRule -Name "gatewayrule" ` -RuleType Basic -HttpListener $gatewayListener -BackendAddressPool $apimGatewayBackendPool `
- -BackendHttpSettings $apimPoolGatewaySetting
+ -BackendHttpSettings $apimPoolGatewaySetting -Priority 10
$portalRule = New-AzApplicationGatewayRequestRoutingRule -Name "portalrule" ` -RuleType Basic -HttpListener $portalListener -BackendAddressPool $apimPortalBackendPool `
- -BackendHttpSettings $apimPoolPortalSetting
+ -BackendHttpSettings $apimPoolPortalSetting -Priority 20
$managementRule = New-AzApplicationGatewayRequestRoutingRule -Name "managementrule" ` -RuleType Basic -HttpListener $managementListener -BackendAddressPool $apimManagementBackendPool `
- -BackendHttpSettings $apimPoolManagementSetting
+ -BackendHttpSettings $apimPoolManagementSetting -Priority 30
``` > [!TIP]
app-service Operating System Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md
At its core, App Service is a service running on top of the Azure PaaS (platform
- An operating system drive (`%SystemDrive%`), whose size varies depending on the size of the VM. - A resource drive (`%ResourceDrive%`) used by App Service internally.
+A best practice is to always use the environment variables `%SystemDrive%` and `%ResourceDrive%` instead of hard-coded file paths. The root path returned from these two environment variables has shifted over time from `d:\` to `c:\`. However, older applications with hard-coded file path references to `d:\` continue to work because the App Service platform automatically remaps `d:\` to point at `c:\`. As noted above, we highly recommend always using the environment variables when building file paths to avoid confusion over platform changes to the default root file path.
It is important to monitor your disk utilization as your application grows. If the disk quota is reached, it can have adverse effects on your application. For example: - The app may throw an error indicating not enough space on the disk.
app-service Overview Hosting Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md
The _pricing tier_ of an App Service plan determines what App Service features y
- **Shared compute**: **Free** and **Shared**, the two base tiers, run an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that runs on the shared resources, and the resources cannot scale out. - **Dedicated compute**: The **Basic**, **Standard**, **Premium**, **PremiumV2**, and **PremiumV3** tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available to you for scale-out.
+- **Isolated**: The **Isolated** and **IsolatedV2** tiers run dedicated Azure VMs on dedicated Azure Virtual Networks. These tiers provide network isolation on top of compute isolation for your apps, along with the maximum scale-out capabilities.
[!INCLUDE [app-service-dev-test-note](../../includes/app-service-dev-test-note.md)]
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 04/03/2023
# Azure Policy Regulatory Compliance controls for Azure App Service [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
+provides Microsoft created and managed initiative definitions, known as *built-ins*, for the
**compliance domains** and **security controls** related to different compliance standards. This page lists the **compliance domains** and **security controls** for Azure App Service. You can assign the built-ins for a **security control** individually to help make your Azure resources
compliant with the specific standard.
## Release notes
+### April 2023
+
+- **App Service apps that use Java should use the latest 'Java version'**
+ - Rename of policy to "App Service apps that use Java should use a specified 'Java version'"
+ - Update policy so that it requires a version specification before assignment
+- **App Service apps that use Python should use the latest 'Python version'**
+ - Rename of policy to "App Service apps that use Python should use a specified 'Python version'"
+ - Update policy so that it requires a version specification before assignment
+- **Function apps that use Java should use the latest 'Java version'**
+ - Rename of policy to "Function apps that use Java should use a specified 'Java version'"
+ - Update policy so that it requires a version specification before assignment
+- **Function apps that use Python should use the latest 'Python version'**
+ - Rename of policy to "Function apps that use Python should use a specified 'Python version'"
+ - Update policy so that it requires a version specification before assignment
+- **App Service apps that use PHP should use the latest 'PHP version'**
+ - Rename of policy to "App Service apps that use PHP should use a specified 'PHP version'"
+ - Update policy so that it requires a version specification before assignment
+- **App Service app slots that use Python should use a specified 'Python version'**
+ - New policy created
+- **Function app slots that use Python should use a specified 'Python version'**
+ - New policy created
+- **App Service app slots that use PHP should use a specified 'PHP version'**
+ - New policy created
+- **App Service app slots that use Java should use a specified 'Java version'**
+ - New policy created
+- **Function app slots that use Java should use a specified 'Java version'**
+ - New policy created
+ ### November 2022 - Deprecation of policy **App Service apps should enable outbound non-RFC 1918 traffic to Azure Virtual Network**
compliant with the specific standard.
- Deprecation of policy **Configure App Services to disable public network access** - Replaced by "Configure App Service apps to disable public network access" - Deprecation of policy **App Services should disable public network access**
- - Replaced by "App Service apps should disable public network access" to support _Deny_ effect
+ - Replaced by "App Service apps should disable public network access" to support *Deny* effect
- **App Service apps should disable public network access** - New policy created - **App Service app slots should disable public network access**
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
The `azd up` command created all of the resources for the sample application in
* **Azure Virtual Network**: A virtual network was created to enable the provisioned resources to securely connect and communicate with one another. Related configurations such as setting up a private DNS zone link were also applied. * **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps. * **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case a Linux instance was created and configured to run Python apps. Additional configurations were also applied to the app service, such as setting the Postgres connection string and secret keys.
-* **Azure Database for PostgresSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.
+* **Azure Database for PostgreSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.
* **Azure Application Insights**: Application Insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application. You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand how each of these resources was provisioned in more detail. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code:
Advance to the next tutorial to learn how to secure your app with a custom domai
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
Title: Azure Automation Update Management Supported Clients
description: This article describes the supported Windows and Linux operating systems with Azure Automation Update Management. Previously updated : 01/04/2023 Last updated : 04/17/2023
All operating systems are assumed to be x64. x86 is not supported for any operat
|Operating system |Notes | |||
+| Windows Server 2022 (Datacenter)| |
|Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2(Datacenter/Standard)<br><br>Windows Server 2012 | | |Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. |
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md
Service account owners can manage consent requests and private endpoints through
### Private endpoints for App Configuration
-When creating a private endpoint, you must specify the App Configuration store to which it connects. If you have multiple App Configuration stores, you need a separate private endpoint for each store.
+When creating a private endpoint, you must specify the App Configuration store to which it connects. If you enable geo-replication for an App Configuration store, you can connect to all replicas of the store using the same private endpoint. If you have multiple App Configuration stores, you need a separate private endpoint for each store.
### Connecting to private endpoints
When you create a private endpoint, the DNS CNAME resource record for the config
When you resolve the endpoint URL from within the VNet hosting the private endpoint, it resolves to the private endpoint of the store. When resolved from outside the VNet, the endpoint URL resolves to the public endpoint. When you create a private endpoint, the public endpoint is disabled.
-If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `[Your-store-name].privatelink.azconfig.io` with the private endpoint IP address.
+If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `[Your-store-name].privatelink.azconfig.io` (or `[Your-store-name]-[replica-name].privatelink.azconfig.io` for a replica if geo-replication is enabled) with the private endpoint IP address.
> [!TIP] > When using a custom or on-premises DNS server, you should configure your DNS server to resolve the store name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
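As a hedged example of the second option (the resource group, store name, and IP address are placeholders), an A record can be added to a `privatelink.azconfig.io` private DNS zone with the Azure CLI:

```bash
# Illustrative sketch: point the store's privatelink name at the private endpoint IP
# (resource group, zone, record name, and IP address are placeholders)
az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name privatelink.azconfig.io \
  --record-set-name my-appconfig-store \
  --ipv4-address 10.0.0.5
```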
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
The configuration refresh is triggered by the incoming requests to your web app.
![Launching updated quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-after.png)
+## Logging and monitoring
+
+Logs are output upon configuration refresh and contain detailed information on key-values retrieved from your App Configuration store and configuration changes made to your application.
+
+- A default `ILoggerFactory` is added automatically when `services.AddAzureAppConfiguration()` is invoked. The App Configuration provider uses this `ILoggerFactory` to create an instance of `ILogger`, which outputs these logs. ASP.NET Core uses `ILogger` for logging by default, so you don't need to make additional code changes to enable logging for the App Configuration provider.
+- Logs are output at different log levels. The default level is `Information`.
+
+ | Log Level | Description |
+ |||
+ | Debug | Logs include the key and label of key-values your application monitors for changes from your App Configuration store. The information also includes whether the key-value has changed compared with what your application has already loaded. Enable logs at this level to troubleshoot your application if a configuration change didn't happen as expected. |
+ | Information | Logs include the keys of configuration settings updated during a configuration refresh. Values of configuration settings are omitted from the log to avoid leaking sensitive data. You can monitor logs at this level to ensure your application picks up expected configuration changes. |
+ | Warning | Logs include failures and exceptions that occurred during configuration refresh. Occasional occurrences can be ignored because the configuration provider will continue using the cached data and attempt to refresh the configuration next time. You can monitor logs at this level for repetitive warnings that may indicate potential issues. For example, you rotated the connection string but forgot to update your application. |
+
+ You can enable logging at the `Debug` log level by adding the following example to your `appsettings.json` file. This example applies to all other log levels as well.
+ ```json
+ "Logging": {
+ "LogLevel": {
+ "Microsoft.Extensions.Configuration.AzureAppConfiguration": "Debug"
+ }
+ }
+ ```
+- The logging category is `Microsoft.Extensions.Configuration.AzureAppConfiguration.Refresh`, which appears before each log. Here are some example logs at each log level:
+ ```console
+ dbug: Microsoft.Extensions.Configuration.AzureAppConfiguration.Refresh[0]
+ Key-value read from App Configuration. Change:'Modified' Key:'ExampleKey' Label:'ExampleLabel' Endpoint:'https://examplestore.azconfig.io'
+
+ info: Microsoft.Extensions.Configuration.AzureAppConfiguration.Refresh[0]
+ Setting updated. Key:'ExampleKey'
+
+ warn: Microsoft.Extensions.Configuration.AzureAppConfiguration.Refresh[0]
+ A refresh operation failed while resolving a Key Vault reference.
+ Key vault error. ErrorCode:'SecretNotFound' Key:'ExampleKey' Label:'ExampleLabel' Etag:'6LaqgBQM9C_Do2XyZa2gAIfj_ArpT52-xWwDSLb2hDo' SecretIdentifier:'https://examplevault.vault.azure.net/secrets/ExampleSecret'
+ ```
+
+Using `ILogger` is the preferred method in ASP.NET applications and is prioritized as the logging source if an instance of `ILoggerFactory` is present. However, if `ILoggerFactory` is not available, logs can alternatively be enabled and configured through the [instructions for .NET Core apps](./enable-dynamic-configuration-dotnet-core.md#logging-and-monitoring). For more information, see [logging in .NET Core and ASP.NET Core](/aspnet/core/fundamentals/logging).
+
+> [!NOTE]
+> Logging is available if you use version **6.0.0** or later of any of the following packages.
+> - `Microsoft.Extensions.Configuration.AzureAppConfiguration`
+> - `Microsoft.Azure.AppConfiguration.AspNetCore`
+> - `Microsoft.Azure.AppConfiguration.Functions.Worker`
+ ## Clean up resources [!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)]
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
Calling the `ConfigureRefresh` method alone won't cause the configuration to ref
> [!NOTE] > Since the cache expiration time was set to 10 seconds using the `SetCacheExpiration` method while specifying the configuration for the refresh operation, the value for the configuration setting will only be updated if at least 10 seconds have elapsed since the last refresh for that setting.
+## Logging and monitoring
+
+Logs are output upon configuration refresh and contain detailed information on key-values retrieved from your App Configuration store and configuration changes made to your application. If you have an ASP.NET Core application, see these instructions for [Logging and Monitoring in ASP.NET Core](./enable-dynamic-configuration-aspnet-core.md#logging-and-monitoring). Otherwise, you can enable logging using the instructions for [logging with the Azure SDK](/dotnet/azure/sdk/logging).
+
+- Logs are output at different event levels. The default level is `Informational`.
+
+ | Event Level | Description |
+ |||
+ | Verbose | Logs include the key and label of key-values your application monitors for changes from your App Configuration store. The information also includes whether the key-value has changed compared with what your application has already loaded. Enable logs at this level to troubleshoot your application if a configuration change didn't happen as expected. |
+ | Informational | Logs include the keys of configuration settings updated during a configuration refresh. Values of configuration settings are omitted from the log to avoid leaking sensitive data. You can monitor logs at this level to ensure your application picks up expected configuration changes. |
+ | Warning | Logs include failures and exceptions that occurred during configuration refresh. Occasional occurrences can be ignored because the configuration provider will continue using the cached data and attempt to refresh the configuration next time. You can monitor logs at this level for repetitive warnings that may indicate potential issues. For example, you rotated the connection string but forgot to update your application. |
+
+ You can enable logging at the `Verbose` event level by specifying the `EventLevel.Verbose` parameter, as done in the following example. These instructions apply to all other event levels as well. This example also enables logs for only the `Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh` category.
+ ```csharp
+ using var listener = new AzureEventSourceListener((eventData, text) =>
+ {
+ if (eventData.EventSource.Name == "Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh")
+ {
+ Console.WriteLine("[{1}] {0}: {2}", eventData.EventSource.Name, eventData.Level, text);
+ }
+ }, EventLevel.Verbose);
+ ```
+- The logging category is `Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh`, which appears before each log. Here are some example logs at each event level:
+ ```console
+ [Verbose] Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh:
+ Key-value read from App Configuration. Change:'Modified' Key:'ExampleKey' Label:'ExampleLabel' Endpoint:'https://examplestore.azconfig.io'
+
+ [Informational] Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh:
+ Setting updated. Key:'ExampleKey'
+
+ [Warning] Microsoft-Extensions-Configuration-AzureAppConfiguration-Refresh:
+ A refresh operation failed while resolving a Key Vault reference.
+ Key vault error. ErrorCode:'SecretNotFound' Key:'ExampleKey' Label:'ExampleLabel' Etag:'6LaqgBQM9C_Do2XyZa2gAIfj_ArpT52-xWwDSLb2hDo' SecretIdentifier:'https://examplevault.vault.azure.net/secrets/ExampleSecret'
+ ```
+
+> [!NOTE]
+> Logging is available if you use version **6.0.0** or later of any of the following packages.
+> - `Microsoft.Extensions.Configuration.AzureAppConfiguration`
+> - `Microsoft.Azure.AppConfiguration.AspNetCore`
+> - `Microsoft.Azure.AppConfiguration.Functions.Worker`
+ ## Clean up resources [!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)]
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 03/17/2023 Last updated : 04/17/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
+### 1.7.3 (April 2023)
+
+Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+
+- source-controller: v0.36.1
+- kustomize-controller: v0.35.1
+- helm-controller: v0.31.2
+- notification-controller: v0.33.0
+- image-automation-controller: v0.31.0
+- image-reflector-controller: v0.26.1
+
+Changes made for this version:
+
+- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+- Fixes an issue that caused resources deployed as part of a Flux configuration to persist even when the configuration was deleted with the prune flag set to `true`
+- Adds kubelet identity support for the image-reflector-controller when [installing the `microsoft.flux` extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled)
+
### 1.7.0 (March 2023)

Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)
Changes made for this version:
- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled)
- Enables reconciler for flux extension
-### 1.6.1 (October 2022)
-
-Flux version: [Release v0.35.0](https://github.com/fluxcd/flux2/releases/tag/v0.35.0)
-
-- source-controller: v0.30.1
-- kustomize-controller: v0.29.0
-- helm-controller: v0.25.0
-- notification-controller: v0.27.0
-- image-automation-controller: v0.26.0
-- image-reflector-controller: v0.22.0
-
-Changes made for this version:
-
-- Upgrades Flux to [v0.35.0](https://github.com/fluxcd/flux2/releases/tag/v0.35.0)
-- Implements fix for a security issue where some Flux controllers could be vulnerable to a denial of service attack. Users that have permissions to change Flux's objects, either through a Flux source or directly within a cluster, could provide invalid data to fields `spec.Interval` or `spec.Timeout` (and structured variations of these fields), causing the entire object type to stop being processed. This issue had two root causes: [Kubernetes type `metav1.Duration` not being fully compatible with the Go type `time.Duration`](https://github.com/kubernetes/apimachinery/issues/131), or a lack of validation within Flux to restrict allowed values.
-- Adds support for [installing the `microsoft.flux` extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled)
-- Fixes bug where [deleting the extension may fail on AKS with Windows node pool](https://github.com/Azure/AKS/issues/3191)
-- Adds support for sasToken for Azure blob storage at account level as well as container level
-
## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes

[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. The Dapr extension eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-94
### Fixed
+- Fixed an issue that could cause the guest configuration service (gc_service) to repeatedly crash and restart on Linux systems
- Resolved a rare condition under which the guest configuration service (gc_service) could consume excessive CPU resources - Removed "sudo" calls in internal install script that could be blocked if SELinux is enabled - Reduced how long network checks wait before determining a network endpoint is unreachable
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
If two agents use the same configuration, you will encounter inconsistent behavi
## Supported operating systems
-Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. Azure Arc does not run on x86 (32-bit) or ARM-based architectures.
+Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures.
* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
  * Both Desktop and Server Core experiences are supported
Azure Arc supports the following Windows and Linux operating systems. Only x86-6
* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance))
* Windows IoT Enterprise
* Azure Stack HCI
+* CBL-Mariner 1.0, 2.0
* Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS
* Debian 10 and 11
* CentOS Linux 7 and 8
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
See the developer guide for your language to learn more about writing logs from
+ [C# (.NET class library)](functions-dotnet-class-library.md#logging)
+ [Java](functions-reference-java.md#logger)
-+ [JavaScript](functions-reference-node.md#write-trace-output-to-logs)
++ [JavaScript](functions-reference-node.md#logging)
+ [PowerShell](functions-reference-powershell.md#logging)
+ [Python](functions-reference-python.md#logging)
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
zone_pivot_groups: functions-nodejs-model
# Azure Functions JavaScript developer guide
-This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
+This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you have already read the [Azure Functions developer guide](functions-reference.md).
> [!IMPORTANT] > The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package you are using in your app. If you do not have that package listed in your `package.json`, the default is v3. Learn more about the differences between v3 and v4 in the [upgrade guide](./functions-node-upgrade-v4.md).
The following table shows each version of the Node.js programming model along wi
| 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
| 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
-
-## JavaScript function basics
-
-A JavaScript (Node.js) function is an exported `function` that executes when triggered ([triggers are configured in function.json](functions-triggers-bindings.md)). The first argument passed to every function is a `context` object, which is used for receiving and sending binding data, logging, and communicating with the runtime.
- ## Folder structure
-The required folder structure for a JavaScript project looks like the following. This default can be changed. For more information, see the [scriptFile](#using-scriptfile) section.
+
+The required folder structure for a JavaScript project looks like the following example:
```
-FunctionsProject
- | - MyFirstFunction
+<project_root>/
+ | - .vscode/
+ | - node_modules/
+ | - myFirstFunction/
| | - index.js | | - function.json
- | - MySecondFunction
+ | - mySecondFunction/
| | - index.js | | - function.json
- | - SharedCode
- | | - myFirstHelperFunction.js
- | | - mySecondHelperFunction.js
- | - node_modules
+ | - .funcignore
| - host.json
+ | - local.settings.json
| - package.json ```
-At the root of the project, there's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. Each function has a folder with its own code file (.js) and binding configuration file (function.json). The name of `function.json`'s parent directory is always the name of your function.
+The main project folder, *<project_root>*, can contain the following files:
+
+- **.vscode/**: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+- **myFirstFunction/function.json**: Contains configuration for the function's trigger, inputs, and outputs. The name of the directory determines the name of your function.
+- **myFirstFunction/index.js**: Stores your function code. To change this default file path, see [using scriptFile](#using-scriptfile).
+- **.funcignore**: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor settings, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings from being published.
+- **host.json**: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+- **local.settings.json**: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+- **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts.
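As an illustration only, a minimal `package.json` for this structure might look like the following sketch (the name, version, and scripts are hypothetical placeholders):

```json
{
  "name": "my-function-app",
  "version": "1.0.0",
  "scripts": {
    "start": "func start"
  },
  "dependencies": {}
}
```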
::: zone-end ::: zone pivot="nodejs-model-v4"
-## Folder structure
- The recommended folder structure for a JavaScript project looks like the following example: ``` <project_root>/ | - .vscode/
+ | - node_modules/
| - src/ | | - functions/ | | | - myFirstFunction.js
The recommended folder structure for a JavaScript project looks like the followi
The main project folder, *<project_root>*, can contain the following files:
-* *.vscode/*: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings).
-* *src/functions/*: The default location for all functions and their related triggers and bindings.
-* *test/*: (Optional) Contains the test cases of your function app.
-* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings being published.
-* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
-* *local.settings.json*: Used to store app settings and connection strings when it's running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
-* *package.json*: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts.
+- **.vscode/**: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+- **src/functions/**: The default location for all functions and their related triggers and bindings.
+- **test/**: (Optional) Contains the test cases of your function app.
+- **.funcignore**: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor settings, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings from being published.
+- **host.json**: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+- **local.settings.json**: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+- **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts.
::: zone-end
+<a name="exporting-an-async-function"></a>
+<a name="exporting-a-function"></a>
-<a name="#exporting-an-async-function"></a>
+## Registering a function
-## Exporting a function
-JavaScript functions must be exported via [`module.exports`](https://nodejs.org/api/modules.html#modules_module_exports) (or [`exports`](https://nodejs.org/api/modules.html#modules_exports)). Your exported function should be a JavaScript function that executes when triggered.
+The v3 model registers a function based on the existence of two files. First, you need a `function.json` file located in a folder one level down from the root of your app. The name of the folder determines the function's name and the file contains configuration for your function's inputs/outputs. Second, you need a JavaScript file containing your code. By default, the model looks for an `index.js` file in the same folder as your `function.json`. Your code must export a function using [`module.exports`](https://nodejs.org/api/modules.html#modules_module_exports) (or [`exports`](https://nodejs.org/api/modules.html#modules_exports)). To customize the file location or export name of your function, see [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point).
-By default, the Functions runtime looks for your function in `index.js`, where `index.js` shares the same parent directory as its corresponding `function.json`. In the default case, your exported function should be the only export from its file or the export named `run` or `index`. To configure the file location and export name of your function, see [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point).
+The function you export should always be declared as an [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) in the v3 model. You can export a synchronous function, but then you must call [`context.done()`](#contextdone) to signal that your function is complete; this pattern is deprecated and not recommended.
-Your exported function is passed several arguments on execution. The first argument it takes is always a `context` object.
+Your function is passed an [invocation `context`](#invocation-context) as the first argument and your [inputs](#inputs) as the remaining arguments.
-When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise), you don't need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes.
+The following example is a simple function that logs that it was triggered and responds with `Hello, world!`:
-The following example is a simple function that logs that it was triggered and immediately completes execution.
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "authLevel": "anonymous",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+}
+```
```javascript
-module.exports = async function (context) {
- context.log('JavaScript trigger function processed a request.');
+module.exports = async function (context, request) {
+ context.log('Http function was triggered.');
+ context.res = { body: 'Hello, world!' };
}; ```
-When exporting an async function, you can also configure an output binding to take the `return` value. This option is recommended if you only have one output binding.
-
-If your function is synchronous (doesn't return a Promise), you must pass the `context` object, as calling `context.done` is required for correct use. This option isn't recommended, for more information on the alternative, see [Use `async` and `await`](#use-async-and-await).
-```javascript
-// You should include `context`
-// Other arguments like `myTrigger` are optional
-module.exports = function(context, myTrigger, myInput, myOtherInput) {
- // function logic goes here :)
- context.done();
-};
-```
-### Returning from the function
+The programming model loads your functions based on the `main` field in your `package.json`. This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`.
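For example, to load every function file under the default v4 folder structure, the `main` field might look like this (only the relevant field is shown):

```json
{
  "main": "src/functions/*.js"
}
```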
-To assign an output using `return`, change the `name` property to `$return` in `function.json`.
+In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an `options` object.
-```json
-{
- "type": "http",
- "direction": "out",
- "name": "$return"
-}
-```
+Registering a function can be done from any file in your project, as long as that file is loaded (directly or indirectly) based on the `main` field in your `package.json` file. The function should be registered at a global scope because you can't register functions once executions have started.
-In this case, your function should look like the following example:
+The following example is a simple function that logs that it was triggered and responds with `Hello, world!`:
```javascript
-module.exports = async function (context, req) {
- context.log('JavaScript HTTP trigger function processed a request.');
- // You can call and await an async method here
- return {
- body: "Hello, world!"
- };
-}
+const { app } = require('@azure/functions');
+
+app.http('httpTrigger1', {
+ methods: ['POST', 'GET'],
+ handler: async (request, context) => {
+ context.log('Http function was triggered.');
+ return { body: 'Hello, world!' };
+ }
+});
```
-## Bindings
-In JavaScript, [bindings](functions-triggers-bindings.md) are configured and defined in a function's function.json. Functions interact with bindings in several ways.
+
+<a name="bindings"></a>
+
+## Inputs and outputs
++
+Your function is required to have exactly one primary input called the trigger. It may also have secondary inputs and/or outputs. Inputs and outputs are configured in your `function.json` files and are also referred to as [bindings](./functions-triggers-bindings.md).
### Inputs
-Input are divided into two categories in Azure Functions: one is the trigger input and the other is the secondary input. Trigger and other input bindings (bindings of `direction === "in"`) are used in the following ways:
-
- ```javascript
- module.exports = async function(context, myTrigger, myInput, myOtherInput) { ... };
- ```
-
-
- ```javascript
- module.exports = async function(context) {
- context.log("This is myTrigger: " + context.bindings.myTrigger);
- context.log("This is myInput: " + context.bindings.myInput);
- context.log("This is myOtherInput: " + context.bindings.myOtherInput);
- };
- ```
+
+Inputs are bindings with `direction` set to `in`. The main difference between a trigger and a secondary input is that the `type` for a trigger ends in `Trigger`, for example, type [`blobTrigger`](./functions-bindings-storage-blob-trigger.md) versus type [`blob`](./functions-bindings-storage-blob-input.md). Most functions use only a trigger, and only a few secondary input types are supported.
+
+Inputs can be accessed in several ways:
+
+- **_[Recommended]_ As arguments passed to your function:** Use the arguments in the same order that they're defined in `function.json`. The `name` property defined in `function.json` doesn't need to match the name of your argument, although it's recommended for the sake of organization.
+
+ ```javascript
+ module.exports = async function (context, myTrigger, myInput, myOtherInput) { ... };
+ ```
+
+- **As properties of [`context.bindings`](#contextbindings):** Use the key matching the `name` property defined in `function.json`.
+
+ ```javascript
+ module.exports = async function (context) {
+ context.log("This is myTrigger: " + context.bindings.myTrigger);
+ context.log("This is myInput: " + context.bindings.myInput);
+ context.log("This is myOtherInput: " + context.bindings.myOtherInput);
+ };
+ ```
+
+<a name="returning-from-the-function"></a>
### Outputs
-Outputs (bindings of `direction === "out"`) can be set in several ways. In all cases, the `name` property of the binding as defined in *function.json* corresponds to the name of the object member written to in your function.
-
-You can assign data to output bindings in one of the following ways (don't combine these methods):
--- **_[Recommended for multiple outputs]_ Returning an object.** If you're using an async/Promise returning function, you can return an object with assigned output data. In the following example, the output bindings are named "httpResponse" and "queueOutput" in *function.json*.-
- ```javascript
- module.exports = async function(context) {
- let retMsg = 'Hello, world!';
- return {
- httpResponse: {
- body: retMsg
- },
- queueOutput: retMsg
- };
- };
- ```
-
-
-- **_[Recommended for single output]_ Returning a value directly and using the $return binding name.** This only works for async/Promise returning functions. See example in [exporting an async function](#exporting-a-function). -- **Assigning values to `context.bindings`** You can assign values directly to context.bindings.-
- ```javascript
- module.exports = async function(context) {
- let retMsg = 'Hello, world!';
- context.bindings.httpResponse = {
- body: retMsg
- };
- context.bindings.queueOutput = retMsg;
- };
- ```
-### Bindings data type
+Outputs are bindings with `direction` set to `out` and can be set in several ways:
-To define the data type for an input binding, use the `dataType` property in the binding definition. For example, to read the content of an HTTP request in binary format, use the type `binary`:
+- **_[Recommended for single output]_ Return the value directly:** If you're using an async function, you can return the value directly. You must change the `name` property of the output binding to `$return` in `function.json`, as shown in the following example:
-```json
-{
- "type": "httpTrigger",
- "name": "req",
- "direction": "in",
- "dataType": "binary"
-}
-```
+ ```json
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ```
-Options for `dataType` are: `binary`, `stream`, and `string`.
+ ```javascript
+ module.exports = async function (context, request) {
+ return {
+ body: "Hello, world!"
+ };
+ }
+ ```
+- **_[Recommended for multiple outputs]_ Return an object containing all outputs:** If you're using an async function, you can return an object with a property matching the name of each binding in your `function.json`. The following example uses output bindings named "httpResponse" and "queueOutput":
+ ```json
+ {
+ "name": "httpResponse",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "queueOutput",
+ "type": "queue",
+ "direction": "out",
+ "queueName": "helloworldqueue",
+ "connection": "storage_APPSETTING"
+ }
+ ```
-## Registering a function
+ ```javascript
+ module.exports = async function (context, request) {
+ let message = 'Hello, world!';
+ return {
+ httpResponse: {
+ body: message
+ },
+ queueOutput: message
+ };
+ };
+ ```
-The programming model loads your functions based on the `main` field in your `package.json`. This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`.
+- **Set values on `context.bindings`:** If you're not using an async function or you don't want to use the previous options, you can set values directly on `context.bindings`, where the key matches the name of the binding. The following example uses output bindings named "httpResponse" and "queueOutput":
-In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an `options` object.
+ ```json
+ {
+ "name": "httpResponse",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "queueOutput",
+ "type": "queue",
+ "direction": "out",
+ "queueName": "helloworldqueue",
+ "connection": "storage_APPSETTING"
+ }
+ ```
-Registering a function can be done from any file in your project, as long as that file is loaded (directly or indirectly) based on the `main` field in your `package.json` file. The function should be registered at a global scope because you can't register functions once executions have started.
+ ```javascript
+ module.exports = async function (context, request) {
+ let message = 'Hello, world!';
+ context.bindings.httpResponse = {
+ body: message
+ };
+ context.bindings.queueOutput = message;
+ };
+ ```
-The following example is a simple function that logs that it was triggered and responds with `Hello, world!`.
+### Bindings data type
-```javascript
-const { app } = require('@azure/functions');
+You can use the `dataType` property on an input binding to change the type of your input; however, it has some limitations:
+- In Node.js, only `string` and `binary` are supported (`stream` isn't)
+- For HTTP inputs, the `dataType` property is ignored. Instead, use properties on the `request` object to get the body in your desired format. For more information, see [HTTP request](#http-request).
-app.http('httpTrigger1', {
- methods: ['POST', 'GET'],
- handler: async (_request, context) => {
- context.log('Http function processed request');
+In the following example of a [storage queue trigger](./functions-bindings-storage-queue-trigger.md), the default type of `myQueueItem` is a `string`, but if you set `dataType` to `binary`, the type changes to a Node.js `Buffer`.
- return { body: 'Hello, world!' };
+```json
+{
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in",
+ "queueName": "helloworldqueue",
+ "connection": "storage_APPSETTING",
+ "dataType": "binary"
+}
+```
+
+```javascript
+const { Buffer } = require('node:buffer');
+
+module.exports = async function (context, myQueueItem) {
+ if (typeof myQueueItem === 'string') {
+ context.log('myQueueItem is a string');
+ } else if (Buffer.isBuffer(myQueueItem)) {
+ context.log('myQueueItem is a buffer');
}
-});
+};
```
-## Inputs and outputs
+
-Your function is required to have exactly one primary input called the trigger. It may also have secondary inputs, a primary output called the return output, and/or secondary outputs. Inputs and outputs are also referred to as bindings outside the context of the Node.js programming model. Before v4 of the model, these bindings were configured in `function.json` files.
+Your function is required to have exactly one primary input called the trigger. It may also have secondary inputs, a primary output called the return output, and/or secondary outputs. Inputs and outputs are also referred to as [bindings](./functions-triggers-bindings.md) outside the context of the Node.js programming model. Before v4 of the model, these bindings were configured in `function.json` files.
### Trigger input
The trigger is the only required input or output. For most trigger types, you re
```javascript const { app } = require('@azure/functions');+ app.http('helloWorld1', { route: 'hello/world', handler: async (request, ...) => {
app.http('helloWorld1', {
### Return output
-The return output is optional, and in some cases configured by default. For example, an http trigger registered with `app.http` is configured to return an http response output automatically. For most output types, you specify the return configuration on the `options` argument with the help of the `output` object exported from the `@azure/functions` module. During execution, you set this output by returning it from your handler.
+The return output is optional, and in some cases configured by default. For example, an HTTP trigger registered with `app.http` is configured to return an HTTP response output automatically. For most output types, you specify the return configuration on the `options` argument with the help of the `output` object exported from the `@azure/functions` module. During execution, you set this output by returning it from your handler.
+
+The following example uses a [timer trigger](./functions-bindings-timer.md) and a [storage queue output](./functions-bindings-storage-queue-output.md):
```javascript const { app, output } = require('@azure/functions');+ app.timer('timerTrigger1', { ... return: output.storageQueue({
app.timer('timerTrigger1', {
In addition to the trigger and return, you may specify extra inputs or outputs on the `options` argument when registering a function. The `input` and `output` objects exported from the `@azure/functions` module provide type-specific methods to help construct the configuration. During execution, you get or set the values with `context.extraInputs.get` or `context.extraOutputs.set`, passing in the original configuration object as the first argument.
-The following example is a function triggered by a storage queue, with an extra blob input that is copied to an extra blob output.
+The following example is a function triggered by a [storage queue](./functions-bindings-storage-queue-trigger.md), with an extra [storage blob input](./functions-bindings-storage-blob-input.md) that is copied to an extra [storage blob output](./functions-bindings-storage-blob-output.md). The queue message should be the name of a file; with the help of a [binding expression](./functions-bindings-expressions-patterns.md), it replaces `{queueTrigger}` as the name of the blob to copy.
```javascript const { app, input, output } = require('@azure/functions');
app.storageQueue('copyBlob1', {
The `app`, `trigger`, `input`, and `output` objects exported by the `@azure/functions` module provide type-specific methods for most types. For all the types that aren't supported, a `generic` method has been provided to allow you to manually specify the configuration. The `generic` method can also be used if you want to change the default settings provided by a type-specific method.
-The following example is a simple http triggered function using generic methods instead of type-specific methods.
+The following example is a simple HTTP triggered function using generic methods instead of type-specific methods.
```javascript const { app, output, trigger } = require('@azure/functions');
app.generic('helloWorld1', {
::: zone-end
+<a name="context-object"></a>
-## context object
+## Invocation context
-The runtime uses a `context` object to pass data to and from your function and the runtime. Used to read and set data from bindings and for writing to logs, the `context` object is always the first parameter passed to a function.
+Each invocation of your function is passed an invocation `context` object, used to read inputs, set outputs, write to logs, and read various metadata. In the v3 model, the context object is always the first argument passed to your handler.
-```javascript
-module.exports = async function(context){
+The `context` object has the following properties:
- // function logic goes here
+| Property | Description |
+| | |
+| **`invocationId`** | The ID of the current function invocation. |
+| **`executionContext`** | See [execution context](#contextexecutioncontext). |
+| **`bindings`** | See [bindings](#contextbindings). |
+| **`bindingData`** | Metadata about the trigger input for this invocation, not including the value itself. For example, an [event hub trigger](./functions-bindings-event-hubs-trigger.md) has an `enqueuedTimeUtc` property. |
+| **`traceContext`** | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). |
+| **`bindingDefinitions`** | The configuration of your inputs and outputs, as defined in `function.json`. |
+| **`req`** | See [HTTP request](#http-request). |
+| **`res`** | See [HTTP response](#http-response). |
- context.log("The function has executed.");
-};
-```
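The following minimal sketch (assuming an HTTP trigger configured in `function.json`) shows how a v3 handler might read a few of these properties:

```javascript
module.exports = async function (context, request) {
    // Metadata about this invocation
    context.log(`Invocation ID: ${context.invocationId}`);
    context.log(`Function name: ${context.executionContext.functionName}`);

    // Trigger metadata, not including the trigger value itself
    context.log(JSON.stringify(context.bindingData));

    context.res = { body: 'OK' };
};
```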
+### context.executionContext
-The context passed into your function exposes an `executionContext` property, which is an object with the following properties:
+The `context.executionContext` object has the following properties:
-| Property name | Type | Description |
-||||
-| `invocationId` | String | Provides a unique identifier for the specific function invocation. |
-| `functionName` | String | Provides the name of the running function |
-| `functionDirectory` | String | Provides the functions app directory. |
+| Property | Description |
+| | |
+| **`invocationId`** | The ID of the current function invocation. |
+| **`functionName`** | The name of the function that is being invoked. The name of the folder containing the `function.json` file determines the name of the function. |
+| **`functionDirectory`** | The folder containing the `function.json` file. |
+| **`retryContext`** | See [retry context](#contextexecutioncontextretrycontext). |
-The following example shows how to return the `invocationId`.
+#### context.executionContext.retryContext
-```javascript
-module.exports = async function (context, req) {
- context.res = {
- body: context.executionContext.invocationId
- };
-};
-```
+The `context.executionContext.retryContext` object has the following properties:
-## context.bindings property
+| Property | Description |
+| | |
+| **`retryCount`** | A number representing the current retry attempt. |
+| **`maxRetryCount`** | Maximum number of times an execution is retried. A value of `-1` means to retry indefinitely. |
+| **`exception`** | Exception that caused the retry. |
-```js
-context.bindings
-```
+<a name="contextbindings-property"></a>
-Returns a named object that is used to read or assign binding data. Input and trigger binding data can be accessed by reading properties on `context.bindings`. Output binding data can be assigned by adding data to `context.bindings`
+### context.bindings
-For example, the following binding definitions in your function.json let you access the contents of a queue from `context.bindings.myInput` and assign outputs to a queue using `context.bindings.myOutput`.
+The `context.bindings` object is used to read inputs or set outputs. The following example is a [storage queue trigger](./functions-bindings-storage-queue-trigger.md), which uses `context.bindings` to copy a [storage blob input](./functions-bindings-storage-blob-input.md) to a [storage blob output](./functions-bindings-storage-blob-output.md). The queue message's content replaces `{queueTrigger}` as the file name to be copied, with the help of a [binding expression](./functions-bindings-expressions-patterns.md).
```json {
- "type":"queue",
- "direction":"in",
- "name":"myInput"
- ...
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in",
+ "connection": "storage_APPSETTING",
+ "queueName": "helloworldqueue"
}, {
- "type":"queue",
- "direction":"out",
- "name":"myOutput"
- ...
+ "name": "myInput",
+ "type": "blob",
+ "direction": "in",
+ "connection": "storage_APPSETTING",
+ "path": "helloworld/{queueTrigger}"
+},
+{
+ "name": "myOutput",
+ "type": "blob",
+ "direction": "out",
+ "connection": "storage_APPSETTING",
+ "path": "helloworld/{queueTrigger}-copy"
} ``` ```javascript
-// myInput contains the input data, which may have properties such as "name"
-var author = context.bindings.myInput.name;
-// Similarly, you can set your output data
-context.bindings.myOutput = {
- some_text: 'hello world',
- a_number: 1 };
+module.exports = async function (context, myQueueItem) {
+ const blobValue = context.bindings.myInput;
+ context.bindings.myOutput = blobValue;
+};
```
-## context.bindingData property
+<a name="contextdone-method"></a>
-```js
-context.bindingData
-```
+### context.done
-Returns a named object that contains trigger metadata and function invocation data (`invocationId`, `sys.methodName`, `sys.utcNow`, `sys.randGuid`). For an example of trigger metadata, see this [event hubs example](functions-bindings-event-hubs-trigger.md).
+The `context.done` method is deprecated. Before async functions were supported, you would signal your function is done by calling `context.done()`:
-## context.done method
+```javascript
+module.exports = function (context, request) {
+ context.log("this pattern is now deprecated");
+ context.done();
+};
+```
-The `context.done` method is deprecated. The function should be marked as async even if there's no awaited function call inside the function, and the function doesn't need to call `context.done` to indicate the end of the function.
+Now, it's recommended to remove the call to `context.done()` and mark your function as async so that it returns a promise (even if you don't `await` anything). As soon as your function finishes (in other words, the returned promise resolves), the v3 model knows your function is done.
```javascript
-//you don't need an awaited function call inside to use async
-module.exports = async function (context, req) {
- context.log("you don't need an awaited function call inside to use async")
+module.exports = async function (context, request) {
+ context.log("you don't need context.done or an awaited call")
}; ```
-## context.log method
-```js
-context.log(message)
-```
-Allows you to write to the streaming function logs at the default trace level, with other logging levels available. Trace logging is described in detail in the next section.
-## Write trace output to logs
+Each invocation of your function is passed an invocation `context` object, with information about your invocation and methods used for logging. In the v4 model, the `context` object is typically the second argument passed to your handler.
-In Functions, you use the `context.log` methods to write trace output to the logs and the console. When you call `context.log()`, your message is written to the logs at the default trace level, which is the _info_ trace level. Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
+The `InvocationContext` class has the following properties:
-The following example writes a log at the info trace level, including the invocation ID:
+| Property | Description |
+| | |
+| **`invocationId`** | The ID of the current function invocation. |
+| **`functionName`** | The name of the function. |
+| **`extraInputs`** | Used to get the values of extra inputs. For more information, see [extra inputs and outputs](#extra-inputs-and-outputs). |
+| **`extraOutputs`** | Used to set the values of extra outputs. For more information, see [extra inputs and outputs](#extra-inputs-and-outputs). |
+| **`retryContext`** | See [retry context](#retry-context). |
+| **`traceContext`** | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). |
+| **`triggerMetadata`** | Metadata about the trigger input for this invocation, not including the value itself. For example, an [event hub trigger](./functions-bindings-event-hubs-trigger.md) has an `enqueuedTimeUtc` property. |
+| **`options`** | The options used when registering the function, after they've been validated and with defaults explicitly specified. |
-```javascript
-context.log("Something has happened. " + context.invocationId);
-```
+### Retry context
-All `context.log` methods support the same parameter format supported by the Node.js [util.format method](https://nodejs.org/api/util.html#util_util_format_format). Consider the following code, which writes function logs by using the default trace level:
+The `retryContext` object has the following properties:
-```javascript
-context.log('Node.js HTTP trigger function processed a request. RequestUri=' + req.originalUrl);
-context.log('Request Headers = ' + JSON.stringify(req.headers));
-```
+| Property | Description |
+| | |
+| **`retryCount`** | A number representing the current retry attempt. |
+| **`maxRetryCount`** | Maximum number of times an execution is retried. A value of `-1` means to retry indefinitely. |
+| **`exception`** | Exception that caused the retry. |
-You can also write the same code in the following format:
+For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies).
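As a sketch, assuming a retry policy is configured and using hypothetical queue settings, a v4 handler could inspect the retry context like this:

```javascript
const { app } = require('@azure/functions');

app.storageQueue('processOrder', {
    queueName: 'orders',
    connection: 'storage_APPSETTING',
    handler: async (queueItem, context) => {
        // retryContext is only populated when a retry policy is in effect
        if (context.retryContext && context.retryContext.retryCount > 0) {
            context.warn(`Retry ${context.retryContext.retryCount} of ${context.retryContext.maxRetryCount}`);
        }
        // Process queueItem here
    }
});
```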
-```javascript
-context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);
-context.log('Request Headers = ', JSON.stringify(req.headers));
-```
-> [!NOTE]
-> Don't use `console.log` to write trace outputs. Because output from `console.log` is captured at the function app level, it's not tied to a specific function invocation and isn't displayed in a specific function's logs. Also, version 1.x of the Functions runtime doesn't support using `console.log` to write to the console.
+<a name="contextlog-method"></a>
+<a name="write-trace-output-to-logs"></a>
-### Trace levels
+## Logging
-In addition to the default level, the following logging methods are available that let you write function logs at specific trace levels.
+In Azure Functions, it's recommended to use `context.log()` to write logs. Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
-| Method | Description |
-| - | |
-| **context.log.error(_message_)** | Writes an error-level event to the logs. |
-| **context.log.warn(_message_)** | Writes a warning-level event to the logs. |
-| **context.log.info(_message_)** | Writes to info level logging, or lower. |
-| **context.log.verbose(_message_)** | Writes to verbose level logging. |
+> [!NOTE]
+> If you use the alternative Node.js `console.log` method, those logs are tracked at the app-level and will *not* be associated with any specific function. It is *highly recommended* to use `context` for logging instead of `console` so that all logs are associated with a specific function.
-The following example writes the same log at the warning trace level, instead of the info level:
+The following example writes a log at the default "information" level, including the invocation ID:
```javascript
-context.log.warn("Something has happened. " + context.invocationId);
+context.log(`Something has happened. Invocation ID: "${context.invocationId}"`);
```
-Because _error_ is the highest trace level, this trace is written to the output at all trace levels as long as logging is enabled.
+<a name="trace-levels"></a>
-### Configure the trace level for logging
+### Log levels
+
+In addition to the default `context.log` method, the following methods are available that let you write logs at specific levels:
++
+| Method | Description |
+| | - |
+| **`context.log.error()`** | Writes an error-level event to the logs. |
+| **`context.log.warn()`** | Writes a warning-level event to the logs. |
+| **`context.log.info()`** | Writes an information-level event to the logs. |
+| **`context.log.verbose()`** | Writes a trace-level event to the logs. |
+++
+| Method | Description |
+| | - |
+| **`context.trace()`** | Writes a trace-level event to the logs. |
+| **`context.debug()`** | Writes a debug-level event to the logs. |
+| **`context.info()`** | Writes an information-level event to the logs. |
+| **`context.warn()`** | Writes a warning-level event to the logs. |
+| **`context.error()`** | Writes an error-level event to the logs. |
+
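For example, the same message written at the warning level looks like this in each model (a minimal sketch):

```javascript
// v3 model
context.log.warn(`Something unexpected happened. Invocation ID: "${context.invocationId}"`);

// v4 model
context.warn(`Something unexpected happened. Invocation ID: "${context.invocationId}"`);
```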
-Azure Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold settings depend on your version of the Functions runtime.
+<a name="configure-the-trace-level-for-logging"></a>
-To set the threshold for traces written to the logs, use the `logging.logLevel` property in the host.json file. This JSON object lets you define a default threshold for all functions in your function app, plus you can define specific thresholds for individual functions. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
+### Configure log level
+
+Azure Functions lets you define the threshold level to be used when tracking and viewing logs. To set the threshold, use the `logging.logLevel` property in the `host.json` file. This property lets you define a default level applied to all functions, or a threshold for each individual function. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
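For example, a `host.json` entry that sets a default level and a stricter threshold for one function might look like the following sketch (the function name is hypothetical):

```json
{
  "logging": {
    "logLevel": {
      "default": "Information",
      "Function.myFirstFunction": "Warning"
    }
  }
}
```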
+ ## Track custom data
const appInsights = require("applicationinsights");
appInsights.setup(); const client = appInsights.defaultClient;
-module.exports = async function (context, req) {
- context.log('JavaScript HTTP trigger function processed a request.');
-
+module.exports = async function (context, request) {
// Use this with 'tagOverrides' to correlate custom logs to the parent function invocation. var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent};
The `tagOverrides` parameter sets the `operation_Id` to the function's invocatio
::: zone-end -
-## Invocation context
-
-Each invocation of your function is passed an invocation context object, with extra information about the context and methods used for logging. The `context` object is typically the second argument passed to your handler.
-
-The `InvocationContext` class has the following properties:
-
-| Property | Description |
-| | |
-| `invocationId` | The ID of the current function invocation. |
-| `functionName` | The name of the function. |
-| `extraInputs` | Used to get the values of extra inputs. For more information, see [`Extra inputs and outputs`](#extra-inputs-and-outputs). |
-| `extraOutputs` | Used to set the values of extra outputs. For more information, see [`Extra inputs and outputs`](#extra-inputs-and-outputs). |
-| `retryContext` | The context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies). |
-| `traceContext` | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). |
-| `triggerMetadata` | Metadata about the trigger input for this invocation other than the value itself. |
-| `options` | The options used when registering the function, after they've been validated and with defaults explicitly specified. |
-
-## Logging
+<a name="http-triggers-and-bindings"></a>
-In Azure Functions, you use the `context.log` method to write logs. When you call `context.log()`, your message is written with the default level "information". Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
-
-> [!NOTE]
-> If you use the alternative Node.js `console.log` method, those logs are tracked at the app-level and will *not* be associated with any specific function. It is *highly recommended* to use `context` for logging instead of `console` so that all logs are associated with a specific function.
-
-The following example writes a log at the information level, including the invocation ID:
-
-```javascript
-context.log(`Something has happened. Invocation ID: "${context.invocationId}"`);
-```
-
-### Log levels
-
-In addition to the default `context.log` method, the following methods are available that let you write function logs at specific log levels.
-
-| Method | Description |
-| - | |
-| **context.trace(_message_)** | Writes a trace-level event to the logs. |
-| **context.debug(_message_)** | Writes a debug-level event to the logs. |
-| **context.info(_message_)** | Writes an information-level event to the logs. |
-| **context.warn(_message_)** | Writes a warning-level event to the logs. |
-| **context.error(_message_)** | Writes an error-level event to the logs. |
-
+## HTTP triggers
::: zone pivot="nodejs-model-v3"
-## HTTP triggers and bindings
-
-HTTP and webhook triggers and HTTP output bindings use request and response objects to represent the HTTP messaging.
-
-### Request object
+HTTP and webhook triggers use request and response objects to represent HTTP messages.
-The `context.req` (request) object has the following properties:
-| Property | Description |
-| - | -- |
-| _body_ | An object that contains the body of the request. |
-| _headers_ | An object that contains the request headers. |
-| _method_ | The HTTP method of the request. |
-| _originalUrl_ | The URL of the request. |
-| _params_ | An object that contains the routing parameters of the request. |
-| _query_ | An object that contains the query parameters. |
-| _rawBody_ | The body of the message as a string. |
+HTTP and webhook triggers use `HttpRequest` and `HttpResponse` objects to represent HTTP messages. The classes represent a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch), using Node.js's [`undici`](https://undici.nodejs.org/) package.
-### Response object
-The `context.res` (response) object has the following properties:
+<a name="request-object"></a>
+<a name="accessing-the-request-and-response"></a>
-| Property | Description |
-| | |
-| _body_ | An object that contains the body of the response. |
-| _headers_ | An object that contains the response headers. |
-| _isRaw_ | Indicates that formatting is skipped for the response. |
-| _status_ | The HTTP status code of the response. |
-| _cookies_ | An array of HTTP cookie objects that are set in the response. An HTTP cookie object has a `name`, `value`, and other cookie properties, such as `maxAge` or `sameSite`. |
+### HTTP Request
-### Accessing the request and response
-When you work with HTTP triggers, you can access the HTTP request and response objects in several ways:
+The request can be accessed in several ways:
-+ **From `req` and `res` properties on the `context` object.** In this way, you can use the conventional pattern to access HTTP data from the context object, instead of having to use the full `context.bindings.name` pattern. The following example shows how to access the `req` and `res` objects on the `context`:
+- **As the second argument to your function:**
```javascript
- // You can access your HTTP request off the context ...
- if(context.req.body.emoji === ':pizza:') context.log('Yay!');
- // and also set your HTTP response
- context.res = { status: 202, body: 'You successfully ordered more coffee!' };
+ module.exports = async function (context, request) {
+ context.log(`Http function processed request for url "${request.url}"`);
```
-+ **From the named input and output bindings.** In this way, the HTTP trigger and bindings work the same as any other binding. The following example sets the response object by using a named `response` binding:
+- **From the `context.req` property:**
- ```json
- {
- "type": "http",
- "direction": "out",
- "name": "response"
- }
- ```
```javascript
- context.bindings.response = { status: 201, body: "Insert succeeded." };
+ module.exports = async function (context, request) {
+ context.log(`Http function processed request for url "${context.req.url}"`);
```
-+ **_[Response only]_ By calling `context.res.send(body?: any)`.** An HTTP response is created with input `body` as the response body. `context.done()` is implicitly called.
-+ **_[Response only]_ By returning the response.** A special binding name of `$return` allows you to assign the function's return value to the output binding. The following HTTP output binding defines a `$return` output parameter:
+- **From the named input bindings:** This option works the same as any non-HTTP binding. The binding name in `function.json` must match the key on `context.bindings`, which is "request1" in the following example:
```json {
- "type": "http",
- "direction": "out",
- "name": "$return"
+ "name": "request1",
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "anonymous",
+ "methods": [
+ "get",
+ "post"
+ ]
}
- ```
-
+ ```
```javascript
- return { status: 201, body: "Insert succeeded." };
+ module.exports = async function (context, request) {
+ context.log(`Http function processed request for url "${context.bindings.request1.url}"`);
```
-Request and response keys are in lowercase.
+The `HttpRequest` object has the following properties:
+
+| Property | Type | Description |
+| - | | -- |
+| **`method`** | `string` | HTTP request method used to invoke this function. |
+| **`url`** | `string` | Request URL. |
+| **`headers`** | `Record<string, string>` | HTTP request headers. This object is case sensitive. It's recommended to use `request.getHeader('header-name')` instead, which is case insensitive. |
+| **`query`** | `Record<string, string>` | Query string parameter keys and values from the URL. |
+| **`params`** | `Record<string, string>` | Route parameter keys and values. |
+| **`user`** | `HttpRequestUser \| null` | Object representing logged-in user, either through Functions authentication, SWA Authentication, or null when no such user is logged in. |
+| **`body`** | `Buffer \| string \| any` | If the media type is "application/octet-stream" or "multipart/*", `body` is a Buffer. If the value is a JSON parse-able string, `body` is the parsed object. Otherwise, `body` is a string. |
+| **`rawBody`** | `Buffer \| string` | If the media type is "application/octet-stream" or "multipart/*", `rawBody` is a Buffer. Otherwise, `rawBody` is a string. The only difference between `body` and `rawBody` is that `rawBody` doesn't JSON parse a string body. |
+| **`bufferBody`** | `Buffer` | The body as a buffer. |
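For example, a minimal v3 sketch that combines a few of these access patterns (the `name` query parameter and body shape are hypothetical):

```javascript
module.exports = async function (context, request) {
    context.log(`Request method: ${request.method}, URL: ${request.url}`);

    // Query string parameters and a parsed JSON body are plain objects
    const name = request.query.name || (request.body && request.body.name) || 'world';

    context.res = { status: 200, body: `Hello, ${name}!` };
};
```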
::: zone-end ::: zone pivot="nodejs-model-v4"
-## HTTP triggers and bindings
-
-HTTP triggers, webhook triggers, and HTTP output bindings use `HttpRequest` and `HttpResponse` objects to represent HTTP messages. The classes represent a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch), using Node.js's [`undici`](https://undici.nodejs.org/) package.
-
-### Request
-
-The request can be accessed as the first argument to your handler for an http triggered function.
+The request can be accessed as the first argument to your handler for an HTTP triggered function.
```javascript async (request, context) => {
The `HttpRequest` object has the following properties:
| Property | Type | Description | | -- | | -- |
-| **`method`** | `string` | HTTP request method used to invoke this function |
-| **`url`** | `string` | Request URL |
-| **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP request headers |
-| **`query`** | [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams) | Query string parameter keys and values from the URL |
-| **`params`** | `HttpRequestParams` | Route parameter keys and values |
+| **`method`** | `string` | HTTP request method used to invoke this function. |
+| **`url`** | `string` | Request URL. |
+| **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP request headers. |
+| **`query`** | [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams) | Query string parameter keys and values from the URL. |
+| **`params`** | `Record<string, string>` | Route parameter keys and values. |
| **`user`** | `HttpRequestUser | null` | Object representing logged-in user, either through Functions authentication, SWA Authentication, or null when no such user is logged in. |
-| **`body`** | [`ReadableStream | null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream |
-| **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already |
+| **`body`** | [`ReadableStream \| null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream. |
+| **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already. |
In order to access a request or response's body, the following methods can be used:
In order to access a request or response's body, the following methods can be us
> [!NOTE] > The body functions can be run only once; subsequent calls will resolve with empty strings/ArrayBuffers.
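As a sketch of how this fits together in the v4 model, the following hypothetical function registers an HTTP trigger, reads a JSON body once, and echoes it back (the function name and payload shape are illustrative; the registration shape follows the v4 programming model):

```javascript
const { app } = require('@azure/functions');

app.http('echoJson', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        // The body stream can be consumed only once per request.
        const data = await request.json();
        context.log(`Received payload with ${Object.keys(data).length} keys`);
        return { jsonBody: { received: data } };
    }
});
```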
-### Response
+
+<a name="response-object"></a>
+
+### HTTP Response
-The response can be set in multiple different ways.
+
+The response can be set in several ways:
-+ **As a simple interface with type `HttpResponseInit`**: This option is the most concise way of returning responses.
+- **Set the `context.res` property:**
+
+ ```javascript
+ module.exports = async function (context, request) {
+ context.res = { body: `Hello, world!` };
+ ```
+
+- **Return the response:** If your function is async and you set the binding name to `$return` in your `function.json`, you can return the response directly instead of setting it on `context`.
+
+ ```json
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ }
+ ```
+
+ ```javascript
+ module.exports = async function (context, request) {
+ return { body: `Hello, world!` };
+ ```
+
+- **Set the named output binding:** This option works the same as any non-HTTP binding. The binding name in `function.json` must match the key on `context.bindings`, which is "response1" in the following example:
+
+ ```json
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "response1"
+ }
+ ```
+
+ ```javascript
+ module.exports = async function (context, request) {
+ context.bindings.response1 = { body: `Hello, world!` };
+ ```
+
+- **Call `context.res.send()`:** This option is deprecated. It implicitly calls `context.done()` and can't be used in an async function.
+
+ ```javascript
+ module.exports = function (context, request) {
+ context.res.send(`Hello, world!`);
+ ```
+
+If you create a new object when setting the response, that object must match the `HttpResponseSimple` interface, which has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| **`headers`** | `Record<string, string>` (optional) | HTTP response headers. |
+| **`cookies`** | `Cookie[]` (optional) | HTTP response cookies. |
+| **`body`** | `any` (optional) | HTTP response body. |
+| **`statusCode`** | `number` (optional) | HTTP response status code. If not set, defaults to `200`. |
+| **`status`** | `number` (optional) | The same as `statusCode`. This property is ignored if `statusCode` is set. |
+
+You can also modify the `context.res` object without overwriting it. The default `context.res` object uses the `HttpResponseFull` interface, which supports the following methods in addition to the `HttpResponseSimple` properties:
+
+| Method | Description |
+| -- | -- |
+| **`status()`** | Sets the status. |
+| **`setHeader()`** | Sets a header field. NOTE: `res.set()` and `res.header()` are also supported and do the same thing. |
+| **`getHeader()`** | Get a header field. NOTE: `res.get()` is also supported and does the same thing. |
+| **`removeHeader()`** | Removes a header. |
+| **`type()`** | Sets the "content-type" header. |
+| **`send()`** | This method is deprecated. It sets the body and calls `context.done()` to indicate a sync function is finished. NOTE: `res.end()` is also supported and does the same thing. |
+| **`sendStatus()`** | This method is deprecated. It sets the status code and calls `context.done()` to indicate a sync function is finished. |
+| **`json()`** | This method is deprecated. It sets the "content-type" to "application/json", sets the body, and calls `context.done()` to indicate a sync function is finished. |
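For example, an async function in the v3 model might adjust the default `context.res` object through these helper methods, as in the following sketch (the header name and body text are illustrative):

```javascript
module.exports = async function (context, request) {
    // Modify the default HttpResponseFull object instead of replacing it.
    context.res.status(201);
    context.res.setHeader('x-request-handled', 'true');
    context.res.type('text/plain');
    context.res.body = 'Created';
};
```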
+++
+The response can be set in several ways:
+
+- **As a simple interface with type `HttpResponseInit`:** This option is the most concise way of returning responses.
```javascript return { body: `Hello, world!` };
The response can be set in multiple different ways.
| Property | Type | Description | | -- | - | -- |
- | **`body`** | `BodyInit` (optional) | HTTP response body as one of [`ArrayBuffer`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`AsyncIterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`Blob`](https://developer.mozilla.org/docs/Web/API/Blob), [`FormData`](https://developer.mozilla.org/docs/Web/API/FormData), [`Iterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`NodeJS.ArrayBufferView`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams), `null`, or `string` |
- | **`jsonBody`** | `any` (optional) | A JSON-serializable HTTP Response body. If set, the `HttpResponseInit.body` property is ignored in favor of this property |
+ | **`body`** | `BodyInit` (optional) | HTTP response body as one of [`ArrayBuffer`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`AsyncIterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`Blob`](https://developer.mozilla.org/docs/Web/API/Blob), [`FormData`](https://developer.mozilla.org/docs/Web/API/FormData), [`Iterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`NodeJS.ArrayBufferView`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams), `null`, or `string`. |
+ | **`jsonBody`** | `any` (optional) | A JSON-serializable HTTP Response body. If set, the `HttpResponseInit.body` property is ignored in favor of this property. |
| **`status`** | `number` (optional) | HTTP response status code. If not set, defaults to `200`. |
- | **`headers`** | [`HeadersInit`](https://developer.mozilla.org/docs/Web/API/Headers) (optional) | HTTP response headers |
- | **`cookies`** | `Cookie[]` (optional) | HTTP response cookies |
+ | **`headers`** | [`HeadersInit`](https://developer.mozilla.org/docs/Web/API/Headers) (optional) | HTTP response headers. |
+ | **`cookies`** | `Cookie[]` (optional) | HTTP response cookies. |
-+ **As a class with type `HttpResponse`**: This option provides helper methods for reading and modifying various parts of the response like the headers.
+- **As a class with type `HttpResponse`:** This option provides helper methods for reading and modifying various parts of the response like the headers.
```javascript const response = new HttpResponse({ body: `Hello, world!` });
The response can be set in multiple different ways.
| Property | Type | Description | | -- | - | -- |
- | **`status`** | `number` | HTTP response status code |
- | **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP response headers |
- | **`cookies`** | `Cookie[]` | HTTP response cookies |
- | **`body`** | [`ReadableStream | null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream |
- | **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already |
+ | **`status`** | `number` | HTTP response status code. |
+ | **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP response headers. |
+ | **`cookies`** | `Cookie[]` | HTTP response cookies. |
+ | **`body`** | [`ReadableStream \| null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream. |
+ | **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already. |
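Tying the first option together, a v4 handler might return an `HttpResponseInit` object with a JSON body, an explicit status, and a custom header, as in the following sketch (the header name and payload are illustrative):

```javascript
async (request, context) => {
    // Returning a plain HttpResponseInit object is the most concise option.
    return {
        status: 201,
        jsonBody: { message: 'Created' },
        headers: { 'x-request-handled': 'true' }
    };
};
```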
::: zone-end
The response can be set in multiple different ways.
By default, Azure Functions automatically monitors the load on your application and creates more host instances for Node.js as needed. Azure Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](event-driven-scaling.md).
-This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes.
-
-By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This behavior makes it less likely that a CPU-intensive function blocks other functions from running.
+This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes. You can increase the number of worker processes per host from the default of 1 up to a max of 10 by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This behavior makes it less likely that a CPU-intensive function blocks other functions from running. The setting applies to each host that Azure Functions creates when scaling out your application to meet demand.
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Azure Functions creates when scaling out your application to meet demand.
+> [!WARNING]
+> Use the `FUNCTIONS_WORKER_PROCESS_COUNT` setting with caution. Multiple processes running in the same instance can lead to unpredictable behavior and increase function load times. If you use this setting, it's *highly recommended* to offset these downsides by [running from a package file](./run-functions-from-deployment-package.md).
## Node version
az functionapp config set --linux-fx-version "node|18" --name "<MY_APP_NAME>" --
To learn more about Azure Functions runtime support policy, refer to this [article](./language-support-policy.md).
+<a name="access-environment-variables-in-code"></a>
+ ## Environment variables
-Add your own environment variables to a function app, in both your local and cloud environments, such as operational secrets (connection strings, keys, and endpoints) or environmental settings (such as profiling variables). Access these settings using `process.env` in your function code.
+Environment variables can be useful for operational secrets (connection strings, keys, endpoints, etc.) or environmental settings such as profiling variables. You can add environment variables in both your local and cloud environments and access them through `process.env` in your function code.
+
+The following example logs the `WEBSITE_SITE_NAME` environment variable:
++
+```javascript
+module.exports = async function (context) {
+ context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`);
+}
+```
+++
+```javascript
+async function timerTrigger1(myTimer, context) {
+ context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`);
+}
+```
+ ### In local development environment
When you run locally, your functions project includes a [`local.settings.json` f
"Values": { "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "node",
- "translatorTextEndPoint": "https://api.cognitive.microsofttranslator.com/",
- "translatorTextKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "languageWorkers__node__arguments": "--prof"
+ "CUSTOM_ENV_VAR_1": "hello",
+ "CUSTOM_ENV_VAR_2": "world"
} } ```
When you run in Azure, the function app lets you set and use [Application settin
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
-### Access environment variables in code
+### Worker environment variables
-Access application settings as environment variables using `process.env`, as shown here in the call to `context.log()` where we log the `WEBSITE_SITE_NAME` environment variable:
+There are several Functions environment variables specific to Node.js:
-
-```javascript
-async function timerTrigger1(context, myTimer) {
- context.log("WEBSITE_SITE_NAME: " + process.env["WEBSITE_SITE_NAME"]);
-}
-```
+#### languageWorkers__node__arguments
+This setting allows you to specify custom arguments when starting your Node.js process. It's most often used locally to start the worker in debug mode, but can also be used in Azure if you need custom arguments.
+> [!WARNING]
+> If possible, avoid using `languageWorkers__node__arguments` in Azure because it can have a negative effect on cold start times. Rather than using pre-warmed workers, the runtime has to start a new worker from scratch with your custom arguments.
-```javascript
-async function timerTrigger1(myTimer, context) {
- context.log("WEBSITE_SITE_NAME: " + process.env["WEBSITE_SITE_NAME"]);
-}
-```
+#### logging__logLevel__Worker
+This setting adjusts the default log level for Node.js-specific worker logs. By default, only warning or error logs are shown, but you can set it to `information` or `debug` to help diagnose issues with the Node.js worker. For more information, see [configuring log levels](./configure-monitoring.md#configure-log-levels).
## <a name="ecmascript-modules"></a>ECMAScript modules (preview)
To use ES modules in a function, change its filename to use a `.mjs` extension.
```js import { v4 as uuidv4 } from 'uuid';
-async function httpTrigger1(context, req) {
+async function httpTrigger1(context, request) {
context.res.body = uuidv4(); };
export default httpTrigger1;
```js import { v4 as uuidv4 } from 'uuid';
-async function httpTrigger1(req, context) {
+async function httpTrigger1(request, context) {
context.res.body = uuidv4(); }; ```
By default, a JavaScript function is executed from `index.js`, a file that share
`scriptFile` can be used to get a folder structure that looks like the following example: ```
-FunctionApp
- | - host.json
- | - myNodeFunction
+<project_root>/
+ | - node_modules/
+ | - myFirstFunction/
| | - function.json
- | - lib
+ | - lib/
| | - sayHello.js
- | - node_modules
- | | - ... packages ...
+ | - host.json
| - package.json ```
-The `function.json` for `myNodeFunction` should include a `scriptFile` property pointing to the file with the exported function to run.
+The `function.json` for `myFirstFunction` should include a `scriptFile` property pointing to the file with the exported function to run.
```json {
The `function.json` for `myNodeFunction` should include a `scriptFile` property
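For reference, the `lib/sayHello.js` file from the folder structure above might export its function as in the following sketch (the log message is illustrative):

```javascript
// lib/sayHello.js - the file referenced by the scriptFile property in function.json
module.exports = async function (context) {
    context.log('Hello from a custom scriptFile location!');
};
```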
### Using `entryPoint`
-In `scriptFile` (or `index.js`), a function must be exported using `module.exports` in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named `run`, or the export named `index`. The following example uses `entryPoint` in `function.json`:
+In the v3 model, a function must be exported using `module.exports` in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named `run`, or the export named `index`. The following example sets `entryPoint` in `function.json` to a custom value, "logHello":
```json {
- "entryPoint": "logFoo",
+ "entryPoint": "logHello",
"bindings": [ ... ] } ```
-In Functions v2.x or higher, which supports the `this` parameter in user functions, the function code could then be as in the following example:
- ```javascript
-class MyObj {
- constructor() {
- this.foo = 1;
- };
-
- async logFoo(context) {
- context.log("Foo is " + this.foo);
- }
+async function logHello(context) {
+ context.log('Hello, world!');
}
-const myObj = new MyObj();
-module.exports = myObj;
+module.exports = { logHello };
```
-In this example, it's important to note that although an object is being exported, there are no guarantees for preserving state between executions.
- ::: zone-end ## Local debugging
-When started with the `--inspect` parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions runtime 2.x or higher, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting `languageWorkers:node:arguments = <args>`.
-
-To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file) file and attach a debugger to port 5858.
-
-When debugging using VS Code, the `--inspect` parameter is automatically added using the `port` value in the project's launch.json file.
-
-In runtime version 1.x, setting `languageWorkers:node:arguments` doesn't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools.
-
-> [!NOTE]
-> You can only configure `languageWorkers:node:arguments` when running the function app locally.
-
-## Testing
-
-Testing your functions includes:
+It's recommended to use VS Code for local debugging, which starts your Node.js process in debug mode automatically and attaches to the process for you. For more information, see [run the function locally](./create-first-function-vs-code-node.md#run-the-function-locally).
-* **HTTP end-to-end**: To test a function from its HTTP endpoint, you can use any tool that can make an HTTP request such as cURL, Postman, or JavaScript's fetch method.
-* **Integration testing**: Integration test includes the function app layer. This testing means you need to control the parameters into the function including the request and the context. The context is unique to each kind of trigger and means you need to know the incoming and outgoing bindings for that [trigger type](functions-triggers-bindings.md?tabs=javascript#supported-bindings).
--
-Learn more about integration testing and mocking the context layer with an experimental GitHub repo, [https://github.com/anthonychu/azure-functions-test-utils](https://github.com/anthonychu/azure-functions-test-utils).
--
-* **Unit testing**: Unit testing is performed within the function app. You can use any tool that can test JavaScript, such as Jest or Mocha.
+If you're using a different tool for debugging or want to start your Node.js process in debug mode manually, add `"languageWorkers__node__arguments": "--inspect"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file). The `--inspect` argument tells Node.js to listen for a debug client, on port 9229 by default. For more information, see the [Node.js debugging guide](https://nodejs.org/en/docs/guides/debugging-getting-started).
## TypeScript
func azure functionapp publish <APP_NAME>
In this command, replace `<APP_NAME>` with the name of your function app.
-## Considerations for JavaScript functions
+<a name="considerations-for-javascript-functions"></a>
-When you work with JavaScript functions, be aware of the considerations in the following sections.
+## Recommendations
+
+This section describes several impactful patterns for Node.js apps that we recommend you follow.
### Choose single-vCPU App Service plans When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs JavaScript functions more efficiently on single-vCPU VMs, and using larger VMs doesn't produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json).
-### Cold Start
+<a name="cold-start"></a>
+
+### Run from a package file
+
+When you develop Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the first time your function app starts after a period of inactivity, taking longer to start up. For JavaScript apps with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way.
-When you develop Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the first time your function app starts after a period of inactivity, taking longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way.
+<a name="connection-limits"></a>
-### Connection Limits
+### Use a single static client
-When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation. Instead, create a single, static client in the global scope. For more information, see [managing connections in Azure Functions](manage-connections.md).
+When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation because you can hit connection limits. Instead, create a single, static client in the global scope. For more information, see [managing connections in Azure Functions](manage-connections.md).
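The following sketch shows the pattern with Node.js's built-in `https` agent; any service-specific SDK client follows the same shape (the endpoint URL is an illustrative placeholder):

```javascript
const https = require('https');

// Created once in the global scope and reused across invocations on the same worker.
const agent = new https.Agent({ keepAlive: true });

module.exports = async function (context) {
    // Reuse the shared agent instead of creating a new connection pool per invocation.
    const status = await new Promise((resolve, reject) => {
        https.get('https://example.com/health', { agent }, (res) => {
            res.resume(); // drain the body so the socket can be reused
            resolve(res.statusCode);
        }).on('error', reject);
    });
    context.log(`Upstream health check returned ${status}`);
};
```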
::: zone pivot="nodejs-model-v3"
When you use a service-specific client in an Azure Functions application, don't
When writing Azure Functions in JavaScript, you should write code using the `async` and `await` keywords. Writing code using `async` and `await` instead of callbacks or `.then` and `.catch` with Promises helps avoid two common problems: - Throwing uncaught exceptions that [crash the Node.js process](https://nodejs.org/api/process.html#process_warning_using_uncaughtexception_correctly), potentially affecting the execution of other functions.
+ - Unexpected behavior, such as missing logs from `context.log`, caused by asynchronous calls that aren't properly awaited.
-In the following example, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope crashed the entire process (issue #1). Calling the 1.x `context.done()` outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x `context.done()` too early results in missing log entries starting with `Data from file:`.
+In the following example, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Calling the deprecated `context.done()` method outside of the scope of the callback can signal the function is finished before the file is read (issue #2). In this example, calling `context.done()` too early results in missing log entries starting with `Data from file:`.
```javascript // NOT RECOMMENDED PATTERN
module.exports = function (context) {
} ```
-Using the `async` and `await` keywords helps avoid both of these errors. You should use the Node.js utility function [`util.promisify`](https://nodejs.org/api/util.html#util_util_promisify_original) to turn error-first callback-style functions into awaitable functions.
+Use the `async` and `await` keywords to help avoid both of these issues. Most APIs in the Node.js ecosystem have been converted to support promises in some form. For example, starting in v14, Node.js provides an `fs/promises` API to replace the `fs` callback API.
-In the following example, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised an exception. The `await` keyword means that steps following `readFileAsync` only execute after `readFile` is complete. With `async` and `await`, you also don't need to call the `context.done()` callback.
+In the following example, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised the exception. The `await` keyword means that steps following `readFile` only execute after it's complete. With `async` and `await`, you also don't need to call the `context.done()` callback.
```javascript // Recommended pattern
-const fs = require('fs');
-const util = require('util');
-const readFileAsync = util.promisify(fs.readFile);
+const fs = require('fs/promises');
module.exports = async function (context) { let data; try {
- data = await readFileAsync('./hello.txt');
+ data = await fs.readFile('./hello.txt');
} catch (err) { context.log.error('ERROR', err); // This rethrown exception will be handled by the Functions Runtime and will only fail the individual invocation
module.exports = async function (context) {
For more information, see the following resources:
-+ [Best practices for Azure Functions](functions-best-practices.md)
-+ [Azure Functions developer reference](functions-reference.md)
-+ [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-
-[`func azure functionapp publish`]: functions-run-local.md#project-file-deployment
+- [Best practices for Azure Functions](functions-best-practices.md)
+- [Azure Functions developer reference](functions-reference.md)
+- [Azure Functions triggers and bindings](functions-triggers-bindings.md)
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
Title: Manage your Azure Maps account's pricing tier | Microsoft Azure Maps
+ Title: Manage your Azure Maps account's pricing tier
+ description: You can use the Azure portal to manage your Microsoft Azure Maps account and its pricing tier.
# Manage the pricing tier of your Azure Maps account
-You can manage the pricing tier of your Azure Maps account through the Azure portal. You can also view or change your account's pricing tier after you create an [account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+You can manage the pricing tier of your Azure Maps account through the Azure portal. You can also view or change your account's pricing tier after you create an [Azure account].
-Get more information about [choosing the right pricing tier in Azure Maps](./choose-pricing-tier.md).
+Get more information about [choosing the right pricing tier in Azure Maps].
->[!NOTE]
->Switching to Gen 1 pricing tier is not available for Gen 2 Azure Maps Creator customers. Gen 1 Azure Maps Creator will be deprecated on 8/6/2021.
+> [!NOTE]
+> Switching to Gen 1 pricing tier is not available for Gen 2 Azure Maps Creator customers. Gen 1 Azure Maps Creator will be deprecated on 8/6/2021.
## View your pricing tier
To view your chosen pricing tier, navigate to the **Pricing Tier** option in the
## Change a pricing tier
-After you create your Azure Maps account, you can upgrade or downgrade the pricing tier for your Azure Maps account. To upgrade or downgrade, navigate to the **Pricing Tier** option in the settings menu. Select the pricing tier from drop down list. Note ΓÇô current pricing tier will be default selection. Select the **Save** button to save your chosen pricing tier option.
+After you create your Azure Maps account, you can upgrade or downgrade its pricing tier. To upgrade or downgrade, navigate to the **Pricing Tier** option in the settings menu. Select the pricing tier from the drop-down list; the current pricing tier is the default selection. Select the **Save** button to save your chosen pricing tier option.
> [!NOTE] > You don't have to generate new subscription keys or client ID (for Azure AD authentication) if you upgrade or downgrade the pricing tier for your Azure Maps account. - :::image type="content" source="./media/how-to-manage-pricing-tier/change-pricing-tier.png" border="true" alt-text="Change a pricing tier"::: -- ## Next steps Learn how to see the API usage metrics for your Azure Maps account:
-> [!div class="nextstepaction"]
-> [View usage metrics](./how-to-view-api-usage.md)
+> [!div class="nextstepaction"]
+> [View usage metrics]
+
+[Azure account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[View usage metrics]: how-to-view-api-usage.md
+[choosing the right pricing tier in Azure Maps]: choose-pricing-tier.md
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
Title: Request real-time and forecasted weather data using Azure Maps Weather services
+ Title: Request real-time and forecasted weather data using Azure Maps Weather services
+ description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services
# Request real-time and forecasted weather data using Azure Maps Weather services
-Azure Maps [Weather services](/rest/api/maps/weather) are a set of RESTful APIs that allows developers to integrate highly dynamic historical, real-time, and forecasted weather data and visualizations into their solutions. In this article, we'll show you how to request both real-time and forecasted weather data.
+Azure Maps [Weather services] are a set of RESTful APIs that allows developers to integrate highly dynamic historical, real-time, and forecasted weather data and visualizations into their solutions.
-In this article youΓÇÖll learn, how to:
+This article demonstrates how to request both real-time and forecasted weather data:
-* Request real-time (current) weather data using the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions).
-* Request severe weather alerts using the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts).
-* Request daily forecasts using the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast).
-* Request hourly forecasts using the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast).
-* Request minute by minute forecasts using the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast).
+* Request real-time (current) weather data using the [Get Current Conditions API].
+* Request severe weather alerts using the [Get Severe Weather Alerts API].
+* Request daily forecasts using the [Get Daily Forecast API].
+* Request hourly forecasts using the [Get Hourly Forecast API].
+* Request minute by minute forecasts using the [Get Minute Forecast API].
This video provides examples for making REST calls to Azure Maps Weather services.
This video provides examples for making REST calls to Azure Maps Weather service
* A [subscription key] >[!IMPORTANT]
- >The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast)requires a Gen 1 (S1) or Gen 2 pricing tier. All other APIs require an S0 pricing tier key.
+ >The [Get Minute Forecast API] requires a Gen 1 (S1) or Gen 2 pricing tier. All other APIs require an S0 pricing tier key.
-This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment.
+This tutorial uses the [Postman] application, but you may choose a different API development environment.
## Request real-time weather data
-The [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) returns detailed weather conditions such as precipitation, temperature, and wind for a given coordinate location. Also, observations from the past 6 or 24 hours for a particular location can be retrieved. The response includes details like observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, and temperature. RealFeelΓäó Temperature and ultraviolet(UV) index are also returned.
+The [Get Current Conditions API] returns detailed weather conditions such as precipitation, temperature, and wind for a given coordinate location. Also, observations from the past 6 or 24 hours for a particular location can be retrieved. The response includes details like observation date and time, description of weather conditions, weather icon, precipitation indicator flags, and temperature. RealFeel™ Temperature and ultraviolet (UV) index are also returned.
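If you'd rather call the API from code than from Postman, the following sketch uses the same request URL with the `fetch` API built into Node.js 18 and later (replace the subscription key placeholder with your own key):

```javascript
// Sketch: request current conditions for Seattle, WA (47.60357, -122.32945).
// Requires Node.js 18+ for the built-in fetch API.
const url = 'https://atlas.microsoft.com/weather/currentConditions/json'
    + '?api-version=1.0&query=47.60357,-122.32945'
    + '&subscription-key={Your-Azure-Maps-Subscription-key}';

async function getCurrentConditions() {
    const response = await fetch(url);
    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
}

getCurrentConditions();
```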
-In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) to retrieve current weather conditions at coordinates located in Seattle, WA.
+In this example, you use the [Get Current Conditions API] to retrieve current weather conditions at coordinates located in Seattle, WA.
1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weat
https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Your-Azure-Maps-Subscription-key} ```
-3. Click the blue **Send** button. The response body contains current weather information.
+3. Select the blue **Send** button. The response body contains current weather information.
```json {
In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weat
## Request severe weather alerts
-[Azure Maps Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts) returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service can return details such as alert type, category, level, and detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
+Azure Maps [Get Severe Weather Alerts API] returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service returns details like alert type, category, and level, along with detailed descriptions of the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves, or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
-In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/weather/getsevereweatheralerts) to retrieve current weather conditions at coordinates located in Cheyenne, WY.
+In this example, you use the [Get Severe Weather Alerts API] to retrieve current weather conditions at coordinates located in Cheyenne, WY.
>[!NOTE] >This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location.
In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w
https://atlas.microsoft.com/weather/severe/alerts/json?api-version=1.0&query=41.161079,-104.805450&subscription-key={Your-Azure-Maps-Subscription-key} ```
-3. Click the blue **Send** button. If there are no severe weather alerts, the response body will contain an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response:
+3. Select the blue **Send** button. If there are no severe weather alerts, the response body contains an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response:
```json {
In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w
## Request daily weather forecast data
-The [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) returns detailed daily weather forecast such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request for five days by setting `duration=5`.
+The [Get Daily Forecast API] returns a detailed daily weather forecast, such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request a five-day forecast by setting `duration=5`.
>[!IMPORTANT] >In the S0 pricing tier, you can request daily forecast for the next 1, 5, 10, and 15 days. In either Gen 1 (S1) or Gen 2 pricing tier, you can request daily forecast for the next 25 days, and 45 days.
-In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
+In this example, you use the [Get Daily Forecast API] to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/
https://atlas.microsoft.com/weather/forecast/daily/json?api-version=1.0&query=47.60357,-122.32945&duration=5&subscription-key={Your-Azure-Maps-Subscription-key} ```
-3. Click the blue **Send** button. The response body contains the five-day weather forecast data. For the sake of brevity, the JSON response below shows the forecast for the first day.
+3. Select the blue **Send** button. The response body contains the five-day weather forecast data. For the sake of brevity, the following JSON response shows the forecast for the first day.
```json {
In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/
## Request hourly weather forecast data
-The [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) returns detailed weather forecast by the hour for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), and 240 hours (10 days) for the given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
+The [Get Hourly Forecast API] returns detailed weather forecast by the hour for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), and 240 hours (10 days) for the given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
>[!IMPORTANT] >In the S0 pricing tier, you can request hourly forecast for the next 1, 12, 24 hours (1 day), and 72 hours (3 days). In either Gen 1 (S1) or Gen 2 pricing tier, you can request hourly forecast for the next 120 (5 days) and 240 hours (10 days).
-In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
+In this example, you use the [Get Hourly Forecast API] to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather
https://atlas.microsoft.com/weather/forecast/hourly/json?api-version=1.0&query=47.60357,-122.32945&duration=12&subscription-key={Your-Azure-Maps-Subscription-key} ```
-3. Click the blue **Send** button. The response body contains weather forecast data for the next 12 hours. For the sake of brevity, the JSON response below shows the forecast for the first hour.
+3. Select the blue **Send** button. The response body contains weather forecast data for the next 12 hours. For the sake of brevity, the following JSON response shows the forecast for the first hour.
```json {
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather
## Request minute-by-minute weather forecast data
- The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) returns minute-by-minute forecasts for a given location for the next 120 minutes. Users can request weather forecasts in intervals of 1, 5 and 15 minutes. The response includes details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value (dBZ).
+ The [Get Minute Forecast API] returns minute-by-minute forecasts for a given location for the next 120 minutes. Users can request weather forecasts in intervals of 1, 5 and 15 minutes. The response includes details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value (dBZ).
-In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests that the forecast be given at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes.
+In this example, you use the [Get Minute Forecast API] to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests the forecast at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes.
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather
https://atlas.microsoft.com/weather/forecast/minute/json?api-version=1.0&query=47.60357,-122.32945&interval=15&subscription-key={Your-Azure-Maps-Subscription-key} ```
-3. Click the blue **Send** button. The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals.
+3. Select the blue **Send** button. The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals.
```json {
In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather
## Next steps > [!div class="nextstepaction"]
-> [Weather services in Azure Maps](./weather-services-concepts.md)
+> [Weather service concepts]
> [!div class="nextstepaction"]
-> [Azure Maps Weather services](/rest/api/maps/weather)
+> [Weather services]
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Get Current Conditions API]: /rest/api/maps/weather/getcurrentconditions
+[Get Daily Forecast API]: /rest/api/maps/weather/getdailyforecast
+[Get Hourly Forecast API]: /rest/api/maps/weather/gethourlyforecast
+[Get Minute Forecast API]: /rest/api/maps/weather/getminuteforecast
+[Get Severe Weather Alerts API]: /rest/api/maps/weather/getsevereweatheralerts
+[Postman]: https://www.postman.com/
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Weather service concepts]: weather-services-concepts.md
+[Weather services]: /rest/api/maps/weather
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md
Some managed identity benefits are:
### Host a daemon on non-Azure resources
-When running on a non-Azure environment, managed identities aren't available. As such, you must configure a service principal through an Azure AD application registration for the daemon application.
+Managed identities are available only when running in an Azure environment. For daemons hosted on non-Azure resources, you must instead configure a service principal through an Azure AD application registration for the daemon application.
#### Create new application registration
-If you've already created your application registration, go to [Assign delegated API permissions](#assign-delegated-api-permissions).
+If you have already created your application registration, go to [Assign delegated API permissions](#assign-delegated-api-permissions).
To create a new application registration:
To create a client secret:
:::image type="content" border="true" source="./media/how-to-manage-authentication/new-client-secret-add.png" alt-text="Add new client secret.":::
-5. Copy the secret and store it securely in a service such as Azure Key Vault. Also, We'll use the secret in the [Request token with Managed Identity](#request-a-token-with-managed-identity) section of this article.
+5. Copy the secret and store it securely in a service such as Azure Key Vault. You use this secret in the [Request token with Managed Identity](#request-a-token-with-managed-identity) section of this article.
:::image type="content" border="true" source="./media/how-to-manage-authentication/copy-client-secret.png" alt-text="Copy client secret.":::
To acquire the access token:
:::image type="content" border="true" source="./media/how-to-manage-authentication/get-token-params.png" alt-text="Copy token parameters.":::
-We'll use the [Postman](https://www.postman.com/) application to create the token request, but you can use a different API development environment.
+This article uses the [Postman](https://www.postman.com/) application to create the token request, but you can use a different API development environment.
1. In the Postman app, select **New**.
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
description: Learn which metrics are commonly used for autoscaling your cloud se
Previously updated : 04/22/2022 Last updated : 04/17/2023 # Azure Monitor autoscaling common metrics
+Azure Monitor autoscaling allows you to scale the number of running instances in or out, based on telemetry data or metrics. Scaling can be based on any metric, even metrics from a different resource. For example, scale a Virtual Machine Scale Set based on the amount of traffic on a firewall.
-Azure Monitor autoscaling allows you to scale the number of running instances up or down, based on telemetry data, also known as metrics. This article describes common metrics that you might want to use. In the Azure portal, you can choose the metric of the resource to scale by. You can also choose any metric from a different resource to scale by.
+This article describes metrics that are commonly used to trigger scale events.
-Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md). Other Azure services use different scaling methods.
+Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
-## Compute metrics for Resource Manager-based VMs
+For all resources, you can get a list of the available metrics by using PowerShell or the Azure CLI:
-By default, Azure Resource Manager-based virtual machines and virtual machine scale sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and virtual machine scale sets, the Azure Diagnostics extension also emits guest-OS performance counters. These counters are commonly known as "guest-OS metrics." You use all these metrics in autoscale rules.
+```azurepowershell
+Get-AzMetricDefinition -ResourceId <resource_id>
+```
-You can use the `Get MetricDefinitions` API/PoSH/CLI to view the metrics available for your Virtual Machine Scale Sets resource.
+```azurecli
+az monitor metrics list-definitions --resource <resource_id>
+```
+
+## Compute metrics for Resource Manager-based VMs
-If you're using virtual machine scale sets and you don't see a particular metric listed, it's likely *disabled* in your Diagnostics extension.
+By default, Azure Resource Manager-based virtual machines and Virtual Machine Scale Sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and Virtual Machine Scale Sets, the Azure Diagnostics extension also emits guest-OS performance counters. These counters are commonly known as "guest-OS metrics." You use all these metrics in autoscale rules.
+
+If you're using Virtual Machine Scale Sets and you don't see a particular metric listed, it's likely *disabled* in your Diagnostics extension.
If a particular metric isn't being sampled or transferred at the frequency you want, you can update the diagnostics configuration.
If either preceding case is true, see [Use PowerShell to enable Azure Diagnostic
### Host metrics for Resource Manager-based Windows and Linux VMs
-The following host-level metrics are emitted by default for Azure VM and virtual machine scale sets in both Windows and Linux instances. These metrics describe your Azure VM but are collected from the Azure VM host rather than via agent installed on the guest VM. You can use these metrics in autoscaling rules.
+The following host-level metrics are emitted by default for Azure VM and Virtual Machine Scale Sets in both Windows and Linux instances. These metrics describe your Azure VM but are collected from the Azure VM host rather than via agent installed on the guest VM. You can use these metrics in autoscaling rules.
- [Host metrics for Resource Manager-based Windows and Linux VMs](../essentials/metrics-supported.md#microsoftcomputevirtualmachines)-- [Host metrics for Resource Manager-based Windows and Linux virtual machine scale sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
+- [Host metrics for Resource Manager-based Windows and Linux Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
### Guest OS metrics for Resource Manager-based Windows VMs
-When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The Diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale off of metrics that aren't emitted by default.
-
-You can generate a list of the metrics by using the following command in PowerShell.
-
-```
-Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
-```
+When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The Diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale using metrics that aren't emitted by default.
You can create an alert for the following metrics:
You can create an alert for the following metrics:
When you create a VM in Azure, diagnostics is enabled by default by using the Diagnostics extension.
-You can generate a list of the metrics by using the following command in PowerShell.
-
-```
-Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
-```
- You can create an alert for the following metrics: | Metric name | Unit |
You can also perform autoscale based on common web server metrics such as the HT
### Web Apps metrics
-You can generate a list of the Web Apps metrics by using the following command in PowerShell:
-
-```
-Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
-```
-
-You can alert on or scale by these metrics.
+For Web Apps, you can alert on or scale by these metrics.
| Metric name | Unit | | | |
You can alert on or scale by these metrics.
You can scale by Azure Storage queue length, which is the number of messages in the Storage queue. Storage queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-Configure this setting in the Azure portal in the **Settings** pane. For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
+Configure this setting in the Azure portal in the **Settings** pane. For Virtual Machine Scale Sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
For example, with a Classic Storage account, the autoscale setting `metricTrigger` would include:
For a (non-classic) Storage account, the `metricTrigger` setting would include:
## Commonly used Service Bus metrics
-You can scale by Azure Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
+You can scale by Azure Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances, and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
+For Virtual Machine Scale Sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
``` "metricName": "ApproximateMessageCount",
For virtual machine scale sets, you can update the autoscale setting in the Reso
``` > [!NOTE]
-> For Service Bus, the resource group concept doesn't exist, but Azure Resource Manager creates a default resource group per region. The resource group is usually in the Default-ServiceBus-[region] format. Examples are Default-ServiceBus-EastUS, Default-ServiceBus-WestUS, and Default-ServiceBus-AustraliaEast.
+> For Service Bus, the resource group concept doesn't exist. Azure Resource Manager creates a default resource group per region. The resource group is usually in the Default-ServiceBus-[region] format. Examples are Default-ServiceBus-EastUS, Default-ServiceBus-WestUS, and Default-ServiceBus-AustraliaEast.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Your cluster must be configured to send metrics to [Azure Monitor managed servic
The methods currently available for creating Prometheus alert rules are Azure Resource Manager template (ARM template) and Bicep template.
+> [!NOTE]
+> Although you can create the Prometheus alert in a resource group different from the target resource, you should use the same resource group as the target resource.
+ ### [ARM template](#tab/arm-template) 1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details).
The methods currently available for creating Prometheus alert rules are Azure Re
1. To deploy community and recommended alerts, follow this [template](https://aka.ms/azureprometheus-alerts-bicep) and follow the README.md file in the same folder for how to deploy.
-> [!NOTE]
-> Although you can create the Prometheus alert in a resource group different from the target resource, use the same resource group as your target resource.
+ ### Edit Prometheus alert rules
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
description: This article describes how you can query against resources from mul
Previously updated : 04/28/2022 Last updated : 04/01/2023
There are two methods to query data that's stored in multiple workspaces and app
To reference another workspace in your query, use the [workspace](../logs/workspace-expression.md) identifier. For an app from Application Insights, use the [app](./app-expression.md) identifier. ### Identify workspace resources
-The following examples demonstrate queries across Log Analytics workspaces to return summarized counts of logs from the Update table on a workspace named `contosoretail-it`.
You can identify a workspace in one of several ways:
-* **Resource name**: This human-readable name of the workspace is sometimes referred to as the *component name*.
-
- >[!IMPORTANT]
- >Because app and workspace names aren't unique, this identifier might be ambiguous. We recommend that the reference uses a qualified name, workspace ID, or Azure Resource ID.
-
- `workspace("contosoretail-it").Update | count`
-
-* **Qualified name**: This "full name" of the workspace is composed of the subscription name, resource group, and component name in the format *subscriptionName/resourceGroup/componentName*.
-
- `workspace('contoso/contosoretail/contosoretail-it').Update | count`
-
- >[!NOTE]
- >Because Azure subscription names aren't unique, this identifier might be ambiguous.
- * **Workspace ID**: A workspace ID is the unique, immutable, identifier assigned to each workspace represented as a globally unique identifier (GUID).
- `workspace("b459b4u5-912x-46d5-9cb1-p43069212nb4").Update | count`
+ `workspace("00000000-0000-0000-0000-000000000000").Update | count`
* **Azure Resource ID**: This ID is the Azure-defined unique identity of the workspace. You use the Resource ID when the resource name is ambiguous. For workspaces, the format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/componentName*. For example: ```
- workspace("/subscriptions/e427519-5645-8x4e-1v67-3b84b59a1985/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail-it").Update | count
+ workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail-it").Update | count
``` ### Identify an application
The following examples return a summarized count of requests made against an app
You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following names or IDs:
-* **Resource name**: This human readable name of the app is sometimes referred to as the *component name*.
-
- `app("fabrikamapp")`
-
- >[!NOTE]
- >Identifying an application by name assumes uniqueness across all accessible subscriptions. If you have multiple applications with the specified name, the query fails because of the ambiguity. In this case, you must use one of the other identifiers.
-
-* **Qualified name**: This "full name" of the app is composed of the subscription name, resource group, and component name in the format *subscriptionName/resourceGroup/componentName*.
-
- `app("AI-Prototype/Fabrikam/fabrikamapp").requests | count`
-
- >[!NOTE]
- >Because Azure subscription names aren't unique, this identifier might be ambiguous.
- >
- * **ID**: This ID is the app GUID of the application.
- `app("b459b4f6-912x-46d5-9cb1-b43069212ab4").requests | count`
+ `app("00000000-0000-0000-0000-000000000000").requests | count`
* **Azure Resource ID**: This ID is the Azure-defined unique identity of the app. You use the resource ID when the resource name is ambiguous. The format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*. For example: ```
- app("/subscriptions/b459b4f6-912x-46d5-9cb1-b43069212ab4/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
+ app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
``` ### Perform a query across multiple resources
You can query multiple resources from any of your resource instances. These reso
Example for a query across two workspaces: ```
-union Update, workspace("contosoretail-it").Update, workspace("b459b4u5-912x-46d5-9cb1-p43069212nb4").Update
+union
+ Update,
+ workspace("").Update, workspace("00000000-0000-0000-0000-000000000000").Update
| where TimeGenerated >= ago(1h)
| where UpdateState == "Needed"
| summarize dcount(Computer) by Classification
Create a query like the following example that references the scope of Applicati
```Kusto // crossResource function that scopes my Application Insights resources union withsource= SourceApp
-app('Contoso-app1').requests,
-app('Contoso-app2').requests,
-app('Contoso-app3').requests,
-app('Contoso-app4').requests,
-app('Contoso-app5').requests
+app('00000000-0000-0000-0000-000000000000').requests,
+app('00000000-0000-0000-0000-000000000001').requests,
+app('00000000-0000-0000-0000-000000000002').requests,
+app('00000000-0000-0000-0000-000000000003').requests,
+app('00000000-0000-0000-0000-000000000004').requests
``` You can now [use this function](./functions.md#use-a-function) in a cross-resource query like the following example. The function alias `applicationsScoping` returns the union of the requests table from all the defined applications. The query then filters for failed requests and visualizes the trends by application. The `parse` operator is optional in this example. It extracts the application name from the `SourceApp` property.
You can now [use this function](./functions.md#use-a-function) in a cross-resour
applicationsScoping | where timestamp > ago(12h) | where success == 'False'
-| parse SourceApp with * '(' applicationName ')' *
-| summarize count() by applicationName, bin(timestamp, 1h)
+| parse SourceApp with * '(' applicationId ')' *
+| summarize count() by applicationId, bin(timestamp, 1h)
| render timechart ``` >[!NOTE] > This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query.
-![Screenshot that shows a time chart.](media/cross-workspace-query/chart.png)
- ## Next steps See [Analyze log data in Azure Monitor](./log-query-overview.md) for an overview of log queries and how Azure Monitor log data is structured.
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-collector-release-notes.md
- Title: Release Notes for Microsoft.ApplicationInsights.SnapshotCollector NuGet package - Application Insights
-description: Release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package used by the Application Insights Snapshot Debugger.
- Previously updated : 11/10/2020---
-# Release notes for Microsoft.ApplicationInsights.SnapshotCollector
-
-This article contains the releases notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger.
-
-[Learn](./snapshot-debugger.md) more about the Application Insights Snapshot Debugger for .NET applications.
-
-For bug reports and feedback, open an issue on GitHub at https://github.com/microsoft/ApplicationInsights-SnapshotCollector
--
-## Release notes
-
-## [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
-A point release to address user-reported bugs.
-### Bug fixes
-- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)-- Fix [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
-<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](snapshot-debugger-troubleshoot.md#not-supported-scenarios)
-
-## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
-A point release to address a user-reported bug.
-### Bug fixes
-- Fix [ArgumentException: Delegates must be of the same type.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/16)-
-## [1.4.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.1)
-A point release to revert a breaking change introduced in 1.4.0.
-### Bug fixes
-- Fix [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15)-
-## [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
-Address multiple improvements and added support for Azure Active Directory (Azure AD) authentication for Application Insights ingestion.
-### Changes
-- Snapshot Collector package size reduced by 60%. From 10.34 MB to 4.11 MB.-- Target netstandard2.0 only in Snapshot Collector.-- Bump Application Insights SDK dependency to 2.15.0.-- Add back MinidumpWithThreadInfo when writing dumps.-- Add CompatibilityVersion to improve synchronization between Snapshot Collector agent and uploader on breaking changes.-- Change SnapshotUploader LogFile naming algorithm to avoid excessive file I/O in App Service.-- Add pid, role name, and process start time to uploaded blob metadata.-- Use System.Diagnostics.Process where possible in Snapshot Collector and Snapshot Uploader.
-### New features
-- Add Azure Active Directory authentication to SnapshotCollector. Learn more about Azure AD authentication in Application Insights [here](../app/azure-ad-authentication.md).-
-## [1.3.7.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.5)
-A point release to backport a fix from 1.4.0-pre.
-### Bug fixes
-- Fix [ObjectDisposedException on shutdown](https://github.com/microsoft/ApplicationInsights-dotnet/issues/2097).-
-## [1.3.7.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.4)
-A point release to address a problem discovered in testing Azure App Service's codeless attach scenario.
-### Changes
-- The netcoreapp3.0 target now depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (previously >= 2.1.2).-
-## [1.3.7.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.3)
-A point release to address a couple of high-impact issues.
-### Bug fixes
-- Fixed PDB discovery in the wwwroot/bin folder, which was broken when we changed the symbol search algorithm in 1.3.6.-- Fixed noisy ExtractWasCalledMultipleTimesException in telemetry.-
-## [1.3.7](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7)
-### Changes
-- The netcoreapp2.0 target of SnapshotCollector depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (again). This reverts behavior to how it was before 1.3.5. We tried to upgrade it in 1.3.6, but it broke some Azure App Service scenarios.
-### New features
-- Snapshot Collector reads and parses the ConnectionString from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable or from the TelemetryConfiguration. Primarily, this is used to set the endpoint for connecting to the Snapshot service. For more information, see the [Connection strings documentation](../app/sdk-connection-string.md).
-### Bug fixes
-- Switched to using HttpClient for all targets except net45 because WebRequest was failing in some environments due to an incompatible SecurityProtocol (requires TLS 1.2).-
-## [1.3.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.6)
-### Changes
-- SnapshotCollector now depends on Microsoft.ApplicationInsights >= 2.5.1 for all target frameworks. This may be a breaking change if your application depends on an older version of the Microsoft.ApplicationInsights SDK.-- Remove support for TLS 1.0 and 1.1 in Snapshot Uploader.-- Period of PDB scans now defaults 24 hours instead of 15 minutes. Configurable via PdbRescanInterval on SnapshotCollectorConfiguration.-- PDB scan searches top-level folders only, instead of recursive. This may be a breaking change if your symbols are in subfolders of the binary folder.
-### New features
-- Log rotation in SnapshotUploader to avoid filling the logs folder with old files.-- Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications.-- Add symbols to NuGet package.-- Set additional metadata when uploading minidumps.-- Added an Initialized property to SnapshotCollectorTelemetryProcessor. It's a CancellationToken, which will be canceled when the Snapshot Collector is completely initialized and connected to the service endpoint.-- Snapshots can now be captured for exceptions in dynamically generated methods. For example, the compiled expression trees generated by Entity Framework queries.
-### Bug fixes
-- AmbiguousMatchException loading Snapshot Collector due to Status Monitor.-- GetSnapshotCollector extension method now searches all TelemetrySinks.-- Don't start the Snapshot Uploader on unsupported platforms.-- Handle InvalidOperationException when deoptimizing dynamic methods (for example, Entity Framework)-
-## [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)
-- Add support for Sovereign clouds (Older versions won't work in sovereign clouds)-- Adding snapshot collector made easier by using AddSnapshotCollector(). More information can be found [here](./snapshot-debugger-app-service.md).-- Use FISMA MD5 setting for verifying blob blocks. This avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.-- Ignore .NET Framework frames when deoptimizing function calls. This behavior can be controlled by the DeoptimizeIgnoredModules configuration setting.-- Add `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call. More information here-
-## [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4)
-- Allow structured Instrumentation Keys.-- Increase SnapshotUploader robustness - continue startup even if old uploader logs can't be moved.-- Re-enabled reporting additional telemetry when SnapshotUploader.exe exits immediately (was disabled in 1.3.3).-- Simplify internal telemetry.-- _Experimental feature_: Snappoint collection plans: Add "snapshotOnFirstOccurence". More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).-
-## [1.3.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.3)
-- Fixed bug that was causing SnapshotUploader.exe to stop responding and not upload snapshots for .NET Core apps.-
-## [1.3.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.2)
-- _Experimental feature_: Snappoint collection plans. More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).-- SnapshotUploader.exe will exit when the runtime unloads the AppDomain from which SnapshotCollector is loaded, instead of waiting for the process to exit. This improves the collector reliability when hosted in IIS.-- Add configuration to allow multiple SnapshotCollector instances that are using the same Instrumentation Key to share the same SnapshotUploader process: ShareUploaderProcess (defaults to `true`).-- Report additional telemetry when SnapshotUploader.exe exits immediately.-- Reduced the number of support files SnapshotUploader.exe needs to write to disk.-
-## [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)
-- Remove support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.-- Increase the default limit on how many snapshots can be captured in 10 minutes from 1 to 3.-- Allow SnapshotUploader.exe to negotiate TLS 1.1 and 1.2-- Report additional telemetry when SnapshotUploader logs a warning or an error-- Stop taking snapshots when the backend service reports the daily quota was reached (50 snapshots per day)-- Add extra check in SnapshotUploader.exe to not allow two instances to run in the same time.-
-## [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0)
-### Changes
-- For applications targeting .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or above.
-It used to be 2.2.0 or above.
-We believe this won't be an issue for most applications, but let us know if this change prevents you from using the latest Snapshot Collector.
-- Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads.-- Use ServerTelemetryChannel (if available) for more reliable reporting of telemetry.-- Use 'SdkInternalOperationsMonitor' on the initial connection to the Snapshot Debugger service so that it's ignored by dependency tracking.-- Improve telemetry around initial connection to the Snapshot Debugger service.-- Report additional telemetry for:
- - Azure App Service version.
- - Azure compute instances.
- - Containers.
- - Azure Function app.
-### Bug fixes
-- When the problem counter reset interval is set to 24 days, interpret that as 24 hours.-- Fixed a bug where the Snapshot Uploader would stop processing new snapshots if there was an exception while disposing a snapshot.-
-## [1.2.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.3)
-- Fix strong-name signing with Snapshot Uploader binaries.-
-## [1.2.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.2)
-### Changes
-- The files needed for SnapshotUploader(64).exe are now embedded as resources in the main DLL. That means the SnapshotCollectorFiles folder is no longer created, simplifying build and deployment and reducing clutter in Solution Explorer. Take care when upgrading to review the changes in your `.csproj` file. The `Microsoft.ApplicationInsights.SnapshotCollector.targets` file is no longer needed.-- Telemetry is logged to your Application Insights resource even if ProvideAnonymousTelemetry is set to false. This is so we can implement a health check feature in the Azure portal. ProvideAnonymousTelemetry affects only the telemetry sent to Microsoft for product support and improvement.-- When the TempFolder or ShadowCopyFolder are redirected to environment variables, keep the collector idle until those environment variables are set.-- For applications that connect to the Internet via a proxy server, Snapshot Collector will now autodetect any proxy settings and pass them on to SnapshotUploader.exe.-- Lower the priority of the SnapshotUplaoder process (where possible). This priority can be overridden via the IsLowPrioirtySnapshotUploader option.-- Added a GetSnapshotCollector extension method on TelemetryConfiguration for scenarios where you want to configure the Snapshot Collector programmatically.-- Set the Application Insights SDK version (instead of the application version) in customer-facing telemetry.-- Send the first heartbeat event after two minutes.
-### Bug fixes
-- Fix NullReferenceException when exceptions have null or immutable Data dictionaries.-- In the uploader, retry PDB matching a few times if we get a sharing violation.-- Fix duplicate telemetry when more than one thread calls into the telemetry pipeline at startup.-
-## [1.2.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.1)
-### Changes
-- XML Doc comment files are now included in the NuGet package.-- Added an ExcludeFromSnapshotting extension method on `System.Exception` for scenarios where you know you have a noisy exception and want to avoid creating snapshots for it.-- Added an IsEnabledWhenProfiling configuration property, defaults to true. This is a change from previous versions where snapshot creation was temporarily disabled if the Application Insights Profiler was performing a detailed collection. The old behavior can be recovered by setting this property to false.
-### Bug fixes
-- Sign SnapshotUploader64.exe properly.-- Protect against double-initialization of the telemetry processor.-- Prevent double logging of telemetry in apps with multiple pipelines.-- Fix a bug with the expiration time of a collection plan, which could prevent snapshots after 24 hours.-
-## [1.2.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.0)
-The biggest change in this version (hence the move to a new minor version number) is a rewrite of the snapshot creation and handling pipeline. In previous versions, this functionality was implemented in native code (ProductionBreakpoints*.dll and SnapshotHolder*.exe). The new implementation is all managed code with P/Invokes. For this first version using the new pipeline, we haven't strayed far from the original behavior. The new implementation allows for better error reporting and sets us up for future improvements.
-
-### Other changes in this version
-- MinidumpUploader.exe has been renamed to SnapshotUploader.exe (or SnapshotUploader64.exe).-- Added timing telemetry to DeOptimize/ReOptimize requests.-- Added gzip compression for minidump uploads.-- Fixed a problem where PDBs were locked preventing site upgrade.-- Log the original folder name (SnapshotCollectorFiles) when shadow-copying.-- Adjust memory limits for 64-bit processes to prevent site restarts due to OOM.-- Fix an issue where snapshots were still collected even after disabling.-- Log heartbeat events to customer's AI resource.-- Improve snapshot speed by removing "Source" from Problem ID.-
-## [1.1.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.2)
-### Changes
-Augmented usage telemetry
-- Detect and report .NET version and OS-- Detect and report additional Azure Environments (Cloud Service, Service Fabric)-- Record and report exception metrics (number of 1st chance exceptions and number of TrackException calls) in Heartbeat telemetry.
-### Bug fixes
-- Correct handling of SqlException where the inner exception (Win32Exception) isn't thrown.-- Trim trailing spaces on symbol folders, which caused an incorrect parse of command-line arguments to the MinidumpUploader.-- Prevent infinite retry of failed connections to the Snapshot Debugger agent's endpoint.-
-## [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)
-### Changes
-- Added host memory protection. This feature reduces the impact on the host machine's memory.-- Improve the Azure portal snapshot viewing experience.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
Below you can find scenarios where Snapshot Collector isn't supported:
## Next steps * Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-* See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+* See [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md). [Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
azure-monitor Snapshot Debugger Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-data.md
+
+ Title: View Application Insights Snapshot Debugger data
+description: View snapshots collected by the Snapshot Debugger in either the Azure portal or Visual Studio
+++
+reviewer: cweining
++ Last updated : 04/14/2023++
+# View Application Insights Snapshot Debugger data
+
+Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal.
+
+You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+
+## View Snapshots in the Portal
+
+After an exception has occurred in your application and a snapshot has been created, you should have snapshots to view in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either:
+
+* Select the **Operations** button when viewing the **Operations** tab, or
+* Select the **Exceptions** button when viewing the **Exceptions** tab.
++
+Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md).
++
+In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
++
+Snapshots might include sensitive information. By default, you can only view snapshots if you've been assigned the `Application Insights Snapshot Debugger` role.
+
+## View Snapshots in Visual Studio 2017 Enterprise or above
+
+1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
+
+1. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
+
+1. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
+
+ :::image type="content" source="./media/snapshot-debugger/open-snapshot-visual-studio.png" alt-text="Screenshot showing the debug snapshot in Visual Studio.":::
+
+The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
+
+## Next steps
+
+Enable the Snapshot Debugger in your:
+- [App Service](./snapshot-debugger-app-service.md)
+- [Function App](./snapshot-debugger-function-app.md)
+- [Virtual machine or other Azure service](./snapshot-debugger-vm.md)
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
We recommend that you have Snapshot Debugger enabled on all your apps to ease di
## Next steps * Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
-* [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+* [View snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
* Customize Snapshot Debugger configuration based on your use-case on your Function app. For more information, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration). * For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
Based on how Snapshot Debugger was enabled, see the following options:
* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of `Microsoft.ApplicationInsights.SnapshotCollector`.
-For the latest updates and bug fixes [consult the release notes](./snapshot-collector-release-notes.md).
+For the latest updates and bug fixes [consult the release notes](./snapshot-debugger.md#release-notes-for-microsoftapplicationinsightssnapshotcollector).
## Check the uploader logs
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
void ExampleRequest()
## Next steps - Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.-- See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
+- See [snapshots](snapshot-debugger-data.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
reviewer: cweining Previously updated : 04/10/2023 Last updated : 04/14/2023 # Debug snapshots on exceptions in .NET apps When an exception occurs, you can automatically collect a debug snapshot from your live web application. The debug snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Azure Application Insights](../app/app-insights-overview.md):
-* Monitors system-generated logs from your web app.
-* Collects snapshots on your top-throwing exceptions.
-* Provides information you need to diagnose issues in production.
+- Monitors system-generated logs from your web app.
+- Collects snapshots on your top-throwing exceptions.
+- Provides information you need to diagnose issues in production.
-Simply include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application and configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
+To use the Snapshot Debugger, you simply:
+1. Include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application.
+1. Configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
-Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal.
+## How snapshots work
+
+The Snapshot Debugger is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Debugger Telemetry Processor is added to your application's system-generated logs pipeline.
+Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Debugger computes a Problem ID from the type of exception being thrown and the throwing method, and increments a counter for that Problem ID.
+When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
+
+The Snapshot Debugger also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the Problem ID of the exception is computed and compared against the Problem IDs in the Collection Plan.
+If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the `FirstChanceException` handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the `TrackException` method again where it, along with the snapshot identifier, is reported to Application Insights.
+
+The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (*.pdb*) files.
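To make the flow concrete, the following is a minimal, hypothetical sketch of the kind of `TrackException` call that the Snapshot Debugger counts toward `ThresholdForSnapshotting`. The `OrderService` class and the injected `TelemetryClient` are illustrative placeholders, not part of the Snapshot Debugger itself:

```csharp
using System;
using Microsoft.ApplicationInsights;

public class OrderService
{
    private readonly TelemetryClient _telemetryClient;

    public OrderService(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    public void ProcessOrder(string orderId)
    {
        try
        {
            // Business logic that might throw.
            throw new InvalidOperationException($"Order {orderId} is in an invalid state.");
        }
        catch (InvalidOperationException ex)
        {
            // TrackException is the call the Snapshot Debugger's telemetry processor
            // intercepts; once the same Problem ID has been reported
            // ThresholdForSnapshotting times, later occurrences trigger a snapshot.
            _telemetryClient.TrackException(ex);
            throw;
        }
    }
}
```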
-You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
+> [!TIP]
+> * A process snapshot is a suspended clone of the running process.
+> * Creating the snapshot takes about 10 to 20 milliseconds.
+> * The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
+> * Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
+> * The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
+> * No more than 50 snapshots per day may be uploaded.
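For reference, here's a minimal sketch of how these settings might appear in `ApplicationInsights.config`, assuming the telemetry processor registration added by the Snapshot Collector NuGet package; the values shown are illustrative, not recommendations:

```xml
<TelemetryProcessors>
  <!-- Registration added by the Microsoft.ApplicationInsights.SnapshotCollector package -->
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <IsEnabled>true</IsEnabled>
    <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
    <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
    <SnapshotsPerTenMinutesLimit>1</SnapshotsPerTenMinutesLimit>
  </Add>
</TelemetryProcessors>
```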
-## Enable Application Insights Snapshot Debugger for your application
+
+## Supported applications and environments
+
+### Applications
Snapshot collection is available for:
Snapshot collection is available for:
- .NET and ASP.NET applications running .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer versions on Windows. - .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) (and newer versions) applications on Windows.
-We don't recommend using .NET Core versions prior to LTS since they're out of support.
+.NET Core versions prior to LTS are out of support and not recommended.
+
+### Environments
The following environments are supported:
The following environments are supported:
> [!NOTE] > Client applications (for example, WPF, Windows Forms or UWP) aren't supported.
-If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md).
-
-## Grant permissions
-
-Access to snapshots is protected by Azure role-based access control (Azure RBAC). To inspect a snapshot, you must first be added to the necessary role by a subscription owner.
-
-> [!NOTE]
-> Owners and contributors don't automatically have this role. If they want to view snapshots, they must add themselves to the role.
+If you've enabled Snapshot Debugger but aren't seeing snapshots, check the [Troubleshooting guide](snapshot-debugger-troubleshoot.md).
-Subscription owners should assign the [Application Insights Snapshot Debugger](../../role-based-access-control/role-assignments-portal.md) role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
+## Required permissions
-Assign the Debugger role to the **Application Insights Snapshot**.
+Access to snapshots is protected by Azure role-based access control (Azure RBAC). To inspect a snapshot, you must first be added to the [Application Insights Snapshot Debugger](../../role-based-access-control/role-assignments-portal.md) role. Subscription owners can assign this role to individual users or groups for the target Application Insights resource, its resource group, or its subscription.
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). > [!IMPORTANT] > Snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your App Insights resource.
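If you prefer to script the assignment rather than use the portal, a sketch with Az PowerShell might look like the following; the user, resource group, and component names are placeholders:

```powershell
# Assign the Snapshot Debugger role to a user for a single Application Insights resource.
New-AzRoleAssignment `
  -SignInName "user@contoso.com" `
  -RoleDefinitionName "Application Insights Snapshot Debugger" `
  -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/microsoft.insights/components/contoso-app"
```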
-## View Snapshots in the Portal
-
-After an exception has occurred in your application and a snapshot has been created, you should have snapshots to view in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either:
-
-* Select the **Operations** button when viewing the **Operations** tab, or
-* Select the **Exceptions** button when viewing the **Exceptions** tab.
--
-Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md).
--
-In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in the call stack pane, you can view local variables and parameters for that function call in the variables pane.
--
-Snapshots might include sensitive information. By default, you can only view snapshots if you've been assigned the `Application Insights Snapshot Debugger` role.
-
-## View Snapshots in Visual Studio 2017 Enterprise or above
-
-1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise.
-
-1. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you'll need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger).
-
-1. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process.
-
- :::image type="content" source="./media/snapshot-debugger/open-snapshot-visual-studio.png" alt-text="Screenshot showing the debug snapshot in Visual Studio.":::
-
-The downloaded snapshot includes any symbol files that were found on your web application server. These symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable symbol deployment when you publish your web apps.
-
-## How snapshots work
-
-The Snapshot Collector is implemented as an [Application Insights Telemetry Processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Collector Telemetry Processor is added to your application's system-generated logs pipeline.
-Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Collector computes a Problem ID from the type of exception being thrown and the throwing method.
-Each time your application calls `TrackException`, a counter is incremented for the appropriate Problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the Problem ID is added to a Collection Plan.
-
-The Snapshot Collector also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the Problem ID of the exception is computed and compared against the Problem IDs in the Collection Plan.
-If there's a match, then a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the `FirstChanceException` handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the `TrackException` method again where it, along with the snapshot identifier, is reported to Application Insights.
-
-The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (*.pdb*) files.
-
-> [!TIP]
-> * A process snapshot is a suspended clone of the running process.
-> * Creating the snapshot takes about 10 to 20 milliseconds.
-> * The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
-> * Set `IsEnabledInDeveloperMode` to true if you want to generate snapshots while debugging in Visual Studio.
-> * The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every ten minutes.
-> * No more than 50 snapshots per day may be uploaded.
- ## Limitations ### Data retention
However, in Azure App Services, the Snapshot Collector can deoptimize throwing m
> [!TIP] > Install the Application Insights Site Extension in your App Service to get de-optimization support.
+## Release notes for `Microsoft.ApplicationInsights.SnapshotCollector`
+
+This section contains the release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger.
+
+[Learn more](./snapshot-debugger.md) about the Application Insights Snapshot Debugger for .NET applications.
+
+For bug reports and feedback, [open an issue on GitHub](https://github.com/microsoft/ApplicationInsights-SnapshotCollector).
++
+### [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
+A point release to address user-reported bugs.
+#### Bug fixes
+- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
+- Fix [ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
+<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](snapshot-debugger-troubleshoot.md#not-supported-scenarios)
+
+### [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
+A point release to address a user-reported bug.
+#### Bug fixes
+- Fix [ArgumentException: Delegates must be of the same type.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/16)
+
+### [1.4.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.1)
+A point release to revert a breaking change introduced in 1.4.0.
+#### Bug fixes
+- Fix [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15)
+
+### [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
+Address multiple improvements and added support for Azure Active Directory (Azure AD) authentication for Application Insights ingestion.
+#### Changes
+- Snapshot Collector package size reduced by 60%. From 10.34 MB to 4.11 MB.
+- Target netstandard2.0 only in Snapshot Collector.
+- Bump Application Insights SDK dependency to 2.15.0.
+- Add back MinidumpWithThreadInfo when writing dumps.
+- Add CompatibilityVersion to improve synchronization between Snapshot Collector agent and uploader on breaking changes.
+- Change SnapshotUploader LogFile naming algorithm to avoid excessive file I/O in App Service.
+- Add pid, role name, and process start time to uploaded blob metadata.
+- Use System.Diagnostics.Process where possible in Snapshot Collector and Snapshot Uploader.
+#### New features
+- Add Azure Active Directory authentication to SnapshotCollector. Learn more about Azure AD authentication in Application Insights [here](../app/azure-ad-authentication.md).
+
+### [1.3.7.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.5)
+A point release to backport a fix from 1.4.0-pre.
+#### Bug fixes
+- Fix [ObjectDisposedException on shutdown](https://github.com/microsoft/ApplicationInsights-dotnet/issues/2097).
+
+### [1.3.7.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.4)
+A point release to address a problem discovered in testing Azure App Service's codeless attach scenario.
+#### Changes
+- The netcoreapp3.0 target now depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (previously >= 2.1.2).
+
+### [1.3.7.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7.3)
+A point release to address a couple of high-impact issues.
+#### Bug fixes
+- Fixed PDB discovery in the wwwroot/bin folder, which was broken when we changed the symbol search algorithm in 1.3.6.
+- Fixed noisy ExtractWasCalledMultipleTimesException in telemetry.
+
+### [1.3.7](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.7)
+#### Changes
+- The netcoreapp2.0 target of SnapshotCollector depends on Microsoft.ApplicationInsights.AspNetCore >= 2.1.1 (again). This reverts behavior to how it was before 1.3.5. We tried to upgrade it in 1.3.6, but it broke some Azure App Service scenarios.
+#### New features
+- Snapshot Collector reads and parses the ConnectionString from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable or from the TelemetryConfiguration. Primarily, this is used to set the endpoint for connecting to the Snapshot service. For more information, see the [Connection strings documentation](../app/sdk-connection-string.md).
+#### Bug fixes
+- Switched to using HttpClient for all targets except net45 because WebRequest was failing in some environments due to an incompatible SecurityProtocol (requires TLS 1.2).
+
+### [1.3.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.6)
+#### Changes
+- SnapshotCollector now depends on Microsoft.ApplicationInsights >= 2.5.1 for all target frameworks. This may be a breaking change if your application depends on an older version of the Microsoft.ApplicationInsights SDK.
+- Remove support for TLS 1.0 and 1.1 in Snapshot Uploader.
+- Period of PDB scans now defaults to 24 hours instead of 15 minutes. Configurable via PdbRescanInterval on SnapshotCollectorConfiguration.
+- PDB scan searches top-level folders only, instead of recursive. This may be a breaking change if your symbols are in subfolders of the binary folder.
+#### New features
+- Log rotation in SnapshotUploader to avoid filling the logs folder with old files.
+- Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications.
+- Add symbols to NuGet package.
+- Set additional metadata when uploading minidumps.
+- Added an Initialized property to SnapshotCollectorTelemetryProcessor. It's a CancellationToken, which will be canceled when the Snapshot Collector is completely initialized and connected to the service endpoint.
+- Snapshots can now be captured for exceptions in dynamically generated methods. For example, the compiled expression trees generated by Entity Framework queries.
+#### Bug fixes
+- AmbiguousMatchException loading Snapshot Collector due to Status Monitor.
+- GetSnapshotCollector extension method now searches all TelemetrySinks.
+- Don't start the Snapshot Uploader on unsupported platforms.
+- Handle InvalidOperationException when deoptimizing dynamic methods (for example, Entity Framework)
+
+### [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)
+- Add support for Sovereign clouds (Older versions won't work in sovereign clouds)
+- Adding snapshot collector made easier by using AddSnapshotCollector(). More information can be found [here](./snapshot-debugger-app-service.md).
+- Use FISMA MD5 setting for verifying blob blocks. This avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.
+- Ignore .NET Framework frames when deoptimizing function calls. This behavior can be controlled by the DeoptimizeIgnoredModules configuration setting.
+- Add `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call. More information here
+
+### [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4)
+- Allow structured Instrumentation Keys.
+- Increase SnapshotUploader robustness - continue startup even if old uploader logs can't be moved.
+- Re-enabled reporting additional telemetry when SnapshotUploader.exe exits immediately (was disabled in 1.3.3).
+- Simplify internal telemetry.
+- _Experimental feature_: Snappoint collection plans: Add "snapshotOnFirstOccurence". More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+
+### [1.3.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.3)
+- Fixed bug that was causing SnapshotUploader.exe to stop responding and not upload snapshots for .NET Core apps.
+
+### [1.3.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.2)
+- _Experimental feature_: Snappoint collection plans. More information available [here](https://gist.github.com/alexaloni/5b4d069d17de0dabe384ea30e3f21dfe).
+- SnapshotUploader.exe will exit when the runtime unloads the AppDomain from which SnapshotCollector is loaded, instead of waiting for the process to exit. This improves the collector reliability when hosted in IIS.
+- Add configuration to allow multiple SnapshotCollector instances that are using the same Instrumentation Key to share the same SnapshotUploader process: ShareUploaderProcess (defaults to `true`).
+- Report additional telemetry when SnapshotUploader.exe exits immediately.
+- Reduced the number of support files SnapshotUploader.exe needs to write to disk.
+
+### [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)
+- Remove support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.
+- Increase the default limit on how many snapshots can be captured in 10 minutes from 1 to 3.
+- Allow SnapshotUploader.exe to negotiate TLS 1.1 and 1.2
+- Report additional telemetry when SnapshotUploader logs a warning or an error
+- Stop taking snapshots when the backend service reports the daily quota was reached (50 snapshots per day)
+- Add extra check in SnapshotUploader.exe to not allow two instances to run in the same time.
+
+### [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0)
+#### Changes
+- For applications targeting .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or above.
+It used to be 2.2.0 or above.
+We believe this won't be an issue for most applications, but let us know if this change prevents you from using the latest Snapshot Collector.
+- Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads.
+- Use ServerTelemetryChannel (if available) for more reliable reporting of telemetry.
+- Use 'SdkInternalOperationsMonitor' on the initial connection to the Snapshot Debugger service so that it's ignored by dependency tracking.
+- Improve telemetry around initial connection to the Snapshot Debugger service.
+- Report additional telemetry for:
+ - Azure App Service version.
+ - Azure compute instances.
+ - Containers.
+ - Azure Function app.
+#### Bug fixes
+- When the problem counter reset interval is set to 24 days, interpret that as 24 hours.
+- Fixed a bug where the Snapshot Uploader would stop processing new snapshots if there was an exception while disposing a snapshot.
+
+### [1.2.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.3)
+- Fix strong-name signing with Snapshot Uploader binaries.
+
+### [1.2.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.2)
+#### Changes
+- The files needed for SnapshotUploader(64).exe are now embedded as resources in the main DLL. That means the SnapshotCollectorFiles folder is no longer created, simplifying build and deployment and reducing clutter in Solution Explorer. Take care when upgrading to review the changes in your `.csproj` file. The `Microsoft.ApplicationInsights.SnapshotCollector.targets` file is no longer needed.
+- Telemetry is logged to your Application Insights resource even if ProvideAnonymousTelemetry is set to false. This is so we can implement a health check feature in the Azure portal. ProvideAnonymousTelemetry affects only the telemetry sent to Microsoft for product support and improvement.
+- When the TempFolder or ShadowCopyFolder are redirected to environment variables, keep the collector idle until those environment variables are set.
+- For applications that connect to the Internet via a proxy server, Snapshot Collector will now autodetect any proxy settings and pass them on to SnapshotUploader.exe.
+- Lower the priority of the SnapshotUploader process (where possible). This priority can be overridden via the IsLowPrioritySnapshotUploader option.
+- Added a GetSnapshotCollector extension method on TelemetryConfiguration for scenarios where you want to configure the Snapshot Collector programmatically.
+- Set the Application Insights SDK version (instead of the application version) in customer-facing telemetry.
+- Send the first heartbeat event after two minutes.
+#### Bug fixes
+- Fix NullReferenceException when exceptions have null or immutable Data dictionaries.
+- In the uploader, retry PDB matching a few times if we get a sharing violation.
+- Fix duplicate telemetry when more than one thread calls into the telemetry pipeline at startup.
+
+### [1.2.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.1)
+#### Changes
+- XML Doc comment files are now included in the NuGet package.
+- Added an ExcludeFromSnapshotting extension method on `System.Exception` for scenarios where you know you have a noisy exception and want to avoid creating snapshots for it.
+- Added an IsEnabledWhenProfiling configuration property, which defaults to true. This is a change from previous versions, where snapshot creation was temporarily disabled if the Application Insights Profiler was performing a detailed collection. The old behavior can be recovered by setting this property to false.
+#### Bug fixes
+- Sign SnapshotUploader64.exe properly.
+- Protect against double-initialization of the telemetry processor.
+- Prevent double logging of telemetry in apps with multiple pipelines.
+- Fix a bug with the expiration time of a collection plan, which could prevent snapshots after 24 hours.
+
+### [1.2.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.2.0)
+The biggest change in this version (hence the move to a new minor version number) is a rewrite of the snapshot creation and handling pipeline. In previous versions, this functionality was implemented in native code (ProductionBreakpoints*.dll and SnapshotHolder*.exe). The new implementation is all managed code with P/Invokes. For this first version using the new pipeline, we haven't strayed far from the original behavior. The new implementation allows for better error reporting and sets us up for future improvements.
+
+#### Other changes in this version
+- MinidumpUploader.exe has been renamed to SnapshotUploader.exe (or SnapshotUploader64.exe).
+- Added timing telemetry to DeOptimize/ReOptimize requests.
+- Added gzip compression for minidump uploads.
+- Fixed a problem where PDBs were locked preventing site upgrade.
+- Log the original folder name (SnapshotCollectorFiles) when shadow-copying.
+- Adjust memory limits for 64-bit processes to prevent site restarts due to OOM.
+- Fix an issue where snapshots were still collected even after disabling.
+- Log heartbeat events to customer's AI resource.
+- Improve snapshot speed by removing "Source" from Problem ID.
+
+### [1.1.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.2)
+
+#### Changes
+- Augmented usage telemetry
+- Detect and report .NET version and OS
+- Detect and report additional Azure Environments (Cloud Service, Service Fabric)
+- Record and report exception metrics (number of 1st chance exceptions and number of TrackException calls) in Heartbeat telemetry.
+
+#### Bug fixes
+- Correct handling of SqlException where the inner exception (Win32Exception) isn't thrown.
+- Trim trailing spaces on symbol folders, which caused an incorrect parse of command-line arguments to the MinidumpUploader.
+- Prevent infinite retry of failed connections to the Snapshot Debugger agent's endpoint.
+
+### [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)
+#### Changes
+- Added host memory protection. This feature reduces the impact on the host machine's memory.
+- Improve the Azure portal snapshot viewing experience.
+ ## Next steps Enable Application Insights Snapshot Debugger for your application:
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Logs|[Manage tables in a Log Analytics workspace]()|Refreshed all Log Analytics
Security-Fundamentals|[Monitoring Azure App Service](../../articles/app-service/monitor-app-service.md)|Revised the Azure Monitor overview to improve usability. The article is cleaned up, streamlined, and better reflects the product architecture and the customer experience. | Snapshot-Debugger|[host.json reference for Azure Functions 2.x and later](../../articles/azure-functions/functions-host-json.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.| Snapshot-Debugger|[Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger](profiler/profiler-bring-your-own-storage.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
-Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](snapshot-debugger/snapshot-collector-release-notes.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
+Snapshot-Debugger|[Release notes for Microsoft.ApplicationInsights.SnapshotCollector](./snapshot-debugger/snapshot-debugger.md#release-notes-for-microsoftapplicationinsightssnapshotcollector)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.| Snapshot-Debugger|[Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](snapshot-debugger/snapshot-debugger-function-app.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.| Snapshot-Debugger|[ Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
| Article | Description | ||| |[Autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|Updated conceptual diagrams.|
-|[Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions.|
+|[Use predictive autoscale to scale out before load demands in Virtual Machine Scale Sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions.|
### Change Analysis
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"use-parent-property": { "level": "warning" },
- "use-protectedsettings-for-commandtoexecute-secrets": {
- "level": "warning"
- },
"use-recent-api-versions": { "level": "warning" },
azure-resource-manager Linter Rule Use Parent Property https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-parent-property.md
Last updated 01/30/2023
# Linter rule - use parent property
-When defined outside of the parent resource, you format name of the child resource with slashes to include the parent name. Setting the full resource name isn't the recommended approach. The syntax can be simplified by using the `parent` property. For more information, see [Full resource name outside parent](./child-resource-name-type.md#full-resource-name-outside-parent).
- When defined outside of the parent resource, you use slashes to include the parent name in the name of the child resource. Setting the full resource name with parent resource name is not recommended. The `parent` property can be used to simplify the syntax. See [Full resource name outside parent](./child-resource-name-type.md#full-resource-name-outside-parent). ## Linter rule code
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
The resource group stores metadata about the resources. Therefore, when you spec
![browse resource groups](./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png)
-3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the addition columns you could add to the display:
+3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display:
## Open resource groups
azure-resource-manager Deployment Script Template Configure Dev https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md
$DeploymentScriptOutputs['text'] = $output
For an Azure CLI container image, you can create a *hello.sh* file by using the following content: ```bash
-firstname=$1
-lastname=$2
-output="{\"name\":{\"displayName\":\"$firstname $lastname\",\"firstName\":\"$firstname\",\"lastName\":\"$lastname\"}}"
+FIRSTNAME=$1
+LASTNAME=$2
+OUTPUT="{\"name\":{\"displayName\":\"$FIRSTNAME $LASTNAME\",\"firstName\":\"$FIRSTNAME\",\"lastName\":\"$LASTNAME\"}}"
echo -n "Hello "
-echo $output | jq -r '.name.displayName'
+echo $OUTPUT | jq -r '.name.displayName'
``` > [!NOTE]
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
This repository is referred to as a *remote repository*. Each of the developers
```bash git clone https://github.com/[YourAccountName]/[YourGitHubRepositoryName] cd [YourGitHubRepositoryName]
- mkdir CreateWebApp
- cd CreateWebApp
+ mkdir create_web_app
+ cd create_web_app
pwd ``` Replace `[YourAccountName]` with your GitHub account name, and replace `[YourGitHubRepositoryName]` with your repository name you created in the previous procedure.
-The _CreateWebApp_ folder is the folder where the template is stored. The `pwd` command shows the folder path. The path is where you save the template to in the following procedure.
+The _create_web_app_ folder is the folder where the template is stored. The `pwd` command shows the folder path. The path is where you save the template to in the following procedure.
### Download a Quickstart template
-Instead of creating the templates, you can download the templates and save them to the _CreateWebApp_ folder.
+Instead of creating the templates, you can download the templates and save them to the _create_web_app_ folder.
* The main template: https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/azuredeploy.json * The linked template: https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/linkedStorageAccount.json
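If you prefer the command line, here's a minimal sketch that downloads both files with `curl`; it assumes you run it from the _create_web_app_ folder created earlier.

```bash
# Download the main template and the linked template into the current folder.
curl -O https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/azuredeploy.json
curl -O https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/linkedStorageAccount.json
```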
Both the folder name and the file names are used as they are in the pipeline. If
The _azuredeploy.json_ has been added to the local repository. Next, you upload the template to the remote repository. 1. Open *Git Shell* or *Git Bash*, if it is not opened.
-1. Change directory to the _CreateWebApp_ folder in your local repository.
+1. Change directory to the _create_web_app_ folder in your local repository.
1. Verify the _azuredeploy.json_ file is in the folder. 1. Run the following command:
The _azuredeploy.json_ has been added to the local repository. Next, you upload
You might get a warning about LF. You can ignore the warning. **main** is the main branch. You typically create a branch for each update. To simplify the tutorial, you use the main branch directly.
-1. Browse to your GitHub repository from a browser. The URL is `https://github.com/[YourAccountName]/[YourGitHubRepository]`. You shall see the _CreateWebApp_ folder and the two files inside the folder.
+1. Browse to your GitHub repository from a browser. The URL is `https://github.com/[YourAccountName]/[YourGitHubRepository]`. You should see the _create_web_app_ folder and the two files inside the folder.
1. Select _azuredeploy.json_ to open the template. 1. Select the **Raw** button. The URL begins with `https://raw.githubusercontent.com`. 1. Make a copy of the URL. You need to provide this value when you configure the pipeline later in the tutorial.
Create a service connection that is used to deploy projects to Azure.
Until now, you have completed the following tasks. If you skip the previous sections because you are familiar with GitHub and DevOps, you must complete the tasks before you continue.
-* Create a GitHub repository, and save the templates to the _CreateWebApp_ folder in the repository.
+* Create a GitHub repository, and save the templates to the _create_web_app_ folder in the repository.
* Create a DevOps project, and create an Azure Resource Manager service connection. To create a pipeline with a step to deploy a template:
azure-signalr Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md
+
+ Title: Disable local (access key) authentication with Azure SignalR Service
+description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure SignalR Service.
+++ Last updated : 03/31/2023++++
+# Disable local (access key) authentication with Azure SignalR Service
+
+There are two ways to authenticate to Azure SignalR Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure SignalR Service resources when possible.
+
+> [!IMPORTANT]
+> Disabling local authentication has the following consequences.
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with the current set of access keys will no longer be valid.
+
+## Use Azure portal
+
+In this section, you will learn how to use the Azure portal to disable local authentication.
+
+1. Navigate to your SignalR Service resource in the [Azure portal](https://portal.azure.com).
+
+2. In the **Settings** section of the menu sidebar, select the **Keys** tab.
+
+3. Select **Disabled** for local authentication.
+
+4. Click the **Save** button.
+
+![Screenshot of disabling local auth.](./media/howto-disable-local-auth/disable-local-auth.png)
+
+## Use Azure Resource Manager template
+
+You can disable local authentication by setting the `disableLocalAuth` property to `true`, as shown in the following Azure Resource Manager template.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/SignalR",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "kind": "SignalR",
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "features": [
+ {
+ "flag": "ServiceMode",
+ "value": "Default",
+ "properties": {}
+ },
+ {
+ "flag": "EnableConnectivityLogs",
+ "value": "True",
+ "properties": {}
+ }
+ ],
+ "cors": {
+ "allowedOrigins": [
+ "*"
+ ]
+ },
+ "serverless": {
+ "connectionTimeoutInSeconds": 30
+ },
+ "upstream": {},
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
+}
+```
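As a sketch, you can deploy the template with the Azure CLI. The resource group name, local template file name, and SignalR resource name are placeholders you'd replace with your own values.

```bash
# Deploy the template above (saved locally as disable-local-auth.json) to an existing resource group.
az deployment group create \
  --resource-group <my-resource-group> \
  --template-file disable-local-auth.json \
  --parameters resource_name=<my-signalr-resource>
```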
+
+## Use Azure Policy
+
+You can assign the [Azure SignalR Service should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff70eecba-335d-4bbc-81d5-5b17b03d498f) Azure policy to an Azure subscription or a resource group to enforce disabling of local authentication for all SignalR resources in the subscription or the resource group.
+
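As a sketch, the policy can also be assigned with the Azure CLI. The definition GUID comes from the policy link above; the assignment name and scope are placeholders.

```bash
# Assign the built-in policy at resource group scope (assignment name and scope are placeholders).
az policy assignment create \
  --name disable-signalr-local-auth \
  --policy f70eecba-335d-4bbc-81d5-5b17b03d498f \
  --scope /subscriptions/<subscription-id>/resourceGroups/<my-resource-group>
```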
+![Screenshot of disabling local auth policy.](./media/howto-disable-local-auth/disable-local-auth-policy.png)
+
+## Next steps
+
+See the following docs to learn about authentication methods.
+
+- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Authenticate with Azure applications](./signalr-howto-authorize-application.md)
+- [Authenticate with managed identities](./signalr-howto-authorize-managed-identity.md)
azure-signalr Signalr Concept Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authorize-azure-active-directory.md
Authorizing requests against SignalR with Azure AD provides superior security an
<a id="security-principal"></a> *[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+> [!IMPORTANT]
+> Disabling local authentication has the following consequences.
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with access keys will no longer be valid.
+ ## Overview of Azure AD for SignalR When a security principal attempts to access a SignalR resource, the request must be authorized. With Azure AD, access to a resource requires 2 steps.
To learn more about roles and role assignments, see:
To learn how to create custom roles, see: -- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
+- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
+
+To learn how to use only Azure AD authentication, see
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Authorize Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md
The first step is to register an Azure application.
2. Under **Manage** section, select **App registrations**. 3. Select **New registration**.
- ![Screenshot of registering an application](./media/authenticate/register-an-application.png)
+ ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Select **Register** to confirm the register. Once you have your application registered, you can find the **Application (client) ID** and **Directory (tenant) ID** under its Overview page. These GUIDs can be useful in the following steps.
-![Screenshot of an application](./media/authenticate/application-overview.png)
+![Screenshot of an application.](./media/signalr-howto-authorize-application/application-overview.png)
To learn more about registering an application, see - [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, select **New client secret**.
-![Screenshot of creating a client secret](./media/authenticate/new-client-secret.png)
+![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expire time**. 1. Copy the value of the **client secret** and then paste it in a secure location. > [!NOTE]
The application requires a client secret to prove its identity when requesting a
You can also upload a certificate instead of creating a client secret.
-![Screenshot of uploading a certification](./media/authenticate/upload-certificate.png)
+![Screenshot of uploading a certificate.](./media/signalr-howto-authorize-application/upload-certificate.png)
To learn more about adding credentials, see
On Azure portal, add settings as follows:
See the following related articles: - [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md) - [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
This example shows you how to configure `System-assigned managed identity` on a
1. Open [Azure portal](https://portal.azure.com/), Search for and select a Virtual Machine. 1. Under **Settings** section, select **Identity**. 1. On the **System assigned** tab, toggle the **Status** to **On**.
- ![Screenshot of an application](./media/authenticate/identity-virtual-machine.png)
+ ![Screenshot of an application.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png)
1. Select the **Save** button to confirm the change.
If you want to use user-assigned identity, you need to assign `clientId`in addit
See the following related articles: - [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md) - [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
When you run the `func new` command from the root directory of the project, the
} ```
-1. Edit *index/\__init\__.py* and replace the contents with the following code:
+1. Edit *index/index.js* and replace the contents with the following code:
```javascript var fs = require('fs').promises
The client interface for this app is a web page. The `index` function reads HTML
### Add the SignalR Service connection string to the function app settings
-=======
-1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
+
+Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
```bash func settings add AzureWebJobsStorage "<storage-connection-string>" ```
-4. You're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
--
-The last step is to set the SignalR Service connection string in Azure Function app settings.
+You're almost done now. The last step is to set the SignalR Service connection string in Azure Function app settings.
1. In the Azure portal, go to the SignalR instance you deployed earlier. 1. Select **Keys** to view the connection strings for the SignalR Service instance.
The last step is to set the SignalR Service connection string in Azure Function
### Run the Azure Function app locally - Start the Azurite storage emulator: ```bash
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
+
+ Title: Azure Video Indexer emotions detection overview
+
+description: This article gives an overview of Azure Video Indexer emotions detection.
++++ Last updated : 06/15/2022+++
+# Emotions detection
+
+Emotions detection is an Azure Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence can be labeled as "Anger", "Fear", "Joy", "Neutral", or "Sad". The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, such as sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
+
+The model doesn't have context for the input data, which can affect its accuracy. To increase accuracy, it's recommended that the input data be in a clear and unambiguous format.
+
+## Prerequisites
+
+Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+
+## General principles
+
+There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying emotions detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+When working on the website, the insights are displayed in the **Insights** tab. They can also be generated as a categorized list in a JSON file that includes the ID, type, and a list of instances in which each emotion appeared, with their time and confidence.
+
+To display the instances in a JSON file, do the following:
+
+1. Select **Download** -> **Insights (JSON)**.
+1. Copy the text and paste it into an online JSON viewer.
+
+```json
+"emotions": [
+ {
+ "id": 1,
+ "type": "Sad",
+ "instances": [
+ {
+ "confidence": 0.5518,
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:05.75",
+ "start": "0:00:00",
+ "end": "0:00:05.75"
+ },
+
+```
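Once the insights are saved locally, a quick way to inspect them is with `jq`. This is a minimal sketch that assumes the file is named insights.json and that the `emotions` array sits at the top level, as in the snippet above; it keeps only instances above a 0.7 confidence threshold.

```bash
# List each detected emotion type with only its high-confidence instances.
jq '[.emotions[] | {type, instances: [.instances[] | select(.confidence > 0.7)]}]' insights.json
```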
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
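For example, a Get Video Index call can be made with `curl`. This is a sketch only; the location, account ID, video ID, and access token are placeholders, and you should confirm the exact route and parameters in the developer portal.

```bash
# Download the full insights JSON (including the emotions section) for a video.
curl "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>" \
  -o insights.json
```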
+
+> [!NOTE]
+> Emotions detection is language independent; however, if the transcript isn't in English, it's first translated to English and only then is the model applied. This may reduce the accuracy of emotions detection for non-English languages.
+
+## Emotions detection components
+
+During the emotions detection procedure, the transcript of the video is processed, as follows:
+
+|Component |Definition |
+|||
+|Source language |The user uploads the source file for indexing. |
+|Transcription API |The audio file is sent to Cognitive Services and the translated transcribed output is returned. If a language has been specified, it is processed. |
+|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there is no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.|
+|Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score. |
+
+## Example use cases
+
+* Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
+* Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
+
+## Considerations and limitations when choosing a use case
+
+Below are some considerations to keep in mind when using emotions detection:
+
+* When uploading a file always use high quality audio and video content.
+
+When used responsibly and carefully, emotions detection is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+* Always respect an individual’s right to privacy, and only ingest media for lawful and justifiable purposes.
+* Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.
+* Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+* When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+* Always seek legal advice when using media from unknown sources.
+* Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and has adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+* Provide a feedback channel that allows users and individuals to report issues with the service.
+* Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+* Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+* Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+View some other Azure Video Insights:
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, Translation & Language identification](transcription-translation-lid.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
Previously updated : 06/15/2022 Last updated : 04/17/2023
This article discusses faces detection and the key considerations for making use
|Term|Definition| |||
-|InsightΓÇ» |The information and knowledge derived from the processing and analysis of video and audio files that generate different types of insights and can include detected objects, people, faces, animated characters, keyframes and translations or transcriptions. |
+|Insight |The information and knowledge derived from the processing and analysis of video and audio files that generate different types of insights and can include detected objects, people, faces, keyframes and translations or transcriptions. |
|Face recognition  |The analysis of images to identify the faces that appear in the images. This process is implemented via the Azure Cognitive Services Face API. | |Template |Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual’s template. The enrollment or probe images aren't stored by Face API and the original images can't be reconstructed based on a template. Template quality is a key determinant on the accuracy of your results. | |Enrollment |The process of enrolling images of individuals for template creation so they can be recognized. When a person is enrolled to a verification system used for authentication, their template is also associated with a primary identifier2 that is used to determine which template to compare with the probe template. High-quality images and images representing natural variations in how a person looks (for instance wearing glasses, not wearing glasses) generate high-quality enrollment templates. |
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
Previously updated : 07/07/2022 Last updated : 04/06/2023
During the observed people tracking and matched faces procedure, images in a med
Below are some considerations to keep in mind when using observed people and matched faces. -- When uploading a file always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 and more frames might delay the AI result. People and faces in videos recorded by cameras that are high-mounted, down-angled or with a wide field of view (FOV) may have fewer pixels that may result in lower accuracy of the generated insights. -- Typically, small people or objects under 200 pixels and people who are seated may not be detected. People wearing similar clothes or uniforms might be detected as being the same person and will be given the same ID number. People or objects that are obstructed may not be detected. Tracks of people with front and back poses may be split into different instances. -- An observed person must first be detected and appear in the people category before they're matched. Tracks are optimized to handle observed people who frequently appear in the front. Obstructions like overlapping people or faces may cause mismatches between matched people and observed people. Mismatching may occur when different people appear in the same relative spatial position in the frame within a short period.
+### Limitations of observed people tracing
+
+It's important to note the limitations of observed people tracing, to avoid or mitigate the effects of false negatives (missed detections) and limited detail.
+
+* People are generally not detected if they appear small (minimum person height is 100 pixels).
+* Maximum frame size is FHD.
+* Low quality video (for example, dark lighting conditions) may impact the detection results.
+* The recommended frame rate is at least 30 FPS.
+* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the highest detection confidence.
+* People with similar clothes (for example, people wearing uniforms or players in sports games) could be detected as the same person with the same ID number.
+* Obstruction: there may be errors where there are obstructions (scene/self or obstructions by other people).
+* Pose: The tracks may be split due to different poses (back/front).
+
+### Other considerations
When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
azure-video-indexer Observed People Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracing.md
Title: Trace observed people in a video
description: This topic gives an overview of a Trace observed people in a video concept. Previously updated : 03/27/2022 Last updated : 04/06/2023
The following JSON response illustrates what Video Indexer returns when tracing
## Limitations and assumptions
-It's important to note the limitations of Observed People Tracing, to avoid or mitigate the effects of false negatives (missed detections) and limited detail.
-
-* To optimize the detector results, use video footage from static cameras (although a moving camera or mixed scenes will also give results).
-* People are generally not detected if they appear small (minimum person height is 200 pixels).
-* Maximum frame size is HD
-* People are generally not detected if they're not standing or walking.
-* Low quality video (for example, dark lighting conditions) may impact the detection results.
-* The recommended frame rate ΓÇöat least 30 FPS.
-* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the detection highest confidence.
-* People with similar clothes (for example, people wear uniforms, players in sport games) could be detected as the same person with the same ID number.
-* Obstruction ΓÇô there maybe errors where there are obstructions (scene/self or obstructions by other people).
-* Pose: The tracks may be split due to different poses (back/front)
+For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
## Next steps
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 03/22/2023 Last updated : 04/06/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
[!INCLUDE [announcement](./includes/deprecation-announcement.md)]
+## April 2023
+
+### Observed people tracing improvements
+
+For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
+ ## March 2023 ### Support for storage behind firewall
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Microsoft will regularly apply important updates to the Azure VMware Solution fo
## April 2023
+**HCX Run commands**
 Introducing run commands for HCX on Azure VMware Solution. You can use these run commands to restart HCX cloud manager in your Azure VMware Solution private cloud. You can also scale HCX cloud manager using run commands. To learn how to use run commands for HCX, see [Use HCX Run commands](use-hcx-run-commands.md). ## February 2023
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
# What is Azure VMware Solution?
-Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and in Public Preview in Azure Government.The minimum initial deployment is three hosts, but more hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page.
+Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and in Public Preview in Azure Government. The minimum initial deployment is three hosts, but more hosts can be added, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page.
Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds to deliver business value.
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
## AV36P and AV52 node sizes available in Azure VMware Solution
- The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allow for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
**AV36P key highlights for Memory and Storage optimized Workloads:**
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-scale-private-cloud.md
You'll need an existing private cloud to complete this tutorial. If you haven't
:::image type="content" source="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="Screenshot showing how to add a cluster to an Azure VMware Solution private cloud." lightbox="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" border="true":::
-1. Use the slider to select the number of hosts and then select **Save**.
+2. Use the slider to select the number of hosts and then select **Save**.
:::image type="content" source="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="Screenshot showing how to configure a new cluster." lightbox="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" border="true":::
You'll need an existing private cloud to complete this tutorial. If you haven't
1. In your Azure VMware Solution private cloud, under **Manage**, select **Clusters**.
-1. Select the cluster you want to scale, select **More** (...), then select **Edit**.
+2. Select the cluster you want to scale, select **More** (...), then select **Edit**.
:::image type="content" source="media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" alt-text="Screenshot showing where to edit an existing cluster." lightbox="media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" border="true":::
-1. Use the slider to select the number of hosts and then select **Save**.
+3. Click **Add Host** to add a host to the cluster. Repeat until you reach the desired number of hosts, and then select **Save**.
+
+ :::image type="content" source="media/tutorial-scale-private-cloud/ss5-add-hosts-to-cluster.png" alt-text="Screenshot showing how to add additional hosts to an existing cluster." lightbox="media/tutorial-scale-private-cloud/ss5-add-hosts-to-cluster.png" border="true":::
The addition of hosts to the cluster begins.
+ >[!NOTE]
+ >The hosts will be added to the cluster in parallel.
+ ## Next steps If you require another Azure VMware Solution private cloud, [create another private cloud](tutorial-create-private-cloud.md) following the same networking prerequisites, cluster, and host limits.
azure-web-pubsub Concept Azure Ad Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md
# Authorize access to Web PubSub resources using Azure Active Directory
-Azure Web PubSub Service supports using Azure Active Directory (Azure AD) to authorize requests to Web PubSub resources. With Azure AD, you can use role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal will be authenticated by Azure AD, who will return an OAuth 2.0 token. The token can then be used to authorize a request against the Web PubSub resource.
-Authorizing requests against Web PubSub with Azure AD provides superior security and ease of use over Access Key authorization. Microsoft recommends using Azure AD authorization with your Web PubSub resources when possible to assure access with minimum required privileges.
+The Azure Web PubSub Service allows for the authorization of requests to Web PubSub resources by using Azure Active Directory (Azure AD).
+
+By utilizing role-based access control (RBAC) within Azure AD, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Azure AD authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request.
+
+Using Azure AD for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Azure AD authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges.
<a id="security-principal"></a> *[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.* ## Overview of Azure AD for Web PubSub
-When a security principal attempts to access a Web PubSub resource, the request must be authorized. With Azure AD, access to a resource requires two steps.
+Authentication is necessary to access a Web PubSub resource when using Azure AD. This authentication involves two steps:
-1. First, the security principal has to be authenticated by Azure, who will return an OAuth 2.0 token.
-2. Next, the token is passed as part of a request to the Web PubSub resource and used by the service to authorize access to the specified resource.
+1. First, Azure authenticates the security principal and issues an OAuth 2.0 token.
+2. Second, the token is added to the request to the Web PubSub resource. The Web PubSub service uses the token to check if the service principal has the access to the resource.
### Client-side authentication while using Azure AD
-When using Access Key, the key is shared between your negotiation server (or Function App) and the Web PubSub resource, which means the Web PubSub service could authenticate the client connection request with the shared key. However, there is no access key when using Azure AD to authorize.
+The negotiation server/Function App shares an access key with the Web PubSub resource, enabling the Web PubSub service to authenticate client connection requests using client tokens generated by the access key.
-To solve this problem, we provided a REST API for generating the client token that can be used to connect to the Azure Web PubSub service.
+However, to improve security, the access key is often disabled when using Azure AD.
-1. First, the negotiation server requires an **Aad Token** from Azure to authenticate itself.
-1. Second, the negotiation server calls Web PubSub Auth API with the **Aad Token** to get a **Client Token** and returns it to client.
-1. Finally, the client uses the **Client Token** to connect to the Azure Web PubSub service.
+To address this issue, we have developed a REST API that generates a client token. This token can be used to connect to the Azure Web PubSub service.
+
+To use this API, the negotiation server must first obtain an **Azure AD Token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Azure AD Token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service.
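For local testing, one way to obtain the Azure AD token is with the Azure CLI. This is a sketch; the resource URI used as the token audience is an assumption you should verify for your cloud environment.

```bash
# Request an Azure AD access token that the negotiation server presents to the Web PubSub Auth API.
# The resource (audience) URI below is assumed; verify the correct value for your environment.
az account get-access-token --resource https://webpubsub.azure.com --query accessToken --output tsv
```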
 We provide helper functions (for example, `GenerateClientAccessUri`) for supported programming languages.
Azure Active Directory (Azure AD) authorizes access rights to secured resources
### Resource scope
-You may have to determine the scope of access that the security principal should have before you assign any Azure RBAC role to a security principal. It is recommended to only grant the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
+Before assigning an Azure RBAC role to a security principal, it's important to identify the appropriate level of access that the principal should have. It's recommended to grant the role with the narrowest possible scope. Azure RBAC roles granted at a broader scope are inherited by the resources beneath them.
You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope:
You can scope access to Azure SignalR resources at the following levels, beginni
Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
- This is the most common used role for building a upstream server.
+ This is the most commonly used role for building an upstream server.
- `Web PubSub Service Reader` Use to grant read-only REST APIs permissions to Web PubSub resources.
- It is usually used when you'd like to write a monitoring tool that calling **ONLY** Web PubSub data-plane **READONLY** REST APIs.
+ It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs.
## Next steps
-To learn how to create an Azure application and use AAD Auth, see
+To learn how to create an Azure application and use Azure AD auth, see
- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)
-To learn how to configure a managed identity and use AAD Auth, see
+To learn how to configure a managed identity and use Azure AD auth, see
- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md) To learn more about roles and role assignments, see -- [What is Azure role-based access control](../role-based-access-control/overview.md).
+- [What is Azure role-based access control](../role-based-access-control/overview.md)
To learn how to create custom roles, see -- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
+- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
+
+To learn how to use only Azure AD authentication, see
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Authorize From Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md
The first step is to register an Azure application.
2. Under **Manage** section, select **App registrations**. 3. Click **New registration**.
- ![Screenshot of registering an application](./media/aad-authorization/register-an-application.png)
+ ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Click **Register** to confirm the register. Once you have your application registered, you can find the **Application (client) ID** and **Directory (tenant) ID** under its Overview page. These GUIDs can be useful in the following steps.
-![Screenshot of an application](./media/aad-authorization/application-overview.png)
+![Screenshot of an application.](./media/howto-authorize-from-application/application-overview.png)
To learn more about registering an application, see - [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, click **New client secret**.
-![Screenshot of creating a client secret](./media/aad-authorization/new-client-secret.png)
+![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expire time**. 1. Copy the value of the **client secret** and then paste it in a secure location. > [!NOTE]
The application requires a client secret to prove its identity when requesting a
You can also upload a certificate instead of creating a client secret.
-![Screenshot of uploading a certification](./media/aad-authorization/upload-certificate.png)
+![Screenshot of uploading a certificate.](./media/howto-authorize-from-application/upload-certificate.png)
To learn more about adding credentials, see
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
The following screenshot shows an example of the Access control (IAM) page for a Web PubSub resource.
- ![Screenshot of access control](./media/aad-authorization/access-control.png)
+ ![Screenshot of access control.](./media/howto-authorize-from-application/access-control.png)
1. Click **Add > Add role assignment**.
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
1. Click **Next**.
- ![Screenshot of adding role assignment](./media/aad-authorization/add-role-assignment.png)
+ ![Screenshot of adding role assignment.](./media/howto-authorize-from-application/add-role-assignment.png)
1. On the **Members** tab, under **Assign access to** section, select **User, group, or service principal**.
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
4. Click **Next**.
- ![Screenshot of assigning role to service principals](./media/aad-authorization/assign-role-to-service-principals.png)
+ ![Screenshot of assigning role to service principals.](./media/howto-authorize-from-application/assign-role-to-service-principals.png)
5. Click **Review + assign** to confirm the change.
We officially support 4 programming languages:
## Next steps See the following related articles:+ - [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md) - [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Authorize From Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-managed-identity.md
This is an example for configuring `System-assigned managed identity` on a `Virt
1. Open [Azure portal](https://portal.azure.com/), Search for and select a Virtual Machine. 1. Under **Settings** section, select **Identity**. 1. On the **System assigned** tab, toggle the **Status** to **On**.
- ![Screenshot of virtual machine - identity](./media/aad-authorization/identity-virtual-machine.png)
+ ![Screenshot of virtual machine - identity.](./media/howto-authorize-from-managed-identity/identity-virtual-machine.png)
1. Click the **Save** button to confirm the change. ### How to create user-assigned managed identities
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
The following screenshot shows an example of the Access control (IAM) page for a Web PubSub resource.
- ![Screenshot of access control](./media/aad-authorization/access-control.png)
+ ![Screenshot of access control.](./media/howto-authorize-from-managed-identity/access-control.png)
1. Click **Add > Add role assignment**.
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
1. Click **Next**.
- ![Screenshot of adding role assignment](./media/aad-authorization/add-role-assignment.png)
+ ![Screenshot of adding role assignment.](./media/howto-authorize-from-managed-identity/add-role-assignment.png)
1. On the **Members** tab, under **Assign access to** section, select **Managed identity**.
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
2. Click **Next**.
- ![Screenshot of assigning role to managed identities](./media/aad-authorization/assign-role-to-managed-identities.png)
+ ![Screenshot of assigning role to managed identities.](./media/howto-authorize-from-managed-identity/assign-role-to-managed-identities.png)
3. Click **Review + assign** to confirm the change.
We officially support 4 programming languages:
## Next steps See the following related articles:+ - [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)
+- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-disable-local-auth.md
+
+ Title: Disable local (access key) authentication with Azure Web PubSub Service
+description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure Web PubSub Service.
+++ Last updated : 03/31/2023++++
+# Disable local (access key) authentication with Azure Web PubSub Service
+
+There are two ways to authenticate to Azure Web PubSub Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure Web PubSub Service resources when possible.
+
+> [!IMPORTANT]
+> Disabling local authentication can have the following consequences.
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with the current set of access keys will no longer be usable.
+> - A signature will **NOT** be attached to the upstream request header. See *[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)* to learn how to validate requests by using an Azure AD token.
+
+## Use Azure portal
+
+In this section, you will learn how to use the Azure portal to disable local authentication.
+
+1. Navigate to your Web PubSub Service resource in the [Azure portal](https://portal.azure.com).
+
+2. In the **Settings** section of the menu sidebar, select the **Keys** tab.
+
+3. Select **Disabled** for local authentication.
+
+4. Click the **Save** button.
+
+![Screenshot of disabling local auth.](./media/howto-disable-local-auth/disable-local-auth.png)
+
+## Use Azure Resource Manager template
+
+You can disable local authentication by setting the `disableLocalAuth` property to `true`, as shown in the following Azure Resource Manager template.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/WebPubSub",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
+}
+```
+
+## Use Azure Policy
+
+You can assign the [Azure Web PubSub Service should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb66ab71c-582d-4330-adfd-ac162e78691e) Azure policy to an Azure subscription or a resource group to enforce disabling of local authentication for all Web PubSub resources in the subscription or the resource group.
+
+![Screenshot of disabling local auth policy.](./media/howto-disable-local-auth/disable-local-auth-policy.png)
+
+## Next steps
+
+See the following docs to learn about authentication methods.
+
+- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)
+- [Authenticate with Azure applications](./howto-authorize-from-application.md)
+- [Authenticate with managed identities](./howto-authorize-from-managed-identity.md)
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
AKS backup automatically triggers scheduled backup job that copies the cluster r
Once the backup configuration for an AKS cluster is complete, a backup instance is created in the Backup vault. You can view the backup instance for the cluster under the Backup section in the AKS portal. You can perform any backup-related operations, such as initiating restores, monitoring, and stopping protection, through the corresponding backup instance.
-AKS backup also integrates directly with Backup center to help you manage the protection of all your storage accounts centrally along with all other backup supported workloads. The Backup center is a single pane of glass for all your backup requirements, such as monitoring jobs and state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to back up and restore of data.
+AKS backup also integrates directly with Backup center to help you manage the protection of all your AKS clusters centrally along with all other backup-supported workloads. The Backup center is a single pane of glass for all your backup requirements, such as monitoring jobs and the state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to backup and restore of data.
AKS backup uses Managed Identity to access other Azure resources. To configure backup of an AKS cluster and to restore from a past backup, the Backup vault's Managed Identity requires a set of permissions on the AKS cluster and the snapshot resource group where snapshots are created and managed. Currently, the AKS cluster requires a set of permissions on the Snapshot Resource Group. Also, the Backup Extension creates a User Identity and assigns a set of permissions to access the storage account where backups are stored in a blob. You can grant permissions to the Managed Identity using Azure role-based access control (Azure RBAC). Managed Identity is a service principal of a special type that can be used only with Azure resources. Learn more about [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
backup Backup Azure Diagnostic Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostic-events.md
Title: Use diagnostics settings for Recovery Services vaults description: 'This article describes how to use the old and new diagnostics events for Azure Backup.' Previously updated : 10/30/2019 Last updated : 03/31/2023
You can configure diagnostics settings for a Recovery Services vault via the Azu
Azure Backup provides the following diagnostics events. Each event provides detailed data on a specific set of backup-related artifacts: * CoreAzureBackup
-* AddonAzureBackupAlerts
* AddonAzureBackupProtectedInstance * AddonAzureBackupJobs * AddonAzureBackupPolicy
You can now use Azure Backup to send vault diagnostics data to dedicated Log Ana
To send your vault diagnostics data to Log Analytics:
-1. Go to your vault, and select **Diagnostic Settings**. Select **+ Add Diagnostic Setting**.
-1. Give a name to the diagnostics setting.
-1. Select the **Send to Log Analytics** check box, and select a Log Analytics workspace.
-1. Select **Resource specific** in the toggle, and select the following six events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupAlerts**, **AddonAzureBackupPolicy**, **AddonAzureBackupStorage**, and **AddonAzureBackupProtectedInstance**.
+**Choose a vault type**:
+
+# [Recovery Services vaults](#tab/recovery-services-vaults)
+
+1. Go to your *vault*, and then select **Diagnostic Settings** > **+ Add Diagnostic Setting**.
+1. Provide a name for the *diagnostics setting*.
+1. Select the **Send to Log Analytics** checkbox, and then select a *Log Analytics workspace*.
+1. Select **Resource specific** in the toggle, and select the following five events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, **AddonAzureBackupStorage**, and **AddonAzureBackupProtectedInstance**.
1. Select **Save**.
- ![Resource-specific mode](./media/backup-azure-diagnostics-events/resource-specific-blade.png)
+
+
+# [Backup vaults](#tab/backup-vaults)
+
+1. Go to your *vault*, and then select **Diagnostic Settings** > **+ Add Diagnostic Setting**.
+2. Provide a name for the *diagnostics setting*.
+3. Select the **Send to Log Analytics** checkbox, and then select a *Log Analytics workspace*.
+4. Select the following events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, and **AddonAzureBackupProtectedInstance**.
+5. Select **Save**.
+
+
++ After data flows into the Log Analytics workspace, dedicated tables for each of these events are created in your workspace. You can query any of these tables directly. You can also perform joins or unions between these tables if necessary. > [!IMPORTANT]
-> The six events, namely, CoreAzureBackup, AddonAzureBackupJobs, AddonAzureBackupAlerts, AddonAzureBackupPolicy, AddonAzureBackupStorage, and AddonAzureBackupProtectedInstance, are supported *only* in the resource-specific mode in [Backup reports](./configure-reports.md). *If you try to send data for these six events in Azure diagnostics mode, no data will be visible in Backup reports.*
+> The six events, namely, *CoreAzureBackup*, *AddonAzureBackupJobs*, *AddonAzureBackupAlerts*, *AddonAzureBackupPolicy*, *AddonAzureBackupStorage*, and *AddonAzureBackupProtectedInstance*, are supported *only* in the resource-specific mode for Recovery Services vaults in [Backup reports](configure-reports.md). *If you try to send data for these six events in Azure diagnostics mode, no data will appear in Backup reports.*
+>
+> *AddonAzureBackupAlerts* refers to the alerts generated by the classic alerts solution. Because the classic alerts solution is on the deprecation path in favor of Azure Monitor-based alerts, we recommend that you not select the *AddonAzureBackupAlerts* event when configuring diagnostics settings. To send fired Azure Monitor-based alerts to a destination of your choice, create an alert processing rule and an action group that routes these alerts to a logic app, webhook, or runbook, which in turn sends them to the required destination.
+>
+> For Backup vaults, information on the front-end size and backup storage consumed is already included in the *CoreAzureBackup* and *AddonAzureBackupProtectedInstance* events (to aid query performance), so the *AddonAzureBackupStorage* event isn't applicable to Backup vaults; this avoids creating redundant tables.
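Once data flows into the resource-specific tables, they can be joined on shared keys such as *BackupItemUniqueId*. As an illustrative sketch (not part of the original article, using only fields documented in the [data model](./backup-azure-reports-data-model.md)), a query that lists failed backup jobs together with the friendly name of the backup item could look like this:

````Kusto
// Illustrative: latest failed backup jobs joined with backup item names
AddonAzureBackupJobs
| where JobOperation == "Backup"
| summarize arg_max(TimeGenerated, *) by JobUniqueId
| where JobStatus == "Failed"
| join kind=inner (
    CoreAzureBackup
    | where OperationName == "BackupItem"
    | summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
    | project BackupItemUniqueId, ItemName = BackupItemFriendlyName
    ) on BackupItemUniqueId
| project TimeGenerated, ItemName, JobStatus, JobFailureCode
````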
## Legacy event
-Traditionally, all backup-related diagnostics data for a vault was contained in a single event called AzureBackupReport. The six events described here are, in essence, a decomposition of all the data contained in AzureBackupReport.
+Traditionally, for Recovery Services vaults, all backup-related diagnostics data for a vault was contained in a single event called AzureBackupReport. The six events described here are, in essence, a decomposition of all the data contained in AzureBackupReport.
-Currently, we continue to support the AzureBackupReport event for backward compatibility in cases where users have existing custom queries on this event. Examples are custom log alerts and custom visualizations. *We recommend that you move to the [new events](#diagnostics-events-available-for-azure-backup-users) as early as possible*. The new events:
+Currently, we continue to support the *AzureBackupReport* event for Recovery Services vaults for backward compatibility in cases where you have existing custom queries on this event, such as custom log alerts and custom visualizations. *We recommend that you move to the [new events](#diagnostics-events-available-for-azure-backup-users) as early as possible*. The new events:
* Make the data much easier to work with in log queries. * Provide better discoverability of schemas and their structure.
Currently, we continue to support the AzureBackupReport event for backward compa
*The legacy event in Azure diagnostics mode will eventually be deprecated. Choosing the new events might help you to avoid complex migrations at a later date*. Our [reporting solution](./configure-reports.md) that uses Log Analytics will also stop supporting data from the legacy event.
+> [!NOTE]
+> For Backup vaults, all diagnostics events are sent to the resource-specific tables only, so you don't need to do any migration for Backup vaults. The following section is specific to Recovery Services vaults.
+ ### Steps to move to new diagnostics settings for a Log Analytics workspace 1. Identify which vaults are sending data to the Log Analytics workspaces by using the legacy event and the subscriptions they belong to. Run the following query in each of your workspaces to identify these vaults and subscriptions.
backup Backup Azure Monitoring Use Azuremonitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-use-azuremonitor.md
Title: Monitor Azure Backup with Azure Monitor description: Monitor Azure Backup workloads and create custom alerts by using Azure Monitor. Previously updated : 06/04/2019 Last updated : 04/18/2023 ms.assetid: 01169af5-7eb0-4cb0-bbdb-c58ac71bf48b
For more information, see [Create, view, and manage log alerts by using Azure Mo
### Sample Kusto queries
-The default graphs give you Kusto queries for basic scenarios on which you can build alerts. You can also modify the queries to get the data you want to be alerted on. Paste the following sample Kusto queries in the **Logs** page and then create alerts on the queries:
+The default graphs give you Kusto queries for basic scenarios on which you can build alerts. You can also modify the queries to fetch the data you want to be alerted on. Paste the following sample Kusto queries on the **Logs** page, and then create alerts on the queries.
+
+Recovery Services vaults and Backup vaults send data to a common set of tables that are listed in this article. However, there are slight differences in the schema for Recovery Services vaults and Backup vaults ([learn more](backup-azure-monitoring-built-in-monitor.md)). So, this section is split into multiple subsections that help you use the right queries, depending on which workload or vault types you want to query.
+
+#### Queries common across Recovery Services vaults and Backup vaults
- All successful backup jobs
The default graphs give you Kusto queries for basic scenarios on which you can b
| where JobStatus=="Failed" ````
+#### Queries specific to Recovery Services vault workloads
+ - All successful Azure VM backup jobs ````Kusto
The default graphs give you Kusto queries for basic scenarios on which you can b
| sort by StorageConsumedInMBs desc ````
+#### Queries specific to Backup vault workloads
+
+- All successful Azure PostgreSQL backup jobs
+
+ ````Kusto
+ AddonAzureBackupJobs
+ | where JobOperation=="Backup"
+ | summarize arg_max(TimeGenerated,*) by JobUniqueId
+ | where DatasourceType == "Microsoft.DBforPostgreSQL/servers/databases"
+ | where JobStatus=="Completed"
+ ````
+
+- All successful Azure Disk restore jobs
+
+ ````Kusto
+ AddonAzureBackupJobs
+ | where JobOperation == "Restore"
+ | summarize arg_max(TimeGenerated,*) by JobUniqueId
+ | where DatasourceType == "Microsoft.Compute/disks"
+ | where JobStatus=="Completed"
+ ````
+
+- Backup Storage Consumed per Backup Item
+
+ ````Kusto
+ CoreAzureBackup
+ | where OperationName == "BackupItem"
+ | summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
+ | project BackupItemUniqueId, BackupItemFriendlyName, StorageConsumedInMBs
+ ````
+ ### Diagnostic data update frequency The diagnostic data from the vault is pumped to the Log Analytics workspace with some lag. Every event arrives at the Log Analytics workspace *20 to 30 minutes* after it's pushed from the Recovery Services vault. Here are further details about the lag:
backup Backup Azure Reports Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reports-data-model.md
Title: Data model for Azure Backup diagnostics events description: This data model is in reference to the Resource Specific Mode of sending diagnostic events to Log Analytics (LA). Previously updated : 10/19/2022 Last updated : 04/18/2023
# Data Model for Azure Backup Diagnostics Events > [!NOTE]
->
> For creating custom reporting views, it is recommended to use [system functions on Azure Monitor logs](backup-reports-system-functions.md) instead of working with the raw tables listed below.
+## Differences in schema for Recovery Services vaults and Backup vaults
+
+Recovery Services vaults and Backup vaults send data to a common set of tables that are listed in this article. However, there are slight differences in the schema for Recovery Services vaults and Backup vaults.
+
+- One of the main reasons for this difference is that for Backup vaults, the Azure Backup service 'flattens' the schemas to reduce the number of joins needed in queries, which improves query performance. For example, if you want to write a query that lists all Backup vault jobs along with the friendly names of the datasource and the vault, you can get all of this information from the AddonAzureBackupJobs table (without needing a join with CoreAzureBackup to get the datasource and vault names); a sketch of such a query appears below. Flattened schemas are currently supported only for Backup vaults and not yet for Recovery Services vaults.
+- Apart from the above, there are also certain scenarios that are currently applicable for Recovery Services vaults only (for example, fields related to DPM workloads). This also leads to some differences in the schema between Backup vaults and Recovery Services vaults.
+
+To understand which fields are specific to a particular vault type, and which fields are common across vault types, refer to the **Applicable Resource Types** column in the following sections. For more information on how to write queries on these tables for Recovery Services vaults and Backup vaults, see the [sample queries](./backup-azure-monitoring-use-azuremonitor.md#sample-kusto-queries).
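For example, the flattened Backup vault schema means a query like the following sketch (illustrative, not from the article; it assumes the AddonAzureBackupJobs table is populated by a Backup vault) can return the datasource and vault names without any join:

````Kusto
// Illustrative: flattened Backup vault schema - no join needed for datasource/vault names
AddonAzureBackupJobs
| summarize arg_max(TimeGenerated, *) by JobUniqueId
| where isnotempty(DatasourceFriendlyName)   // Backup vault rows only
| project JobUniqueId, JobOperation, JobStatus, DatasourceFriendlyName, VaultName
````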
+ ## CoreAzureBackup This table provides information about core backup entities, such as vaults and backup items.
-| **Field** | **Data Type** | **Description** |
-| | - | |
-| ResourceId | Text | Resource identifier for data being collected. For example, Recovery Services vault resource ID. |
-| OperationName | Text | This field represents the name of the current operation - BackupItem, BackupItemAssociation, or ProtectedContainer. |
-| Category | Text | This field represents the category of diagnostics data pushed to Azure Monitor logs. For example, CoreAzureBackup. |
-| AgentVersion | Text | Version number of Agent Backup or the Protection Agent (in the case of SC DPM and MABS) |
-| AzureBackupAgentVersion | Text | Version of the Azure Backup Agent on the Backup Management Server |
-| AzureDataCenter | Text | Data center where the vault is located |
-| BackupItemAppVersion | Text | Application version of the backup item |
-| BackupItemFriendlyName | Text | Friendly name of the backup item |
-| BackupItemName | Text | Name of the backup item |
-| BackupItemProtectionState | Text | Protection State of the Backup Item |
-| BackupItemFrontEndSize | Text | Front-end size (in MBs) of the backup item |
-| BackupItemType | Text | Type of backup item. For example: VM, FileFolder |
-| BackupItemUniqueId | Text | Unique identifier of the backup item |
-| BackupManagementServerType | Text | Type of the Backup Management Server, as in MABS, SC DPM |
-| BackupManagementServerUniqueId | Text | Field to uniquely identify the Backup Management Server |
-| BackupManagementType | Text | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
-| BackupManagementServerName | Text | Name of the Backup Management Server |
-| BackupManagementServerOSVersion | Text | OS version of the Backup Management Server |
-| BackupManagementServerVersion | Text | Version of the Backup Management Server |
-| LatestRecoveryPointLocation | Text | Location of the latest recovery point for the backup item |
-| LatestRecoveryPointTime | DateTime | Date time of the latest recovery point for the backup item |
-| OldestRecoveryPointLocation | Text | Location of the oldest recovery point for the backup item |
-| OldestRecoveryPointTime | DateTime | Date time of the latest recovery point for the backup item |
-| PolicyUniqueId | Text | Unique ID to identify the policy |
-| ProtectedContainerFriendlyName | Text | Friendly name of the protected server |
-| ProtectedContainerLocation | Text | Whether the Protected Container is located On-premises or in Azure |
-| ProtectedContainerName | Text | Name of the Protected Container |
-| ProtectedContainerOSType | Text | OS Type of the Protected Container |
-| ProtectedContainerOSVersion | Text | OS Version of the Protected Container |
-| ProtectedContainerProtectionState | Text | Protection State of the Protected Container |
-| ProtectedContainerType | Text | Whether the Protected Container is a server, or a container |
-| ProtectedContainerUniqueId | Text | Unique ID used to identify the protected container for everything except VMs backed up using DPM, MABS |
-| ProtectedContainerWorkloadType | Text | Type of the Protected Container backed up. For example, IaaSVMContainer |
-| ProtectionGroupName | Text | Name of the Protection Group the Backup Item is protected in, for SC DPM, and MABS, if applicable |
-| ResourceGroupName | Text | Resource group of the resource (for example, Recovery Services vault) for data being collected |
-| SchemaVersion | Text | This field denotes the current version of the schema. It is **V2** |
-| SecondaryBackupProtectionState | Text | Whether secondary protection is enabled for the backup item |
-| State | Text | State of the backup item object. For example, Active, Deleted |
-| StorageReplicationType | Text | Type of storage replication for the vault. For example, GeoRedundant |
-| SubscriptionId | Text | Subscription identifier of the resource (for example, Recovery Services vault) for which data is collected |
-| VaultName | Text | Name of the vault |
-| VaultTags | Text | Tags associated with the vault resource |
-| VaultUniqueId | Text | Unique Identifier of the vault |
-| SourceSystem | Text | Source system of the current data - Azure |
+| **Field** | **Data Type** | **Applicable Resource Types** | **Description** |
+| | - | | |
+| ResourceId | Text | Recovery Services vault, Backup vault | Resource identifier for data being collected. For example, Recovery Services vault resource ID. |
+| OperationName | Text | Recovery Services vault, Backup vault | This field represents the name of the current operation - BackupItem, BackupItemAssociation, or ProtectedContainer. |
+| Category | Text | Recovery Services vault, Backup vault | This field represents the category of diagnostics data pushed to Azure Monitor logs. For example, CoreAzureBackup. |
+| AgentVersion | Text | Recovery Services vault | Version number of Agent Backup or the Protection Agent (in the case of SC DPM and MABS) |
+| AzureBackupAgentVersion | Text | Recovery Services vault | Version of the Azure Backup Agent on the Backup Management Server |
+| AzureDataCenter | Text | Recovery Services vault, Backup vault | Data center where the vault is located |
+| BackupItemAppVersion | Text | Recovery Services vault | Application version of the backup item |
+| BackupItemFriendlyName | Text | Recovery Services vault, Backup vault | Friendly name of the backup item |
+| BackupItemName | Text | Recovery Services vault, Backup vault | Name of the backup item |
+| BackupItemProtectionState | Text | Recovery Services vault, Backup vault | Protection State of the Backup Item |
+| BackupItemFrontEndSize | Text | Recovery Services vault, Backup vault | Front-end size (in MBs) of the backup item |
+| BackupItemType | Text | Recovery Services vault, Backup vault | Type of backup item. For example: VM, FileFolder |
+| BackupItemUniqueId | Text | Recovery Services vault, Backup vault | Unique identifier of the backup item |
+| BackupManagementServerType | Text | Recovery Services vault | Type of the Backup Management Server, as in MABS, SC DPM |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Field to uniquely identify the Backup Management Server |
+| BackupManagementType | Text | Recovery Services vault | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
+| BackupManagementServerName | Text | Recovery Services vault | Name of the Backup Management Server |
+| BackupManagementServerOSVersion | Text | Recovery Services vault | OS version of the Backup Management Server |
+| BackupManagementServerVersion | Text | Recovery Services vault | Version of the Backup Management Server |
+| LatestRecoveryPointLocation | Text | Recovery Services vault | Location of the latest recovery point for the backup item |
+| LatestRecoveryPointTime | DateTime | Recovery Services vault | Date time of the latest recovery point for the backup item |
+| OldestRecoveryPointLocation | Text | Recovery Services vault | Location of the oldest recovery point for the backup item |
+| OldestRecoveryPointTime | DateTime | Recovery Services vault | Date time of the latest recovery point for the backup item |
+| PolicyUniqueId | Text | Recovery Services vault, Backup vault | Unique ID to identify the policy |
+| ProtectedContainerFriendlyName | Text | Recovery Services vault | Friendly name of the protected server |
+| ProtectedContainerLocation | Text | Recovery Services vault | Whether the Protected Container is located On-premises or in Azure |
+| ProtectedContainerName | Text | Recovery Services vault | Name of the Protected Container |
+| ProtectedContainerOSType | Text | Recovery Services vault | OS Type of the Protected Container |
+| ProtectedContainerOSVersion | Text | Recovery Services vault | OS Version of the Protected Container |
+| ProtectedContainerProtectionState | Text | Recovery Services vault | Protection State of the Protected Container |
+| ProtectedContainerType | Text | Recovery Services vault | Whether the Protected Container is a server, or a container |
+| ProtectedContainerUniqueId | Text | Recovery Services vault | Unique ID used to identify the protected container for everything except VMs backed up using DPM, MABS |
+| ProtectedContainerWorkloadType | Text | Recovery Services vault | Type of the Protected Container backed up. For example, IaaSVMContainer |
+| ProtectionGroupName | Text | Recovery Services vault | Name of the Protection Group the Backup Item is protected in, for SC DPM, and MABS, if applicable |
+| ResourceGroupName | Text | Recovery Services vault, Backup vault| Resource group of the resource (for example, Recovery Services vault) for data being collected |
+| SchemaVersion | Text | Recovery Services vault, Backup vault | This field denotes the current version of the schema. It is **V2** |
+| SecondaryBackupProtectionState | Text | Recovery Services vault | Whether secondary protection is enabled for the backup item |
+| State | Text | Recovery Services vault | State of the backup item object. For example, Active, Deleted |
+| StorageReplicationType | Text | Recovery Services vault, Backup vault | Type of storage replication for the vault. For example, GeoRedundant |
+| SubscriptionId | Text | Recovery Services vault, Backup vault | Subscription identifier of the resource (for example, Recovery Services vault) for which data is collected |
+| VaultName | Text | Recovery Services vault, Backup vault | Name of the vault |
+| VaultTags | Text | Recovery Services vault, Backup vault | Tags associated with the vault resource |
+| VaultUniqueId | Text | Recovery Services vault, Backup vault | Unique Identifier of the vault |
+| SourceSystem | Text | Recovery Services vault, Backup vault | Source system of the current data - Azure |
+| DatasourceSetFriendlyName | Text | Backup vault | Friendly name of the parent resource of the datasource (wherever applicable). For example, for an Azure PostgreSQL database, this field will contain the friendly name of the PostgreSQL server |
+| DatasourceSetResourceId | Text | Backup vault | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for an Azure PostgreSQL database, this field will contain the ARM ID of the PostgreSQL server |
+| DatasourceSetType | Text | Backup vault | Type of the DatasourceSet, for example, Microsoft.Compute/disks |
+| DatasourceFriendlyName | Text | Backup vault | Friendly name of the datasource being backed up |
+| DatasourceResourceId | Text | Backup vault | Azure Resource Manager (ARM) ID of the datasource being backed up |
+| DatasourceType | Text | Backup vault | Type of the datasource being backed up, for example, Microsoft.DBforPostgreSQL/servers/databases |
+| BillingGroupUniqueId | Text | Backup vault | Unique ID of the billing group associated with the backup item |
+| BillingGroupFriendlyName | Text | Backup vault | Friendly name of the billing group associated with the backup item |
+| DatasourceResourceGroupName | Text | Backup vault | Resource group of the datasource being backed up |
+| DatasourceSubscriptionId | Text | Backup vault | Subscription ID of the datasource being backed up |
+| BackupItemId | Text | Backup vault | Azure Resource Manager (ARM) ID of the backup item |
+| StorageConsumedInMBs | Text | Backup vault | Backup storage consumed by the backup item |
+| VaultType | Text | Backup vault | Type of vault. For Backup vaults, the value is Microsoft.DataProtection/backupVaults. For Recovery Services vaults, this field is currently empty |
+| PolicyName | Text | Backup vault | Name of the policy associated with the backup item |
+| PolicyId | Text | Backup vault | Azure Resource Manager (ARM) ID of the policy associated with the backup item |
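As a minimal sketch of how the **Applicable Resource Types** column guides a query (illustrative only, using fields from the table above), the Backup vault-specific rows in CoreAzureBackup can be isolated and projected as follows:

````Kusto
// Illustrative: latest state of Backup vault items using Backup vault-specific fields
CoreAzureBackup
| where OperationName == "BackupItem"
| summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
| where VaultType == "Microsoft.DataProtection/backupVaults"
| project BackupItemFriendlyName, DatasourceType, BackupItemProtectionState, StorageConsumedInMBs, VaultName
````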
+ ## AddonAzureBackupAlerts This table provides details about alert related fields.
-| **Field** | **Data Type** | **Description** |
-| :-- | - | |
-| ResourceId | Text | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
-| OperationName | Text | Name of the current operation. For example, Alert |
-| Category | Text | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupAlerts |
-| AlertCode | Text | Code to uniquely identify an alert type |
-| AlertConsolidationStatus | Text | Identify if the alert is a consolidated alert or not |
-| AlertOccurrenceDateTime | DateTime | Date and time when the alert was created |
-| AlertRaisedOn | Text | Type of entity the alert is raised on |
-| AlertSeverity | Text | Severity of the alert. For example, Critical |
-| AlertStatus | Text | Status of the alert. For example, Active |
-| AlertTimeToResolveInMinutes | Number | Time taken to resolve an alert. Blank for active alerts. |
-| AlertType | Text | Type of alert. For example, Backup |
-| AlertUniqueId | Text | Unique identifier of the generated alert |
-| BackupItemUniqueId | Text | Unique identifier of the backup item associated with the alert |
-| BackupManagementServerUniqueId | Text | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
-| BackupManagementType | Text | Provider type for server doing backup job, for example, IaaSVM, FileFolder |
-| CountOfAlertsConsolidated | Number | Number of alerts consolidated if it's a consolidated alert |
-| ProtectedContainerUniqueId | Text | Unique identifier of the protected server associated with the alert |
-| RecommendedAction | Text | Action recommended to resolve the alert |
-| SchemaVersion | Text | Current version of the schema, for example **V2** |
-| State | Text | Current state of the alert object, for example, Active, Deleted |
-| StorageUniqueId | Text | Unique ID used to identify the storage entity |
-| VaultUniqueId | Text | Unique ID used to identify the vault related to the alert |
-| SourceSystem | Text | Source system of the current data - Azure |
+> [!NOTE]
+> AddonAzureBackupAlerts refers to the alerts generated by the classic alerts solution. Because the classic alerts solution is on the deprecation path in favor of Azure Monitor-based alerts, we recommend that you not select the AddonAzureBackupAlerts event when configuring diagnostics settings. To send fired Azure Monitor-based alerts to a destination of your choice, create an alert processing rule and an action group that routes these alerts to a logic app, webhook, or runbook, which in turn sends them to the required destination.
+
+| **Field** | **Data Type** | **Applicable Resource Types** | **Description** |
+| -- | - | | |
+| ResourceId | Text | Recovery Services vault | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
+| OperationName | Text | Recovery Services vault | Name of the current operation. For example, Alert |
+| Category | Text | Recovery Services vault | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupAlerts |
+| AlertCode | Text | Recovery Services vault | Code to uniquely identify an alert type |
+| AlertConsolidationStatus | Text | Recovery Services vault | Identify if the alert is a consolidated alert or not |
+| AlertOccurrenceDateTime | DateTime | Recovery Services vault | Date and time when the alert was created |
+| AlertRaisedOn | Text | Recovery Services vault | Type of entity the alert is raised on |
+| AlertSeverity | Text | Recovery Services vault | Severity of the alert. For example, Critical |
+| AlertStatus | Text | Recovery Services vault | Status of the alert. For example, Active |
+| AlertTimeToResolveInMinutes | Number | Recovery Services vault | Time taken to resolve an alert. Blank for active alerts. |
+| AlertType | Text | Recovery Services vault | Type of alert. For example, Backup |
+| AlertUniqueId | Text | Recovery Services vault | Unique identifier of the generated alert |
+| BackupItemUniqueId | Text | Recovery Services vault | Unique identifier of the backup item associated with the alert |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
+| BackupManagementType | Text | Recovery Services vault | Provider type for server doing backup job, for example, IaaSVM, FileFolder |
+| CountOfAlertsConsolidated | Number | Recovery Services vault | Number of alerts consolidated if it's a consolidated alert |
+| ProtectedContainerUniqueId | Text | Recovery Services vault | Unique identifier of the protected server associated with the alert |
+| RecommendedAction | Text | Recovery Services vault | Action recommended to resolve the alert |
+| SchemaVersion | Text | Recovery Services vault | Current version of the schema, for example **V2** |
+| State | Text | Recovery Services vault | Current state of the alert object, for example, Active, Deleted |
+| StorageUniqueId | Text | Recovery Services vault | Unique ID used to identify the storage entity |
+| VaultUniqueId | Text | Recovery Services vault | Unique ID used to identify the vault related to the alert |
+| SourceSystem | Text | Recovery Services vault | Source system of the current data - Azure |
## AddonAzureBackupProtectedInstance This table provides basic protected instances-related fields.
-| **Field** | **Data Type** | **Description** |
-| | - | |
-| ResourceId | Text | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
-| OperationName | Text | Name of the operation, for example ProtectedInstance |
-| Category | Text | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupProtectedInstance |
-| BackupItemUniqueId | Text | Unique ID of the backup item |
-| BackupManagementServerUniqueId | Text | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
-| BackupManagementType | Text | Provider type for server doing backup job, for example, IaaSVM, FileFolder |
-| ProtectedContainerUniqueId | Text | Unique ID to identify the protected container the job is run on |
-| ProtectedInstanceCount | Text | Count of Protected Instances for the associated backup item or protected container on that date-time |
-| SchemaVersion | Text | Current version of the schema, for example **V2** |
-| State | Text | State of the backup item object, for example, Active, Deleted |
-| VaultUniqueId | Text | Unique identifier of the protected vault associated with the protected instance |
-| SourceSystem | Text | Source system of the current data - Azure |
+| **Field** | **Data Type** | **Applicable Resource Types** | **Description** |
+| | - | -- | - |
+| ResourceId | Text | Recovery Services vault, Backup vault | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
+| OperationName | Text | Recovery Services vault, Backup vault | Name of the operation, for example ProtectedInstance |
+| Category | Text | Recovery Services vault, Backup vault | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupProtectedInstance |
+| BackupItemUniqueId | Text | Recovery Services vault | Unique ID of the backup item |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
+| BackupManagementType | Text | Recovery Services vault | Provider type for server doing backup job, for example, IaaSVM, FileFolder |
+| ProtectedContainerUniqueId | Text | Recovery Services vault | Unique ID to identify the protected container the job is run on |
+| ProtectedInstanceCount | Integer | Recovery Services vault, Backup vault | Count of Protected Instances for the associated billing entity on that date-time |
+| SchemaVersion | Text | Recovery Services vault, Backup vault | Current version of the schema, for example **V2** |
+| State | Text | Recovery Services vault | State of the backup item object, for example, Active, Deleted |
+| VaultUniqueId | Text | Recovery Services vault, Backup vault | Unique identifier of the protected vault associated with the protected instance |
+| SourceSystem | Text | Recovery Services vault, Backup vault | Source system of the current data - Azure |
+| BillingGroupFriendlyName | Text | Backup vault | Friendly name of the billing group (unit at which billing information is calculated) |
+| BillingGroupUniqueId | Text | Backup vault | Unique ID of the billing group (unit at which billing information is calculated) |
+| StorageConsumedInMBs | Double | Backup vault | Current value of backup storage consumed by the billing group |
+| VaultTags | Text | Backup vault | Tags of the vault associated with the billing group |
+| AzureDataCenter | Text | Backup vault | Location of the vault associated with the billing group |
+| VaultType | Text | Backup vault | Type of the vault associated with the billing group. For Backup vaults, the value is Microsoft.DataProtection/backupVaults. For Recovery Services vaults, this field is currently empty |
+| StorageReplicationType | Text | Backup vault | Type of storage replication for the vault. For example, LocallyRedundant |
+| SubscriptionId | Text | Backup vault | Subscription ID of the billing group |
+| ResourceGroupName | Text | Backup vault | Resource Group of the billing group |
+| VaultName | Text | Backup vault | Name of the vault associated with the billing group |
+| DatasourceType | Text | Backup vault | Type of the datasource being backed up, for example, Microsoft.DBforPostgreSQL/servers/databases |
+| BillingGroupType | Text | Backup vault | Type of the billing group, used to denote the unit at which billing information is calculated. For example, in the case of Azure PostgreSQL backup, protected instances and storage consumed are calculated at the server (DatasourceSet) level so the value is DatasourceSet in this scenario |
+| SourceSizeInMBs | Double | Backup vault | Frontend size of the billing group |
+| BillingGroupResourceGroupName | Text | Backup vault | Resource group in which the billing group exists |
+ ## AddonAzureBackupJobs This table provides details about job-related fields.
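A hedged sketch of a query over this table (illustrative, not from the article), retrieving the latest protected instance count and storage consumed per billing group for Backup vaults:

````Kusto
// Illustrative: latest protected instance count per billing group (Backup vaults)
AddonAzureBackupProtectedInstance
| summarize arg_max(TimeGenerated, *) by BillingGroupUniqueId
| where isnotempty(BillingGroupFriendlyName)   // Backup vault rows only
| project BillingGroupFriendlyName, DatasourceType, ProtectedInstanceCount, StorageConsumedInMBs
````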
-| **Field** | **Data Type** | **Description** |
-| | - | |
-| ResourceId | Text | Resource identifier for data being collected. For example, Recovery Services vault resource ID |
-| OperationName | Text | This field represents name of the current operation - Job |
-| Category | Text | This field represents category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupJobs |
-| AdhocOrScheduledJob | Text | Field to specify if the job is Ad Hoc or Scheduled |
-| BackupItemUniqueId | Text | Unique ID used to identify the backup item related to the storage entity |
-| BackupManagementServerUniqueId | Text | Unique ID used to identify the backup management server related to the storage entity |
-| BackupManagementType | Text | Provider type for performing backup, for example, IaaSVM, FileFolder to which this job belongs to |
-| DataTransferredInMB | Number | Data transferred in MB for this job |
-| JobDurationInSecs | Number | Total job duration in seconds |
-| JobFailureCode | Text | Failure Code string because of which job failure happened |
-| JobOperation | Text | Operation for which job is run for example, Backup, Restore, Configure Backup |
-| JobOperationSubType | Text | Sub Type of the Job Operation. For example, 'Log', in the case of Log Backup Job |
-| JobStartDateTime | DateTime | Date and time when job started running |
-| JobStatus | Text | Status of the finished job, for example, Completed, Failed |
-| JobUniqueId | Text | Unique ID to identify the job |
-| ProtectedContainerUniqueId | Text | Unique identifier of the protected server associated with the job |
-| RecoveryJobDestination | Text | Destination of a recovery job, where the data is recovered |
-| RecoveryJobRPDateTime | DateTime | The date, time when the recovery point that's being recovered was created |
-| RecoveryJobLocation | Text | The location where the recovery point that's being recovered was stored |
-| RecoveryLocationType | Text | Type of the Recovery Location |
-| SchemaVersion | Text | Current version of the schema, for example **V2** |
-| State | Text | Current state of the job object, for example, Active, Deleted |
-| VaultUniqueId | Text | Unique identifier of the protected vault associated with the job |
-| SourceSystem | Text | Source system of the current data - Azure |
+| **Field** | **Data Type** | **Applicable Resource Types** | **Description** |
+| | - | | |
+| ResourceId | Text | Recovery Services vault, Backup vault | Resource identifier for data being collected. For example, Recovery Services vault resource ID |
+| OperationName | Text | Recovery Services vault, Backup vault | This field represents name of the current operation - Job |
+| Category | Text | Recovery Services vault, Backup vault | This field represents category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupJobs |
+| AdhocOrScheduledJob | Text | Recovery Services vault, Backup vault | Field to specify if the job is Ad Hoc or Scheduled |
+| BackupItemUniqueId | Text | Recovery Services vault, Backup vault | Unique ID used to identify the backup item related to the storage entity |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Unique ID used to identify the backup management server related to the storage entity |
+| BackupManagementType | Text | Recovery Services vault | Provider type for performing backup, for example, IaaS VM, File-Folder to which this job belongs. |
+| DataTransferredInMB | Number | Recovery Services vault | Data transferred in MB for this job |
+| JobDurationInSecs | Number | Recovery Services vault, Backup vault | Total job duration in seconds |
+| JobFailureCode | Text | Recovery Services vault, Backup vault | Failure Code string because of which job failure happened |
+| JobOperation | Text | Recovery Services vault, Backup vault | Operation for which the job is run, for example, Backup, Restore, Configure Backup |
+| JobOperationSubType | Text | Recovery Services vault, Backup vault | Sub Type of the Job Operation. For example, 'Log', in the case of Log Backup Job |
+| JobStartDateTime | DateTime | Recovery Services vault, Backup vault | Date and time when job started running |
+| JobStatus | Text | Recovery Services vault, Backup vault | Status of the finished job, for example, Completed, Failed |
+| JobUniqueId | Text | Recovery Services vault, Backup vault | Unique ID to identify the job |
+| ProtectedContainerUniqueId | Text | Recovery Services vault | Unique identifier of the protected server associated with the job |
+| RecoveryJobDestination | Text | Recovery Services vault | Destination of a recovery job, where the data is recovered |
+| RecoveryJobRPDateTime | DateTime | Recovery Services vault | The date, time when the recovery point that's being recovered was created |
+| RecoveryJobLocation | Text | Recovery Services vault, Backup vault | The location where the recovery point that's being recovered was stored |
+| RecoveryLocationType | Text | Recovery Services vault | Type of the Recovery Location |
+| SchemaVersion | Text | Recovery Services vault, Backup vault | Current version of the schema, for example **V2** |
+| State | Text | Recovery Services vault | Current state of the job object, for example, Active, Deleted |
+| VaultUniqueId | Text | Recovery Services vault, Backup vault | Unique identifier of the protected vault associated with the job |
+| SourceSystem | Text | Recovery Services vault, Backup vault | Source system of the current data - Azure |
+| DatasourceSetFriendlyName | Text | Backup vault | Friendly name of the parent resource of the datasource (wherever applicable). For example, for an Azure PostgreSQL database, this field contains the friendly name of the PostgreSQL server |
+| DatasourceSetResourceId | Text | Backup vault | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for an Azure PostgreSQL database, this field will contain the ARM ID of the PostgreSQL server |
+| DatasourceSetType | Text | Backup vault | Type of the datasource being backed up, for example, Microsoft.DBforPostgreSQL/servers/databases |
+| DatasourceResourceId | Text | Backup vault | Azure Resource Manager (ARM) ID of the datasource being backed up |
+| DatasourceType | Text | Backup vault | Type of the datasource being backed up, for example, Microsoft.DBforPostgreSQL/servers/databases |
+| DatasourceFriendlyName | Text | Backup vault | Friendly name of the datasource being backed up |
+| SubscriptionId | Text | Backup vault | Subscription ID of the vault |
+| ResourceGroupName | Text | Backup vault | Resource Group of the vault |
+| VaultName | Text | Backup vault | Name of the vault |
+| VaultTags | Text | Backup vault | Tags of the vault |
+| StorageReplicationType | Text | Backup vault | Type of storage replication for the vault. For example, GeoRedundant |
+| AzureDataCenter | Text | Backup vault | Location of the vault |
+| BackupItemId | Text | Backup vault | Azure Resource Manager (ARM) ID of the backup item associated with the job |
+| BackupItemFriendlyName | Text | Backup vault | Friendly name of the backup item associated with the job |
## AddonAzureBackupPolicy This table provides details about policy-related fields.
-| **Field** | **Data Type** | **Description** |
-| - | -- | |
-| ResourceId | Text | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
-| OperationName | Text | Name of the operation, for example, Policy or PolicyAssociation |
-| Category | Text | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupPolicy |
-| BackupDaysOfTheWeek | Text | Days of the week when backups have been scheduled |
-| BackupFrequency | Text | Frequency with which backups are run. For example, daily, weekly |
-| BackupManagementType | Text | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
-| BackupManagementServerUniqueId | Text | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
-| BackupTimes | Text | Date and time when backups are scheduled |
-| DailyRetentionDuration | Whole Number | Total retention duration in days for configured backups |
-| DailyRetentionTimes | Text | Date and time when daily retention was configured |
-| DiffBackupDaysOfTheWeek | Text | Days of the week for Differential backups for SQL in Azure VM Backup |
-| DiffBackupFormat | Text | Format for Differential backups for SQL in Azure VM backup |
-| DiffBackupRetentionDuration | Decimal Number | Retention duration for Differential backups for SQL in Azure VM Backup |
-| DiffBackupTime | Time | Time for Differential backups for SQL in Azure VM Backup |
-| LogBackupFrequency | Decimal Number | Frequency for Log backups for SQL |
-| LogBackupRetentionDuration | Decimal Number | Retention duration for Log backups for SQL in Azure VM Backup |
-| MonthlyRetentionDaysOfTheMonth | Text | Weeks of the month when monthly retention is configured. For example, First, Last |
-| MonthlyRetentionDaysOfTheWeek | Text | Days of the week selected for monthly retention |
-| MonthlyRetentionDuration | Text | Total retention duration in months for configured backups |
-| MonthlyRetentionFormat | Text | Type of configuration for monthly retention. For example, daily for day based, weekly for week based |
-| MonthlyRetentionTimes | Text | Date and time when monthly retention is configured |
-| MonthlyRetentionWeeksOfTheMonth | Text | Weeks of the month when monthly retention is configured. For example, First, Last |
-| PolicyName | Text | Name of the policy defined |
-| PolicyUniqueId | Text | Unique ID to identify the policy |
-| PolicyTimeZone | Text | Timezone in which the Policy Time Fields are specified in the logs |
-| RetentionDuration | Text | Retention duration for configured backups |
-| RetentionType | Text | Type of retention |
-| SchemaVersion | Text | This field denotes current version of the schema, it is **V2** |
-| State | Text | Current state of the policy object. For example, Active, Deleted |
-| SynchronisationFrequencyPerDay | Whole Number | Number of times in a day a file backup is synchronized for SC DPM and MABS |
-| VaultUniqueId | Text | Unique ID of the vault that this policy belongs to |
-| WeeklyRetentionDaysOfTheWeek | Text | Days of the week selected for weekly retention |
-| WeeklyRetentionDuration | Decimal Number | Total weekly retention duration in weeks for configured backups |
-| WeeklyRetentionTimes | Text | Date and time when weekly retention is configured |
-| YearlyRetentionDaysOfTheMonth | Text | Dates of the month selected for yearly retention |
-| YearlyRetentionDaysOfTheWeek | Text | Days of the week selected for yearly retention |
-| YearlyRetentionDuration | Decimal Number | Total retention duration in years for configured backups |
-| YearlyRetentionFormat | Text | Type of configuration for yearly retention, for example, daily for day based, weekly for week based |
-| YearlyRetentionMonthsOfTheYear | Text | Months of the year selected for yearly retention |
-| YearlyRetentionTimes | Text | Date and time when yearly retention is configured |
-| YearlyRetentionWeeksOfTheMonth | Text | Weeks of the month selected for yearly retention |
-| SourceSystem | Text | Source system of the current data - Azure |
+| **Field** | **Data Type** | **Applicable resource types** | **Description** |
+| - | -- | - | -- |
+| ResourceId | Text | Recovery Services vault, Backup vault | Unique identifier for the resource about which data is collected. For example, a Recovery Services vault resource ID |
+| OperationName | Text | Recovery Services vault, Backup vault | Name of the operation, for example, Policy or PolicyAssociation |
+| Category | Text | Recovery Services vault, Backup vault | Category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupPolicy |
+| BackupDaysOfTheWeek | Text | Recovery Services vault, Backup vault | Days of the week when backups have been scheduled |
+| BackupFrequency | Text | Recovery Services vault | Frequency with which backups are run. For example, daily, weekly |
+| BackupManagementType | Text | Recovery Services vault | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
+| BackupTimes | Text | Recovery Services vault | Date and time when backups are scheduled |
+| DailyRetentionDuration | Integer | Recovery Services vault | Total retention duration in days for configured backups |
+| DailyRetentionTimes | Text | Recovery Services vault | Date and time when daily retention was configured |
+| DiffBackupDaysOfTheWeek | Text | Recovery Services vault | Days of the week for Differential backups for SQL/HANA in Azure VM Backup |
+| DiffBackupFormat | Text | Recovery Services vault | Format for Differential backups for SQL/HANA in Azure VM backup |
+| DiffBackupRetentionDuration | Integer | Recovery Services vault | Retention duration for Differential backups for SQL/HANA in Azure VM Backup |
+| DiffBackupTime | Time | Recovery Services vault | Time for Differential backups for SQL/HANA in Azure VM Backup |
+| LogBackupFrequency | Integer | Recovery Services vault | Frequency for Log backups for SQL |
+| LogBackupRetentionDuration | Integer | Recovery Services vault | Retention duration for Log backups for SQL in Azure VM Backup |
+| MonthlyRetentionDaysOfTheMonth | Text | Recovery Services vault, Backup vault | Weeks of the month when monthly retention is configured. For example, First, Last |
+| MonthlyRetentionDaysOfTheWeek | Text | Recovery Services vault, Backup vault | Days of the week selected for monthly retention |
+| MonthlyRetentionDuration | Text | Recovery Services vault, Backup vault | Total retention duration in months for configured backups |
+| MonthlyRetentionFormat | Text | Recovery Services vault, Backup vault | Type of configuration for monthly retention. For example, daily for day based, weekly for week based |
+| MonthlyRetentionTimes | Text | Recovery Services vault, Backup vault | Date and time when monthly retention is configured |
+| MonthlyRetentionWeeksOfTheMonth | Text | Recovery Services vault, Backup vault | Weeks of the month when monthly retention is configured. For example, First, Last |
+| PolicyName | Text | Recovery Services vault, Backup vault | Name of the policy defined |
+| PolicyUniqueId | Text | Recovery Services vault, Backup vault | Unique ID to identify the policy |
+| PolicyTimeZone | Text | Recovery Services vault, Backup vault | Timezone in which the Policy Time Fields are specified in the logs |
+| RetentionDuration | Text | Recovery Services vault | Retention duration for configured backups |
+| RetentionType | Text | Recovery Services vault | Type of retention |
+| SchemaVersion | Text | Recovery Services vault, Backup vault | This field denotes current version of the schema, it is **V2** |
+| State | Text | Recovery Services vault | Current state of the policy object. For example, Active, Deleted |
+| SynchronisationFrequencyPerDay | Whole Number | Recovery Services vault | Number of times in a day a file backup is synchronized for SC DPM and MABS |
+| VaultUniqueId | Text | Recovery Services vault, Backup vault | Unique ID of the vault that this policy belongs to |
+| WeeklyRetentionDaysOfTheWeek | Text | Recovery Services vault, Backup vault | Days of the week selected for weekly retention |
+| WeeklyRetentionDuration | Decimal Number | Recovery Services vault, Backup vault | Total weekly retention duration in weeks for configured backups |
+| WeeklyRetentionTimes | Text | Recovery Services vault, Backup vault | Date and time when weekly retention is configured |
+| YearlyRetentionDaysOfTheMonth | Text | Recovery Services vault, Backup vault | Dates of the month selected for yearly retention |
+| YearlyRetentionDaysOfTheWeek | Text | Recovery Services vault, Backup vault | Days of the week selected for yearly retention |
+| YearlyRetentionDuration | Decimal Number | Recovery Services vault, Backup vault | Total retention duration in years for configured backups |
+| YearlyRetentionFormat | Text | Recovery Services vault, Backup vault | Type of configuration for yearly retention, for example, daily for day based, weekly for week based |
+| YearlyRetentionMonthsOfTheYear | Text | Recovery Services vault, Backup vault | Months of the year selected for yearly retention |
+| YearlyRetentionTimes | Text | Recovery Services vault, Backup vault | Date and time when yearly retention is configured |
+| YearlyRetentionWeeksOfTheMonth | Text | Recovery Services vault, Backup vault | Weeks of the month selected for yearly retention |
+| SourceSystem | Text | Recovery Services vault, Backup vault | Source system of the current data - Azure |
+| PolicySubType | Text | Recovery Services vault | Subtype of the policy, for example, Standard or Enhanced |
+| BackupIntervalInHours | Integer | Recovery Services vault, Backup vault | Interval of time between successive backup jobs. Applicable for Azure VM and Azure Disk backup |
+| ScheduleWindowDuration | Integer | Recovery Services vault | Duration of the daily window in which backups can be run. Applicable for enhanced policy for Azure VM backup |
+| ScheduleWindowStartTime | DateTime | Recovery Services vault | Start time of the daily window in which backups can be run. Applicable for enhanced policy for Azure VM backup |
+| FullBackupDaysOfTheWeek | String | Backup vault | Days of the week when full backup runs. Currently applicable for Azure PostgreSQL backup |
+| FullBackupFrequency | String | Backup vault | Frequency of full backup. Currently applicable for Azure PostgreSQL backup |
+| FullBackupTimes | String | Backup vault | Time of the day at which full backup is taken. Currently applicable for Azure PostgreSQL backup |
+| IncrementalBackupDaysOfTheWeek | String | Backup vault | Days of the week when incremental backup runs. Currently applicable for Azure Disk backup |
+| IncrementalBackupFrequency | String | Backup vault | Frequency of incremental backup. Currently applicable for Azure Disk backup |
+| IncrementalBackupTimes | String | Backup vault | Time of the day at which incremental backup is taken. Currently applicable for Azure Disk backup |
+| PolicyId | String | Backup vault | Azure Resource Manager (ARM) ID of the backup policy |
+| SnapshotTierDailyRetentionDuration | Integer | Backup vault | Retention duration in days for daily snapshots. Applicable for Azure Blob and Azure Disk backup |
+| SnapshotTierWeeklyRetentionDuration | Integer | Backup vault | Retention duration in weeks for weekly snapshots. Applicable for Azure Blob and Azure Disk backup |
+| SnapshotTierMonthlyRetentionDuration | Integer | Backup vault | Retention duration in months for monthly snapshots. Applicable for Azure Blob and Azure Disk backup |
+| SnapshotTierYearlyRetentionDuration | Integer | Backup vault | Retention duration in years for yearly snapshots. Applicable for Azure Blob and Azure Disk backup |
+| StandardTierDefaultRetentionDuration | Integer | Backup vault | Default retention duration in the standard tier in days. Applicable for Azure PostgreSQL backup |
+| SnapshotTierDefaultRetentionDuration | Integer | Backup vault | Default retention duration in the snapshot tier in days. Applicable for Azure Blob and Azure Disk backup |
+| DatasourceType | String | Backup vault | Datasource type of the backup policy |
+| VaultTags | String | Backup vault | Tags of the vault associated with the backup policy |
+| AzureDataCenter | String | Backup vault | Location of the vault associated with the backup policy |
+| VaultType | String | Backup vault | Type of the vault associated with the backup policy. For Backup vaults, the value is Microsoft.DataProtection/backupVaults. For Recovery Services vaults, this field is currently empty |
+| StorageReplicationType | String | Backup vault | Storage replication type of the vault associated with the backup policy |
+| SubscriptionId | String | Backup vault | Subscription ID of the vault associated with the backup policy |
+| ResourceGroupName | String | Backup vault | Resource group of the vault associated with the backup policy |
+| VaultName | String | Backup vault | Name of the vault associated with the backup policy |
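To see how these fields can be used, here's a hedged Kusto sketch that queries the resource-specific AddonAzureBackupPolicy table directly. It assumes resource-specific diagnostics are enabled and relies on the standard TimeGenerated column; the projected fields are illustrative.

```kusto
// Sketch: latest record for each backup policy, with a few schedule-related fields
AddonAzureBackupPolicy
| where OperationName == "Policy"
| summarize arg_max(TimeGenerated, *) by PolicyUniqueId   // keep only the most recent record per policy
| project PolicyName, PolicyUniqueId, PolicyTimeZone, BackupFrequency
```

Note that some of the projected fields (such as BackupFrequency) are populated only for Recovery Services vault policies, per the table above.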
## AddonAzureBackupStorage This table provides details about storage-related fields.
-| **Field** | **Data Type** | **Description** |
-| | - | |
-| ResourceId | Text | Resource identifier for data being collected. For example, Recovery Services vault resource ID |
-| OperationName | Text | This field represents name of the current operation - Storage or StorageAssociation |
-| Category | Text | This field represents category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupStorage |
-| BackupItemUniqueId | Text | Unique ID used to identify the backup item for VMs backed up using DPM, MABS |
-| BackupManagementServerUniqueId | Text | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
-| BackupManagementType | Text | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
-| PreferredWorkloadOnVolume | Text | Workload for which this volume is the preferred storage |
-| ProtectedContainerUniqueId | Text | Unique identifier of the protected container associated with the backup item |
-| SchemaVersion | Text | Version of the schema. For example, **V2** |
-| State | Text | State of the backup item object. For example, Active, Deleted |
-| StorageAllocatedInMBs | Number | Size of storage allocated by the corresponding backup item in the corresponding storage of type Disk |
-| StorageConsumedInMBs | Number | Size of storage consumed by the corresponding backup item in the corresponding storage |
-| StorageName | Text | Name of storage entity. For example, E:\ |
-| StorageTotalSizeInGBs | Text | Total size of storage, in GB, consumed by storage entity |
-| StorageType | Text | Type of Storage, for example Cloud, Volume, Disk |
-| StorageUniqueId | Text | Unique ID used to identify the storage entity |
-| VaultUniqueId | Text | Unique ID used to identify the vault related to the storage entity |
-| VolumeFriendlyName | Text | Friendly name of the storage volume |
-| SourceSystem | Text | Source system of the current data - Azure |
+| **Field** | **Data Type** | **Applicable Resource Types** | **Description** |
+| | - | -|-- |
+| ResourceId | Text | Recovery Services vault | Resource identifier for data being collected. For example, Recovery Services vault resource ID |
+| OperationName | Text | Recovery Services vault | This field represents the name of the current operation - Storage or StorageAssociation |
+| Category | Text | Recovery Services vault | This field represents the category of diagnostics data pushed to Azure Monitor logs - AddonAzureBackupStorage |
+| BackupItemUniqueId | Text | Recovery Services vault | Unique ID used to identify the backup item for VMs backed up using DPM, MABS |
+| BackupManagementServerUniqueId | Text | Recovery Services vault | Field to uniquely identify the Backup Management Server the Backup Item is protected through, if applicable |
+| BackupManagementType | Text | Recovery Services vault | Provider type for server doing backup job. For example, IaaSVM, FileFolder |
+| PreferredWorkloadOnVolume | Text | Recovery Services vault | Workload for which this volume is the preferred storage |
+| ProtectedContainerUniqueId | Text | Recovery Services vault | Unique identifier of the protected container associated with the backup item |
+| SchemaVersion | Text | Recovery Services vault | Version of the schema. For example, **V2** |
+| State | Text | Recovery Services vault | State of the backup item object. For example, Active, Deleted |
+| StorageAllocatedInMBs | Number | Recovery Services vault | Size of storage allocated by the corresponding backup item in the corresponding storage of type Disk |
+| StorageConsumedInMBs | Number | Recovery Services vault | Size of storage consumed by the corresponding backup item in the corresponding storage |
+| StorageName | Text | Recovery Services vault | Name of storage entity. For example, E:\ |
+| StorageTotalSizeInGBs | Text | Recovery Services vault | Total size of storage, in GB, consumed by storage entity |
+| StorageType | Text | Recovery Services vault | Type of Storage, for example Cloud, Volume, Disk |
+| StorageUniqueId | Text | Recovery Services vault | Unique ID used to identify the storage entity |
+| VaultUniqueId | Text | Recovery Services vault | Unique ID used to identify the vault related to the storage entity |
+| VolumeFriendlyName | Text | Recovery Services vault | Friendly name of the storage volume |
+| SourceSystem | Text | Recovery Services vault | Source system of the current data - Azure |
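As a quick illustration of these fields, here's a hedged Kusto sketch that reports the latest storage consumption per backup item (assuming resource-specific diagnostics are enabled; the one-day window is illustrative):

```kusto
// Sketch: most recent storage consumed (in MB) per backup item over the last day
AddonAzureBackupStorage
| where TimeGenerated > ago(1d)
| where OperationName == "Storage"
| summarize arg_max(TimeGenerated, StorageConsumedInMBs) by BackupItemUniqueId
```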
## Valid Operation Names for each table Each record in the above tables has an associated **Operation Name**. An Operation Name describes the type of record (and also indicates which fields in the table are populated for that record). Each table (category) supports one or more distinct Operation Names. Below is a summary of the supported Operation Names for each of the above tables.
+**Choose a vault type**:
+
+# [Recovery Services vaults](#tab/recovery-services-vaults)
+ | **Table Name / Category** | **Supported Operation Names** | **Description** | | - | |-- | | CoreAzureBackup | BackupItem | Represents a record containing all details of a given backup item, such as ID, name, type, etc. |
Each record in the above tables has an associated **Operation Name**. An Operati
| AddonAzureBackupPolicy | Policy | Represents a record containing all details of a backup and retention policy. For example, ID, name, retention settings, etc. | | AddonAzureBackupPolicy | PolicyAssociation | Represents a mapping between a backup item and the backup policy applied to it. |
+# [Backup vaults](#tab/backup-vaults)
+
+| **Table Name / Category** | **Supported Operation Names** | **Description** |
+| - | |-- |
+| CoreAzureBackup | BackupItem | Represents a record containing all details of a given backup item, such as ID, name, type, etc. |
+| CoreAzureBackup | Vault | Represents a record containing all details of a given vault. For example, ID, name, tags, location, etc. |
+| AddonAzureBackupJobs | Job | Represents a record containing all details of a given job. For example, job operation, start time, status etc. |
+| AddonAzureBackupProtectedInstance | ProtectedInstance | Represents a record containing the protected instance count for each container or backup item. For Azure VM backup, the protected instance count is available at the backup item level; for other workloads, it is available at the protected container level. |
+| AddonAzureBackupPolicy | Policy | Represents a record containing all details of a backup and retention policy. For example, ID, name, retention settings, etc. |
+ Often, you will need to perform joins between different tables as well as different sets of records that are part of the same table (differentiated by Operation Name) to get all the fields required for your analysis. Refer to the [sample queries](./backup-azure-monitoring-use-azuremonitor.md#sample-kusto-queries) to get started. ## Next steps
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
Once the script is run, the LVM partitions are mounted in the physical volume(s)
To list the volume group names: ```bash
-pvs -o +vguuid
+sudo pvs -o +vguuid
``` This command lists all physical volumes (including the ones present before running the script), their corresponding volume group names, and the volume groups' universally unique identifiers (UUIDs). A sample output of the command is shown below.
-```bash
+```output
PV VG Fmt Attr PSize PFree VG UUID /dev/sda4 rootvg lvm2 a-- 138.71g 113.71g EtBn0y-RlXA-pK8g-de2S-mq9K-9syx-B29OL6
The first column (PV) shows the physical volume, the subsequent columns show the
There are scenarios where volume group names can have two UUIDs after running the script. This means that the volume group names on the machine where the script is executed and in the backed-up VM are the same, so you need to rename the backed-up VM's volume groups. Take a look at the example below.
-```bash
+```output
PV VG Fmt Attr PSize PFree VG UUID /dev/sda4 rootvg lvm2 a-- 138.71g 113.71g EtBn0y-RlXA-pK8g-de2S-mq9K-9syx-B29OL6
The script output would have shown /dev/sdg, /dev/sdh, /dev/sdm2 as attached. So
Now we need to rename the volume group (VG) names for the script's volumes, for example: /dev/sdg, /dev/sdh, /dev/sdm2. To rename the volume groups, use the following commands: ```bash
-vgimportclone -n rootvg_new /dev/sdm2
-vgimportclone -n APPVg_2 /dev/sdg /dev/sdh
+sudo vgimportclone -n rootvg_new /dev/sdm2
+sudo vgimportclone -n APPVg_2 /dev/sdg /dev/sdh
``` Now we have all VG names with unique IDs.
Now we have all VG names with unique IDs.
Make sure that the volume groups corresponding to the script's volumes are active. The following command displays the active volume groups. Check whether the script's related volume groups are present in this list. ```bash
-vgdisplay -a
+sudo vgdisplay -a
``` Otherwise, activate the volume group by using the following command. ```bash
-#!/bin/bash
-vgchange ΓÇôa y <volume-group-name>
+sudo vgchange -a y <volume-group-name>
``` ##### Listing logical volumes within Volume groups
vgchange ΓÇôa y <volume-group-name>
Once we get the unique, active list of VGs related to the script, then the logical volumes present in those volume groups can be listed using the following command. ```bash
-#!/bin/bash
-lvdisplay <volume-group-name>
+sudo lvdisplay <volume-group-name>
``` This command displays the path of each logical volume as 'LV Path'.
This command displays the path of each logical volume as 'LV Path'.
To mount the logical volumes to the path of your choice: ```bash
-#!/bin/bash
-mount <LV path from the lvdisplay cmd results> </mountpath>
+sudo mount <LV path from the lvdisplay cmd results> </mountpath>
``` > [!WARNING]
mount <LV path from the lvdisplay cmd results> </mountpath>
The following command displays details about all RAID disks: ```bash
-#!/bin/bash
-mdadm ΓÇôdetail ΓÇôscan
+sudo mdadm --detail --scan
``` The relevant RAID disk is displayed as `/dev/mdm/<RAID array name in the protected VM>`
mdadm ΓÇôdetail ΓÇôscan
Use the mount command if the RAID disk has physical volumes: ```bash
-#!/bin/bash
-mount [RAID Disk Path] [/mountpath]
+sudo mount [RAID Disk Path] [/mountpath]
``` If the RAID disk has another LVM configured in it, then use the preceding procedure for LVM partitions but use the volume name in place of the RAID Disk name.
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
Most agent-related or extension-related failures for Linux VMs are caused by iss
If the process isn't running, restart it by using the following commands:
- - For Ubuntu: `service walinuxagent start`
- - For other distributions: `service waagent start`
+ - For Ubuntu/Debian:
+ ```bash
+ sudo systemctl restart walinuxagent
+ ```
+
+ - For other distributions:
+ ```bash
+ sudo systemctl restart waagent
+ ```
3. [Configure the auto restart agent](https://github.com/Azure/WALinuxAgent/wiki/Known-Issues#mitigate_agent_crash). 4. Run a new test backup. If the failure persists, collect the following logs from the VM:
backup Backup Azure Vm File Recovery Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vm-file-recovery-troubleshoot.md
This section provides steps to troubleshoot common issues you might experience w
1. Ensure you have the [required permissions to download the script](./backup-azure-restore-files-from-vm.md#select-recovery-point-who-can-generate-script). 1. Verify the connection to the Azure target IPs. Run one of the following commands from an elevated command prompt:
- `nslookup download.microsoft.com`
-
+ ```bash
+ nslookup download.microsoft.com
+ ```
or
- `ping download.microsoft.com`
-
+ ```bash
+ ping download.microsoft.com
+ ```
+
### The script downloads successfully, but fails to run When you run the Python script for Item Level Recovery (ILR) on SUSE Linux Enterprise Server 12 SP4, it fails with the error "iscsi_tcp module can't be loaded" or "iscsi_tcp_module not found".
You might see an "Exception caught while connecting to target" error message.
1. Ensure the machine where the script is run meets the [access requirements](./backup-azure-restore-files-from-vm.md#step-4-access-requirements-to-successfully-run-the-script). 1. Verify the connection to the Azure target IPs. Run one of the following commands from an elevated command prompt:
- `nslookup download.microsoft.com`
-
+ ```bash
+ nslookup download.microsoft.com
+ ```
or
- `ping download.microsoft.com`
+ ```bash
+ ping download.microsoft.com
+ ```
1. Ensure access to iSCSI outbound port 3260. 1. Check for a firewall or NSG blocking traffic to Azure target IPs or recovery service URLs. 1. Make sure your antivirus software isn't blocking the execution of the script.
To resolve this issue, check if the volume is encrypted with a third-party appli
1. Sign in to the backed-up VM and run this command:
- `lsblk -f`
-
+ ```bash
+ lsblk -f
+ ```
![Screenshot showing the results of the command to list block devices.](./media/backup-azure-restore-files-from-vm/disk-without-volume-5.png) 1. Verify the file system and encryption. If the volume is encrypted, file recovery isn't supported. Learn more at [Support matrix for Azure VM backup](./backup-support-matrix-iaas.md#support-for-file-level-restore).
Check if the source server has disk deduplication enabled. If it does, ensure th
## Next steps -- [Recover files and folders from Azure virtual machine backup](backup-azure-restore-files-from-vm.md)
+- [Recover files and folders from Azure virtual machine backup](backup-azure-restore-files-from-vm.md)
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-support-matrix.md
Title: Support matrix for Backup center description: This article summarizes the scenarios that Backup center supports for each workload type Previously updated : 12/08/2022 Last updated : 03/31/2023
The following table lists all supported scenarios:
| Actions | Execute on-demand backup for a backup instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) | | Actions | Stop backup for a backup instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) | | Actions | Execute cross-region restore job from Backup center | Azure Virtual Machine <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM | See the [cross-region restore](./backup-create-rs-vault.md#set-cross-region-restore) documentation. |
-| Insights | View Backup Reports | Azure Virtual Machine <br><br> SQL in Azure Virtual Machine <br><br> SAP HANA in Azure Virtual Machine <br><br> Azure Files <br><br> System Center Data Protection Manager <br><br> Azure Backup Agent (MARS) <br><br> Azure Backup Server (MABS) | See [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios). |
+| Insights | View Backup Reports | Azure Virtual Machine <br><br> SQL in Azure Virtual Machine <br><br> SAP HANA in Azure Virtual Machine <br><br> Azure Files <br><br> System Center Data Protection Manager <br><br> Azure Backup Agent (MARS) <br><br> Azure Backup Server (MABS) <br><br> Azure Blobs <br><br> Azure Disks <br><br> Azure Database for PostgreSQL Server | See [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios). |
| Governance | View and assign built-in and custom Azure Policies under category _Backup_. | N/A | N/A | | Governance | View datasources not configured for backup | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server | N/A |
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports description: Create automated tasks to receive periodic reports via email Previously updated : 04/06/2023 Last updated : 04/17/2023
To configure email tasks via Backup Reports, perform the following steps:
3. After you select **Submit** and **Confirm**, the logic app is created. The logic app and the associated API connections are created with the tag **UsedByBackupReports: true** for easy discoverability. You'll need to perform a one-time authorization step for the logic app to run successfully, as described in the section below.
+> [!NOTE]
+> Support for Backup vault workloads (Azure Database for PostgreSQL Server, Azure Blobs, Azure Disks) was added to the logic app templates in April 2023. So, if you deployed these logic apps before that date, you'll need to redeploy them by using the above steps to see data for Backup vault workloads in your email reports.
+ ## Authorize connections to Azure Monitor Logs and Office 365 The logic app uses the [azuremonitorlogs](/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](/connectors/office365connector/) connector for sending emails. You'll need to perform a one-time authorization for these two connectors.
To troubleshoot this issue:
* **Azure Monitor Logs Connector has not been authorized**: To fix this issue, follow the authorization steps as provided above. * **Error in the LA query**: If you have customized the logic app with your own queries, an error in any of the LA queries might be causing the logic app to fail. You can select the relevant step and view the error that's causing the query to run incorrectly.
-### Scenario 3: Error in authorizing O365 API connection
+### Scenario 3: Error in authorizing Microsoft 365 API connection
-When attempting to authorize the O365 API connection, you might see an error of the form _Test connection failed. Error 'REST API is not yet supported for this mailbox. This error can occur for sandbox (test) accounts or for accounts that are on a dedicated (on-premises) mail server._
+When attempting to authorize the Microsoft 365 API connection, you might see an error of the form _Test connection failed. Error 'REST API is not yet supported for this mailbox. This error can occur for sandbox (test) accounts or for accounts that are on a dedicated (on-premises) mail server._
This error can occur if the mailbox is on a dedicated Microsoft Exchange Server and isn't a valid Office 365 mailbox. [Learn more](/connectors/office365/#common-errors)
You can also directly update the ARM template, which is used for deploying the l
## Next steps
-[Learn more about Backup Reports](./configure-reports.md)
+[Learn more about Backup Reports](./configure-reports.md)
backup Backup Reports System Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-system-functions.md
Title: System functions on Azure Monitor Logs description: Write custom queries on Azure Monitor Logs using system functions Previously updated : 03/01/2021 Last updated : 04/18/2023 # System functions on Azure Monitor Logs
-Azure Backup provides a set of functions, called system functions or solution functions, that are available by default in your Log Analytics (LA) workspaces.
+Azure Backup provides a set of functions, called system functions or solution functions that are available by default in your Log Analytics (LA) workspaces.
These functions operate on data in the [raw Azure Backup tables](./backup-azure-reports-data-model.md) in LA and return formatted data that helps you easily retrieve information about all your backup-related entities, using simple queries. You can pass parameters to these functions to filter the data they return.
It's recommended to use system functions for querying your backup data in LA wor
## Benefits of using system functions
-* **Simpler queries**: Using functions helps you reduce the number of joins needed in your queries. By default, the functions return ΓÇÿflattenedΓÇÖ schemas, that incorporate all information pertaining to the entity (backup instance, job, vault, and so on) being queried. For example, if you need to get a list of successful backup jobs by backup item name and its associated container, a simple call to the **_AzureBackup_getJobs()** function will give you all of this information for each job. On the other hand, querying the raw tables directly would require you to perform multiple joins between [AddonAzureBackupJobs](./backup-azure-reports-data-model.md#addonazurebackupjobs) and [CoreAzureBackup](./backup-azure-reports-data-model.md#coreazurebackup) tables.
+* **Simpler queries**: Using functions helps you reduce the number of joins needed in your queries. By default, the functions return 'flattened' schemas that incorporate all information pertaining to the entity (backup instance, job, vault, and so on) being queried. For example, if you need to get a list of successful backup jobs by backup item name and its associated container, a simple call to the **_AzureBackup_getJobs()** function will give you all of this information for each job (see the sketch after this list). On the other hand, querying the raw tables directly would require you to perform multiple joins between [AddonAzureBackupJobs](./backup-azure-reports-data-model.md#addonazurebackupjobs) and [CoreAzureBackup](./backup-azure-reports-data-model.md#coreazurebackup) tables.
-* **Smoother transition from the legacy diagnostics event**: Using system functions helps you transition smoothly from the [legacy diagnostics event](./backup-azure-diagnostic-events.md#legacy-event) (AzureBackupReport in AzureDiagnostics mode) to the [resource-specific events](./backup-azure-diagnostic-events.md#diagnostics-events-available-for-azure-backup-users). All the system functions provided by Azure Backup allow you to specify a parameter that lets you choose whether the function should query data only from the resource-specific tables, or query data from both the legacy table and the resource-specific tables (with deduplication of records).
+* **Smoother transition from the legacy diagnostics event**: Using system functions helps you transition smoothly from the [legacy diagnostics event](./backup-azure-diagnostic-events.md#legacy-event) (AzureBackupReport in AzureDiagnostics mode) to the [resource-specific events](./backup-azure-diagnostic-events.md#diagnostics-events-available-for-azure-backup-users). All the system functions provided by Azure Backup allow you to specify a parameter that lets you choose whether the function should query data only from the resource-specific tables, or query data from both the legacy table and the resource-specific tables (with deduplication of records).
* If you have successfully migrated to the resource-specific tables, you can choose to exclude the legacy table from being queried by the function. * If you are currently in the process of migration and have some data in the legacy tables which you require for analysis, you can choose to include the legacy table. When the transition is complete, and you no longer need data from the legacy table, you can simply update the value of the parameter passed to the function in your queries, to exclude the legacy table. * If you are still using only the legacy table, the functions will still work if you choose to include the legacy table via the same parameter. However, it is recommended to [switch to the resource-specific tables](./backup-azure-diagnostic-events.md#steps-to-move-to-new-diagnostics-settings-for-a-log-analytics-workspace) at the earliest.
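For illustration, here's a minimal Kusto sketch of the first point. It assumes the function's first two positional parameters are RangeStart and RangeEnd (as documented later in this article); the date range and projected fields are purely illustrative.

```kusto
// Sketch: completed backup jobs in a one-week window, with item and timing details
_AzureBackup_GetJobs(datetime(2023-04-01), datetime(2023-04-07))
| where Status == "Completed"
| project BackupInstanceFriendlyName, DatasourceFriendlyName, StartTime, DurationInSecs
```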
This function returns the list of all Recovery Services vaults in your Azure env
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those vaults that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those vaults that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N |"2021-03-10 00:00:00"| DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those vaults that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"| String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those vaults that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus`| String |
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records across all vaults. | N |`vault1,vault2,vault3`| String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults"| String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the vault |
-| Id | Azure Resource Manager (ARM) ID of the vault |
-| Name | Name of the vault |
-| SubscriptionId | ID of the subscription in which the vault exists |
-| Location | Location in which the vault exists |
-| VaultStore_StorageReplicationType | Storage Replication Type associated with the vault |
-| Tags | Tags of the vault |
-| TimeGenerated | Timestamp of the record |
-| Type | Type of the vault, which is "Microsoft.RecoveryServices/vaults"|
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the vault | String |
+| Id | Azure Resource Manager (ARM) ID of the vault | String |
+| Name | Name of the vault | String |
+| SubscriptionId | ID of the subscription in which the vault exists | String |
+| Location | Location in which the vault exists | String |
+| VaultStore_StorageReplicationType | Storage Replication Type associated with the vault | String |
+| Tags | Tags of the vault | String |
+| TimeGenerated | Timestamp of the record | DateTime |
+| Type | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults"| String |
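For reference, a minimal sketch of invoking this function with its defaults (all parameters are optional); the subscription ID below is the same placeholder used in the parameter table:

```kusto
// Sketch: latest record for each vault, filtered to a single (placeholder) subscription
_AzureBackup_GetVaults()
| where SubscriptionId == "00000000-0000-0000-0000-000000000000"
| project Name, Location, Type, VaultStore_StorageReplicationType
```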
#### _AzureBackup_GetPolicies()
This function returns the list of backup policies that are being used in your Az
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with the RangeStart parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those policies that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those policies that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of policies pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of policies across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with the RangeStart parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N |"2021-03-10 00:00:00"| DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those policies that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"| String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those policies that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus`| String |
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of policies pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of policies across all vaults. | N |`vault1,vault2,vault3`| String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults"| String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the policy |
-| Id | Azure Resource Manager (ARM) ID of the policy |
-| Name | Name of the policy |
-| Backup Solution | Backup Solution that the policy is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| TimeGenerated | Timestamp of the record |
-| VaultUniqueId | Foreign key that refers to the vault associated with the policy |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the policy |
-| VaultName | Name of the vault associated with the policy |
-| VaultTags | Tags of the vault associated with the policy |
-| VaultLocation | Location of the vault associated with the policy |
-| VaultSubscriptionId | Subscription ID of the vault associated with the policy |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the policy |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| ExtendedProperties | Additional properties of the policy |
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the policy | String |
+| Id | Azure Resource Manager (ARM) ID of the policy | String |
+| Name | Name of the policy | String |
+| TimeZone | Timezone in which the policy is defined | String |
+| Backup Solution | Backup Solution that the policy is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| TimeGenerated | Timestamp of the record | DateTime |
+| VaultUniqueId | Foreign key that refers to the vault associated with the policy | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the policy | String |
+| VaultName | Name of the vault associated with the policy | String |
+| VaultTags | Tags of the vault associated with the policy | String |
+| VaultLocation | Location of the vault associated with the policy | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the policy | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the policy | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| ExtendedProperties | Additional properties of the policy | Dynamic |
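As a quick illustration, here's a sketch of calling this function with its defaults and filtering the output to Backup vault policies by using the returned fields above:

```kusto
// Sketch: policy records for Backup vaults only (all parameters left at their defaults)
_AzureBackup_GetPolicies()
| where VaultType == "Microsoft.DataProtection/backupVaults"
| project Name, VaultName, VaultLocation, TimeGenerated
```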
#### _AzureBackup_GetJobs()
This function returns a list of all backup and restore related jobs that were tr
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList | Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve jobs pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for jobs across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
-| JobOperationList | Use this parameter to filter the output of the function for a specific type of job. For example, Backup or Restore. By default, the value of this parameter is "*", which makes the function search for both Backup and Restore jobs. | N | "Backup" |
-| JobStatusList | Use this parameter to filter the output of the function for a specific job status. For example, Completed, Failed, and so on. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of status. | N | "Failed,CompletedWithWarnings" |
-| JobFailureCodeList | Use this parameter to filter the output of the function for a specific failure code. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of failure code. | N | "Success"
-| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
-| BackupInstanceName | Use this parameter to search for jobs on a particular backup instance by name. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
-| ExcludeLog | Use this parameter to exclude log jobs from being returned by the function (helps in query performance). By default, the value of this parameter is true, which makes the function exclude log jobs. | N | true
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00" | DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"| String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus`| String |
+| VaultList | Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve jobs pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for jobs across all vaults. | N |`vault1,vault2,vault3` | String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults"| String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
+| JobOperationList | Use this parameter to filter the output of the function for a specific type of job. For example, Backup or Restore. By default, the value of this parameter is "*", which makes the function search for both Backup and Restore jobs. | N | "Backup" | String |
+| JobStatusList | Use this parameter to filter the output of the function for a specific job status. For example, Completed, Failed, and so on. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of status. | N | `Failed,CompletedWithWarnings` | String |
+| JobFailureCodeList | Use this parameter to filter the output of the function for a specific failure code. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of failure code. | N | "Success" | String |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" | String |
+| BackupInstanceName | Use this parameter to search for jobs on a particular backup instance by name. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" | String |
+| ExcludeLog | Use this parameter to exclude log jobs from being returned by the function (helps in query performance). By default, the value of this parameter is true, which makes the function exclude log jobs. | N | true | Boolean |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the job |
-| OperationCategory | Category of the operation being performed. For example, Backup, Restore |
-| Operation | Details of the operation being performed. For example, Log (for log backup)|
-| Status | Status of the job. For example, Completed, Failed, CompletedWithWarnings |
-| ErrorTitle | Failure code of the job |
-| StartTime | Date and time at which the job started |
-| DurationInSecs | Duration of the job in seconds |
-| DataTransferredInMBs | Data transferred by the job in MBs |
-| RestoreJobRPDateTime | The date and time when the recovery point that's being recovered was created |
-| RestoreJobRPLocation | The location where the recovery point that's being recovered was stored |
-| BackupInstanceUniqueId | Foreign key that refers to the backup instance associated with the job |
-| BackupInstanceId | Azure Resource Manager (ARM) ID of the backup instance associated with the job |
-| BackupInstanceFriendlyName | Name of the backup instance associated with the job |
-| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource associated with the job. For example, Azure Resource Manager (ARM) ID of the VM |
-| DatasourceFriendlyName | Friendly name of the underlying datasource associated with the job |
-| DatasourceType | Type of the datasource associated with the job. For example "Microsoft.Compute/virtualMachines" |
-| BackupSolution | Backup Solution that the job is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
-| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the job |
-| VaultUniqueId | Foreign key that refers to the vault associated with the job |
-| VaultName | Name of the vault associated with the job |
-| VaultTags | Tags of the vault associated with the job |
-| VaultSubscriptionId | Subscription ID of the vault associated with the job |
-| VaultLocation | Location of the vault associated with the job |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the job |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| TimeGenerated | Timestamp of the record |
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the job | String |
+| OperationCategory | Category of the operation being performed. For example, Backup, Restore | String |
+| Operation | Details of the operation being performed. For example, Log (for log backup) | String |
+| Status | Status of the job. For example, Completed, Failed, CompletedWithWarnings | String |
+| ErrorTitle | Failure code of the job | String |
+| StartTime | Date and time at which the job started | DateTime |
+| DurationInSecs | Duration of the job in seconds | Double |
+| DataTransferredInMBs | Data transferred by the job in MBs. Currently, this field is only supported for Recovery Services vault workloads | Double |
+| RestoreJobRPDateTime | The date and time when the recovery point that's being recovered was created. Currently, this field is only supported for Recovery Services vault workloads | DateTime |
+| RestoreJobRPLocation | The location where the recovery point that's being recovered was stored | String |
+| BackupInstanceUniqueId | Foreign key that refers to the backup instance associated with the job | String |
+| BackupInstanceId | Azure Resource Manager (ARM) ID of the backup instance associated with the job | String |
+| BackupInstanceFriendlyName | Name of the backup instance associated with the job | String |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource associated with the job. For example, Azure Resource Manager (ARM) ID of the VM | String |
+| DatasourceFriendlyName | Friendly name of the underlying datasource associated with the job | String |
+| DatasourceType | Type of the datasource associated with the job. For example "Microsoft.Compute/virtualMachines" | String |
+| BackupSolution | Backup Solution that the job is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists | String |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the job | String |
+| VaultUniqueId | Foreign key that refers to the vault associated with the job | String |
+| VaultName | Name of the vault associated with the job | String |
+| VaultTags | Tags of the vault associated with the job | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the job | String |
+| VaultLocation | Location of the vault associated with the job | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the job | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| TimeGenerated | Timestamp of the record | DateTime |
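To illustrate how the jobs function can be consumed in a Log Analytics query, here's a minimal sketch that counts failed backup jobs by failure code and vault over a one-week window. It assumes RangeStart and RangeEnd are the first two positional parameters (as listed in the parameter table) and that omitted trailing parameters fall back to their documented defaults; adjust the dates and filters for your environment.

```kusto
// Sketch only: count failed backup jobs by failure code and vault over one week.
// Assumes RangeStart and RangeEnd are the first two positional parameters and that
// the remaining parameters use their documented defaults when omitted.
_AzureBackup_GetJobs(datetime(2023-04-01 00:00:00), datetime(2023-04-08 00:00:00))
| where OperationCategory == "Backup" and Status == "Failed"
| summarize FailedJobCount = count() by ErrorTitle, VaultName
| order by FailedJobCount desc
```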
#### _AzureBackup_GetBackupInstances()
This function returns the list of backup instances that are associated with your
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each backup instance. | N | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each backup instance. | N |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
-| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. Supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" |
-| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
-| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" |
-| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns which you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each backup instance. | N | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each backup instance. | N |"2021-03-10 00:00:00"| DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111" | String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus` | String |
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |`vault1,vault2,vault3` | String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults" | String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
+| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. For Recovery Services vault workloads, supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. For Backup vault workloads, supported values are "Protected", "ConfiguringProtection", "ConfiguringProtectionFailed", "UpdatingProtection", "ProtectionError", "ProtectionStopped" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" | String |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" | String |
+| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" | String |
+| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns which you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true | Boolean |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the backup instance |
-| Id | Azure Resource Manager (ARM) ID of the backup instance |
-| FriendlyName | Friendly name of the backup instance |
-| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending |
-| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance |
-| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance |
-| SourceSizeInMBs | Frontend size of the backup instance in MBs |
-| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier |
-| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance |
-| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| DatasourceType | Type of the datasource corresponding to the backup instance. For example "Microsoft.Compute/virtualMachines" |
-| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM |
-| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists |
-| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
-| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
-| PolicyName | Name of the policy associated with the backup instance |
-| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance |
-| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance |
-| VaultUniqueId | Foreign key which refers to the vault associated with the backup instance |
-| VaultName | Name of the vault associated with the backup instance |
-| VaultTags | Tags of the vault associated with the backup instance |
-| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance |
-| VaultLocation | Location of the vault associated with the backup instance |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| TimeGenerated | Timestamp of the record |
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the backup instance | String |
+| Id | Azure Resource Manager (ARM) ID of the backup instance | String |
+| FriendlyName | Friendly name of the backup instance | String |
+| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending | String |
+| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance. Currently, this field is only supported for Recovery Services vault workloads. | DateTime |
+| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance. Currently, this field is only supported for Recovery Services vault workloads. | DateTime |
+| SourceSizeInMBs | Frontend size of the backup instance in MBs | Double |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier | Double |
+| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance | String |
+| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| DatasourceType | Type of the datasource corresponding to the backup instance. For example "Microsoft.Compute/virtualMachines" | String |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM | String |
+| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists | String |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists | String |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM | String |
+| PolicyName | Name of the policy associated with the backup instance | String |
+| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance | String |
+| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance | String |
+| VaultUniqueId | Foreign key which refers to the vault associated with the backup instance | String |
+| VaultName | Name of the vault associated with the backup instance | String |
+| VaultTags | Tags of the vault associated with the backup instance | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance | String |
+| VaultLocation | Location of the vault associated with the backup instance | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| TimeGenerated | Timestamp of the record | DateTime |
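As a hedged illustration, the following sketch calls the function with every parameter left at its default, which (per the parameter table) returns only the latest record per backup instance, and then aggregates vault-tier storage consumption by backup solution. The no-argument call pattern is an assumption based on all parameters being optional; supply explicit values if your workspace requires them.

```kusto
// Sketch only: latest record per backup instance (all parameters at their defaults),
// aggregated to show cloud storage consumed per backup solution.
_AzureBackup_GetBackupInstances()
| summarize
    InstanceCount = dcount(UniqueId),
    TotalStorageInMBs = sum(VaultStore_StorageConsumptionInMBs)
    by BackupSolution
| order by TotalStorageInMBs desc
```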
#### _AzureBackup_GetBillingGroups()
This function returns a list of all backup-related billing entities (billing gro
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each billing group. | N | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each billing group. | N |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
-| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each billing group. | N | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each billing group. | N |"2021-03-10 00:00:00"| DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111" | String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus` | String |
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N | `vault1,vault2,vault3` | String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults" | String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
+| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" | String |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the billing group |
-| FriendlyName | Friendly name of the billing group |
-| Name | Name of the billing group |
-| Type | Type of billing group. For example, ProtectedContainer or BackupItem |
-| SourceSizeInMBs | Frontend size of the billing group in MBs |
-| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier |
-| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group |
-| VaultUniqueId | Foreign key which refers to the vault associated with the billing group |
-| VaultName | Name of the vault associated with the billing group |
-| VaultTags | Tags of the vault associated with the billing group |
-| VaultSubscriptionId | Subscription ID of the vault associated with the billing group |
-| VaultLocation | Location of the vault associated with the billing group |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| TimeGenerated | Timestamp of the record |
-| ExtendedProperties | Additional properties of the billing group |
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the billing group | String |
+| FriendlyName | Friendly name of the billing group | String |
+| Name | Name of the billing group | String |
+| Type | Type of billing group. For example, ProtectedContainer or BackupItem | String |
+| SourceSizeInMBs | Frontend size of the billing group in MBs | Double |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier | Double |
+| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group | String |
+| VaultUniqueId | Foreign key which refers to the vault associated with the billing group | String |
+| VaultName | Name of the vault associated with the billing group | String |
+| VaultTags | Tags of the vault associated with the billing group | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the billing group | String |
+| VaultLocation | Location of the vault associated with the billing group | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| TimeGenerated | Timestamp of the record | DateTime |
+| ExtendedProperties | Additional properties of the billing group | Dynamic |
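The billing-groups function lends itself to sizing queries. The sketch below, again assuming all parameters can be left at their defaults, lists the ten largest billing groups by frontend size along with their vault-tier storage consumption.

```kusto
// Sketch only: ten largest billing groups by frontend size, with storage consumption.
_AzureBackup_GetBillingGroups()
| project FriendlyName, BackupSolution, VaultName, SourceSizeInMBs, VaultStore_StorageConsumptionInMBs
| top 10 by SourceSizeInMBs desc
```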
### Trend Functions
#### _AzureBackup_GetBackupInstancesTrends()
-This function returns historical records for each backup instance, allowing you to view key daily, weekly and monthly trends related to backup instance count and storage consumption, at multiple levels of granularity.
+This function returns historical records for each backup instance, allowing you to view key daily, weekly and monthly trends related to the backup instance count and storage consumption, at multiple levels of granularity.
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
-| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. Supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" |
-| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
-| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" |
-| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns which you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true |
-| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per backup instance per day, allowing you to analyze daily trends of storage consumption and backup instance count. If the value of this parameter is "Weekly", the function returns a record per backup instance per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | - | | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00" | DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"| String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus` | String |
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |`vault1,vault2,vault3` | String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults" | String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
+| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. For Recovery Services vault workloads, supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. For Backup vault workloads, supported values are "Protected", "ConfiguringProtection", "ConfiguringProtectionFailed", "UpdatingProtection", "ProtectionError", "ProtectionStopped" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" | String |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" | String |
+| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" | String |
+| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns which you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true | Boolean |
+| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per backup instance per day, allowing you to analyze daily trends of storage consumption and backup instance count. If the value of this parameter is "Weekly", the function returns a record per backup instance per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" | String |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the backup instance |
-| Id | Azure Resource Manager (ARM) ID of the backup instance |
-| FriendlyName | Friendly name of the backup instance |
-| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending |
-| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance |
-| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance |
-| SourceSizeInMBs | Frontend size of the backup instance in MBs |
-| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier |
-| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance |
-| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| DatasourceType | Type of the datasource corresponding to the backup instance. For example "Microsoft.Compute/virtualMachines" |
-| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM |
-| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists |
-| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
-| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
-| PolicyName | Name of the policy associated with the backup instance |
-| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance |
-| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance |
-| VaultUniqueId | Foreign key which refers to the vault associated with the backup instance |
-| VaultName | Name of the vault associated with the backup instance |
-| VaultTags | Tags of the vault associated with the backup instance |
-| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance |
-| VaultLocation | Location of the vault associated with the backup instance |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| TimeGenerated | Timestamp of the record |
+| **Field Name** | **Description** | **Data type** |
+| -- | | |
+| UniqueId | Primary key denoting unique ID of the backup instance | String |
+| Id | Azure Resource Manager (ARM) ID of the backup instance | String |
+| FriendlyName | Friendly name of the backup instance | String |
+| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending | String |
+| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance. Currently, this field is only supported for Recovery Services vault workloads | DateTime |
+| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance. Currently, this field is only supported for Recovery Services vault workloads | DateTime |
+| SourceSizeInMBs | Frontend size of the backup instance in MBs | Double |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier | Double |
+| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance | String |
+| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| DatasourceType | Type of the datasource corresponding to the backup instance. For example "Microsoft.Compute/virtualMachines" | String |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM | String |
+| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists | String |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists | String |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM | String |
+| PolicyName | Name of the policy associated with the backup instance | String |
+| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance | String |
+| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance | String |
+| VaultUniqueId | Foreign key which refers to the vault associated with the backup instance | String |
+| VaultName | Name of the vault associated with the backup instance | String |
+| VaultTags | Tags of the vault associated with the backup instance | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance | String |
+| VaultLocation | Location of the vault associated with the backup instance | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| TimeGenerated | Timestamp of the record | DateTime |
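Because RangeStart and RangeEnd are required here and AggregationType sits at the end of the parameter list, a positional call has to spell out the intermediate parameters. The sketch below passes the documented default for each of them and requests weekly aggregation; the positional order is an assumption taken directly from the parameter table above.

```kusto
// Sketch only: weekly trend of backup instance count and storage per backup solution.
// Intermediate parameters are passed with their documented defaults; positional order
// is assumed to match the parameter table above.
_AzureBackup_GetBackupInstancesTrends(
    datetime(2023-01-01 00:00:00),  // RangeStart
    datetime(2023-03-31 00:00:00),  // RangeEnd
    "*",                            // VaultSubscriptionList
    "*",                            // VaultLocationList
    "*",                            // VaultList
    "*",                            // VaultTypeList
    true,                           // ExcludeLegacyEvent
    "*",                            // BackupSolutionList
    "*",                            // ProtectionInfoList
    "*",                            // DatasourceSetName
    "*",                            // BackupInstanceName
    true,                           // DisplayAllFields
    "Weekly")                       // AggregationType
| summarize
    InstanceCount = dcount(UniqueId),
    TotalStorageInMBs = sum(VaultStore_StorageConsumptionInMBs)
    by TimeGenerated, BackupSolution
| order by TimeGenerated asc
```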
#### _AzureBackup_GetBillingGroupsTrends()
This function returns historical records for each billing entity, allowing you t
**Parameters**
-| **Parameter Name** | **Description** | **Required?** | **Example value** |
-| -- | - | | -- |
-| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
-| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
-| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
-| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
-| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N |"vault1,vault2,vault3"|
-| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
-| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
-| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
-| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" |
-| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per billing group per day, allowing you to analyze daily trends of storage consumption and frontend size. If the value of this parameter is "Weekly", the function returns a record per backup instance per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" |
+| **Parameter Name** | **Description** | **Required?** | **Example value** | **Data type** |
+| -- | -- | -- | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" | DateTime |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00" | DateTime |
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111" | String |
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | `eastus,westus` | String |
+| VaultList | Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N |`vault1,vault2,vault3` | String |
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. By default, the value of this parameter is '*', which makes the function search for both Recovery Services vaults and Backup vaults. | N | "Microsoft.RecoveryServices/vaults" | String |
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true | Boolean |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM` as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server", "Azure Database for PostgreSQL Server Backup", "Azure Blob Backup", "Azure Disk Backup" or a comma-separated combination of any of these values). | N | `Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent` | String |
+| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" | String |
+| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per billing group per day, allowing you to analyze daily trends of storage consumption and frontend size. If the value of this parameter is "Weekly", the function returns a record per backup instance per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" | String |
**Returned Fields**
-| **Field Name** | **Description** |
-| -- | |
-| UniqueId | Primary key denoting unique ID of the billing group |
-| FriendlyName | Friendly name of the billing group |
-| Name | Name of the billing group |
-| Type | Type of billing group. For example, ProtectedContainer or BackupItem |
-| SourceSizeInMBs | Frontend size of the billing group in MBs |
-| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier |
-| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
-| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group |
-| VaultUniqueId | Foreign key which refers to the vault associated with the billing group |
-| VaultName | Name of the vault associated with the billing group |
-| VaultTags | Tags of the vault associated with the billing group |
-| VaultSubscriptionId | Subscription ID of the vault associated with the billing group |
-| VaultLocation | Location of the vault associated with the billing group |
-| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group |
-| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
-| TimeGenerated | Timestamp of the record |
-| ExtendedProperties | Additional properties of the billing group |
+| **Field Name** | **Description** | **Data type** |
+| -- | -- | -- |
+| UniqueId | Primary key denoting unique ID of the billing group | String |
+| FriendlyName | Friendly name of the billing group | String |
+| Name | Name of the billing group | String |
+| Type | Type of billing group. For example, ProtectedContainer or BackupItem | String |
+| SourceSizeInMBs | Frontend size of the billing group in MBs | Double |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier | Double |
+| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. | String |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group | String |
+| VaultUniqueId | Foreign key which refers to the vault associated with the billing group | String |
+| VaultName | Name of the vault associated with the billing group | String |
+| VaultTags | Tags of the vault associated with the billing group | String |
+| VaultSubscriptionId | Subscription ID of the vault associated with the billing group | String |
+| VaultLocation | Location of the vault associated with the billing group | String |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group | String |
+| VaultType | Type of the vault, for example, "Microsoft.RecoveryServices/vaults" or "Microsoft.DataProtection/backupVaults" | String |
+| TimeGenerated | Timestamp of the record | DateTime |
+| ExtendedProperties | Additional properties of the billing group | Dynamic |
## Sample Queries
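For example, here's a minimal sketch of invoking `_AzureBackup_GetBillingGroupsTrends()` from the Azure CLI; the workspace GUID and time range are placeholders, and the example assumes the parameters are passed positionally in the order listed above. You can also run the same query directly in the Log Analytics **Logs** pane.

```azurecli
# Placeholder workspace GUID and time range - replace with your own values.
# Assumes the function parameters are passed positionally in the documented order.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "_AzureBackup_GetBillingGroupsTrends('2023-03-01 00:00:00', '2023-03-31 23:59:59', '*', '*', '*', '*', true, '*', '*', 'Monthly') | project FriendlyName, BackupSolution, SourceSizeInMBs, VaultStore_StorageConsumptionInMBs, TimeGenerated"
```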
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Title: Configure Azure Backup reports description: Configure and view reports for Azure Backup by using Log Analytics and Azure workbooks Previously updated : 02/14/2022 Last updated : 04/18/2023
Today, Azure Backup provides a reporting solution that uses [Azure Monitor logs]
## Supported scenarios -- Backup reports are supported for Azure VMs, SQL in Azure VMs, SAP HANA in Azure VMs, Microsoft Azure Recovery Services (MARS) agent, Microsoft Azure Backup Server (MABS), and System Center Data Protection Manager (DPM). For Azure File share backup, data is displayed for records created on or after June 1, 2020.
+- Backup reports are supported for Azure VMs, SQL in Azure VMs, SAP HANA in Azure VMs, Microsoft Azure Recovery Services (MARS) agent, Microsoft Azure Backup Server (MABS), System Center Data Protection Manager (DPM), Azure Database for PostgreSQL Server, Azure Blobs and Azure Disks. For Azure File share backup, data is displayed for records created on or after June 1, 2020.
- For Azure File share backup, data on protected instances is displayed for records created after Feb 1st, 2021 (defaults to zero for older records). - For DPM workloads, Backup reports are supported for DPM Version 5.1.363.0 and above and Agent Version 2.0.9127.0 and above. - For MABS workloads, Backup reports are supported for MABS Version 13.0.415.0 and above and Agent Version 2.0.9170.0 and above. - Backup reports can be viewed across all backup items, vaults, subscriptions, and regions as long as their data is being sent to a Log Analytics workspace that the user has access to. To view reports for a set of vaults, you only need to have reader access to the Log Analytics workspace to which the vaults are sending their data. You don't need to have access to the individual vaults. - If you're an [Azure Lighthouse](../lighthouse/index.yml) user with delegated access to your customers' subscriptions, you can use these reports with Azure Lighthouse to view reports across all your tenants. - Currently, data can be viewed in Backup Reports across a maximum of 100 Log Analytics Workspaces (across tenants).
+ >[!NOTE]
+ >Depending on the complexity of the queries and the volume of data processed, you might in some cases see errors even when fewer than 100 workspaces are selected. We recommend that you limit the number of workspaces queried at a time.
- Data for log backup jobs currently isn't displayed in the reports. [!INCLUDE [backup-center.md](../../includes/backup-center.md)]
By default, the data in a Log Analytics workspace is retained for 30 days. To se
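If you want backup trends to stay queryable over a longer horizon, one option is to increase the workspace retention. Here's a minimal Azure CLI sketch; the resource group and workspace names are placeholders, and `--retention-time` is expressed in days.

```azurecli
# Extend Log Analytics data retention so longer-range backup reports keep working.
# Resource group and workspace names below are placeholders.
az monitor log-analytics workspace update \
  --resource-group "<rg>" \
  --workspace-name "<workspace-name>" \
  --retention-time 365
```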
### 2. Configure diagnostics settings for your vaults
-Azure Resource Manager resources, such as Recovery Services vaults, record information about scheduled operations and user-triggered operations as diagnostics data.
+Azure Resource Manager resources, such as Recovery Services vaults, record information about scheduled operations and user-triggered operations as diagnostics data. To configure diagnostics settings for your vaults, follow these steps:
+
+**Choose a vault type**:
+
+# [Recovery Services vaults](#tab/recovery-services-vaults)
In the monitoring section of your Recovery Services vault, select **Diagnostics settings** and specify the target for the Recovery Services vault's diagnostic data. To learn more about using diagnostic events, see [Use diagnostics settings for Recovery Services vaults](./backup-azure-diagnostic-events.md).
-![Diagnostics settings pane](./media/backup-azure-configure-backup-reports/resource-specific-blade.png)
-Azure Backup also provides a built-in Azure Policy definition, which automates the configuration of diagnostics settings for all vaults in a given scope. To learn how to use this policy, see [Configure vault diagnostics settings at scale](./azure-policy-configure-diagnostics.md).
+
+Azure Backup also provides a built-in Azure Policy definition, which automates the configuration of diagnostics settings for all Recovery Services vaults in a given scope. To learn how to use this policy, see [Configure vault diagnostics settings at scale](./azure-policy-configure-diagnostics.md).
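If you prefer to script this step rather than use the portal, the following Azure CLI sketch shows one possible way to send resource-specific backup diagnostics to a Log Analytics workspace. The resource IDs are placeholders, and the category names are an assumption based on the vault's current diagnostic events, so verify them against your vault's **Diagnostics settings** pane.

```azurecli
# Route Recovery Services vault diagnostics to a Log Analytics workspace using
# resource-specific (dedicated) tables. All IDs and names below are placeholders.
az monitor diagnostic-settings create \
  --name "BackupReports" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.RecoveryServices/vaults/<vault-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --export-to-resource-specific true \
  --logs '[{"category":"CoreAzureBackup","enabled":true},{"category":"AddonAzureBackupJobs","enabled":true},{"category":"AddonAzureBackupPolicy","enabled":true},{"category":"AddonAzureBackupProtectedInstance","enabled":true},{"category":"AddonAzureBackupStorage","enabled":true}]'
```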
+
+# [Backup vaults](#tab/backup-vaults)
+
+In the monitoring section of your Backup vault, select **Diagnostics settings** and specify the target for the Backup vault's diagnostic data.
++++ > [!NOTE] > After you configure diagnostics, it might take up to 24 hours for the initial data push to complete. After data starts flowing into the Log Analytics workspace, you might not see data in the reports immediately because data for the current partial day isn't shown in the reports. For more information, see [Conventions used in Backup reports](#conventions-used-in-backup-reports). We recommend that you start viewing the reports two days after you configure your vaults to send data to Log Analytics. #### 3. View reports in the Azure portal
-After you've configured your vaults to send data to Log Analytics, view your Backup reports by going to any vault's pane and selecting **Backup Reports**.
+After you've configured your vaults to send data to Log Analytics, view your Backup reports by going to the Backup center and selecting **Backup Reports**. Select the relevant workspace(s) on the **Get started** tab.
-![Vault dashboard](./media/backup-azure-configure-backup-reports/vault-dashboard.png)
-Select this link to open up the Backup report workbook.
-
-> [!NOTE]
->
-> - Currently, the initial load of the report might take up to 1 minute.
-> - The Recovery Services vault is merely an entry point for Backup reports. After the Backup report workbook opens up from a vault's pane, select the appropriate set of Log Analytics workspaces to see data aggregated across all your vaults.
The report contains various tabs:
The report contains various tabs:
Use this tab to get a high-level overview of your backup estate. You can get a quick glance of the total number of backup items, total cloud storage consumed, the number of protected instances, and the job success rate per workload type. For more detailed information about a specific backup artifact type, go to the respective tabs.
- ![Summary tab](./media/backup-azure-configure-backup-reports/summary.png)
+
##### Backup Items
Use this tab to view key billing parameters for your backups. The information sh
![Usage tab](./media/backup-azure-configure-backup-reports/usage.png) > [!NOTE]
-> For DPM workloads, users might see a slight difference (of the order of 20 MB per DPM server) between the usage values shown in the reports as compared to the aggregate usage value as shown in the Recovery Services vault **Overview** tab. This difference is accounted for by the fact that every DPM server being registered for backup has an associated 'metadata' datasource which isn't surfaced as an artifact for reporting.
+>- For Azure File, Azure Blob, and Azure Disk workloads, storage consumed shows as *zero*. This is because the field refers to the storage consumed in the vault, and for Azure File, Azure Blob, and Azure Disk, only the snapshot-based backup solution is currently supported in the reports.
+>- For DPM workloads, users might see a slight difference (of the order of 20 MB per DPM server) between the usage values shown in the reports as compared to the aggregate usage value as shown on the Recovery Services vault **Overview** tab. This difference is accounted for by the fact that every DPM server being registered for backup has an associated 'metadata' datasource, which isn't surfaced as an artifact for reporting.
##### Jobs
Use this tab to view long-running trends on jobs, such as the number of failed j
![Jobs tab](./media/backup-azure-configure-backup-reports/jobs.png)
+> [!NOTE]
+> For Azure Database for PostgreSQL, Azure Blob, and Azure Disk workloads, the data transferred field is currently not available in the *Jobs* table.
+ ##### Policies Use this tab to view information on all of your active policies, such as the number of associated items and the total cloud storage consumed by items backed up under a given policy. Select a particular policy to view information on each of its associated Backup items.
Using this view, you can identify those backup items that haven't had a successf
To view inactive resources, navigate to the **Optimize** tab, and select the **Inactive Resources** tile. Selecting this tile displays a grid that contains details of all the inactive resources that exist in the selected scope. By default, the grid shows items that don't have a recovery point in the last seven days. To find inactive resources for a different time range, you can adjust the **Time Range** filter at the top of the tab.
-Once you've identified an inactive resource, you can investigate the issue further by navigating to the backup item dashboard or the Azure resource pane for that resource (wherever applicable). Depending on your scenario, you can choose to either stop backup for the machine (if it doesn't exist anymore) and delete unnecessary backups, which saves costs, or you can fix issues in the machine to ensure that backups are taken reliably.
+Once you've identified an inactive resource, you can investigate the issue further by navigating to the backup item dashboard or the Azure resource pane for that resource (wherever applicable). Depending on your scenario, you can choose to either stop backup for the machine (if it doesn't exist anymore) and delete unnecessary backups to save costs, or you can fix issues in the machine to ensure that backups are taken reliably.
![Optimize tab - Inactive Resources](./media/backup-azure-configure-backup-reports/optimize-inactive-resources.png)
+> [!NOTE]
+> For Azure Database for PostgreSQL, Azure Blob, and Azure Disk workloads, the Inactive Resources view is currently not supported.
+ ###### Backup Items with a large retention duration Using this view, you can identify those items that have backups retained for a longer duration than required by your organization.
For database workloads like SQL and SAP HANA, the retention periods shown in the
![Optimize tab - Retention Optimizations](./media/backup-azure-configure-backup-reports/optimize-retention.png)
+> [!NOTE]
+> For backup instances that are using the vault-standard tier, the Retention Optimizations grid takes into consideration the retention duration in the vault-standard tier. For backup instances that aren't using the vault tier (for example, items protected by the Azure Disk Backup solution), the grid takes into consideration the snapshot tier retention.
+ ###### Databases configured for daily full backup Using this view, you can identify database workloads that have been configured for daily full backup. Often, using daily differential backup along with weekly full backup is more cost-effective.
backup Offline Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-overview.md
Azure Backup supports offline backup, which transfers initial backup data offlin
Offline backup is offered in two modes based on the ownership of the storage devices: -- Offline backup based on Azure Data Box (preview)
+- Offline backup based on Azure Data Box
- Offline backup based on the Azure Import/Export service
-## Offline backup based on Azure Data Box (preview)
+## Offline backup based on Azure Data Box
-This mode is currently supported with the Microsoft Azure Recovery Services (MARS) Agent, in preview. This option takes advantage of [Azure Data Box](https://azure.microsoft.com/services/databox/) to ship Microsoft-proprietary, secure, and tamper-resistant transfer appliances with USB connectors to your datacenter or remote office. Backup data is directly written onto these devices. This option saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal.
+This option is supported by Microsoft Azure Backup Server (MABS), System Center Data Protection Manager (DPM-A), and the MARS Agent. This option takes advantage of [Azure Data Box](https://azure.microsoft.com/services/databox/) to ship Microsoft-proprietary, secure, and tamper-resistant transfer appliances with USB connectors to your datacenter or remote office. Backup data is directly written onto these devices. This option saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal.
An architecture that describes the movement of backup data with this option is shown here.
The following table compares the two available options so that you can make the
| **Consideration** | **Offline backup based on Azure Data Box** | **Offline backup based on the Azure Import/Export service** | | | | |
-| Azure Backup deployment models | MARS Agent (preview) | MARS Agent, MABS, DPM-A |
+| Azure Backup deployment models | MARS Agent, MABS, DPM-A | MARS Agent, MABS, DPM-A |
| Maximum backup data per server (MARS) or per protection group (MABS, DPM-A) | [Azure Data Box disk](../databox/data-box-disk-overview.md) - 7.2 TB <br> [Azure Data Box](../databox/data-box-overview.md) - 80 TB | 80 TB (up to 10 disks of 8 TB each) | | Security (data, device, and service) | [Data](../databox/data-box-security.md#data-box-data-protection) - AES 256-bit encrypted <br> [Device](../databox/data-box-security.md#data-box-device-protection) - Rugged case, proprietary, credential-based interface to copy data <br> [Service](../databox/data-box-security.md#data-box-service-protection) - Protected by Azure security features | Data - BitLocker encrypted | | Temporary staging location provisioning | Not required | More than or equal to the estimated backup data size |
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/manage-resources.md
https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.
# [PowerShell](#tab/powershell) ```powershell
-Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30`
+Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30
``` # [Azure CLI](#tab/azure-cli)
az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.Cog
* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md) * [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md) * [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md)
-* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
+* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Send and receive Loop components | ❌ | | | Send and receive Emojis | ❌ | | | Send and receive Stickers | ❌ |
-| | Send and receive Stickers | ❌ |
| | Send and receive Teams messaging extensions | ❌ | | | Use typing indicators | ✔️ | | | Read receipt | ❌ |
In this article, you will learn which capabilities are supported for Teams exter
| | React to chat message | ❌ | | | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️*| | | [Customer Managed Keys (CMK)](/microsoft-365/compliance/customer-key-overview) | ✔️ |
+| Chat with Teams Interoperability | Send and receive text messages | ✔️ |
+| | Send and receive rich text messages | ✔️ |
+| | Send and receive typing indicators | ✔️ |
+| | [Receive inline images](../../../tutorials/chat-interop/meeting-interop-features-inline-image.md) | ✔️** |
+| | Receive read receipts | ❌ |
+| | Receive shared files | ❌ |
| Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | | | Switch between cameras | ✔️ |
When Teams external users leave the meeting, or the meeting ends, they can no lo
*Azure Communication Services provides developers tools to integrate Microsoft Teams Data Loss Prevention that is compatible with Microsoft Teams. For more information, go to [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md)
+**Inline image support is currently in public preview and is available in the Chat SDK for JavaScript only. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review [Supplemental Terms of Use for Microsoft Azure Previews.](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+
+**If the Teams external user sends a message with images uploaded via the "Upload from this device" menu or via drag-and-drop (such as dragging images directly to the send box) in Teams, these scenarios are covered under the file sharing capability, which is currently not supported.
+ ## Server capabilities The following table shows supported server-side capabilities available in Azure Communication
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
This article provides information about limitations and known issues related to
The following sections provide information about known issues associated with the Communication Services JavaScript voice and video calling SDKs. ### Firefox Known Issues
-Firefox desktop browser support is now available in public preview. Known issues currently known when using Firefox are:
+Firefox desktop browser support is now available in public preview. Known issues are:
- Enumerating speakers is not available: If you're using Firefox, your app won't be able to enumerate or select speakers through the Communication Services device manager. In this scenario, you must select devices via the operating system. - Virtual cameras are not currently supported when making Firefox desktop audio\video calls.
+### iOS Chrome Known Issues
+iOS Chrome browser support is now available in public preview. Known issues are:
+- No outgoing or incoming audio when the browser is switched to the background or the device is locked
+- No incoming or outgoing audio from a Bluetooth headset. When a user connects a Bluetooth headset in the middle of an ACS call, the audio still comes out of the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16.
### iOS 16 introduced bugs when putting browser in the background during a call The iOS 16 release has introduced a bug that can stop the ACS audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an ACS call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone.
Chrome version 98 introduced a regression with abnormal generation of video keyf
### No incoming audio during a call Occasionally, a user in an ACS call may not be able to hear the audio from remote participants.
-There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug which causes this issue, the issue can be mitigated by reconnecting the PeerConnection. We've added this workaround since SDK 1.9.1 (stable) and SDK 1.10.0 (beta)
+There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue. The issue can be mitigated by reconnecting the PeerConnection; this workaround has been included since SDK 1.9.1 (stable) and SDK 1.10.0 (beta).
-On Android Chrome, if a user joins ACS call several times, the incoming audio can also disappear. The user will not be able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage.
+On Android Chrome, if a user joins an ACS call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1 and improved the audio resource usage.
### Some Android devices failing call scenarios except for group calls.
A number of specific Android devices fail to start, accept calls, and meetings.
### Android Chrome mutes the call after browser goes to background for one minute
-On Android Chrome, if a user is on an ACS call and puts the browser into background for one minute. The microphone will lose access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to foreground, microphone will be available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
+On Android Chrome, if a user is on an ACS call and puts the browser into the background for one minute, the microphone loses access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to the foreground, the microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
### The user has dropped the call but is still on the participant list.
This problem can occur if another application or the operating system takes over
- A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera. - A user enables Siri, which will capture access to the microphone.
-On iOS for example, while on an ACS call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the ACS call and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the ACS call for audio to start flowing again in the ACS call. In the case of Android Chrome when a PSTN call comes in, audio will stop flowing in the ACS call and the ACS call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the ACS call.
+On iOS, for example, if a PSTN call comes in while the user is on an ACS call, a microphoneMutedUnexpectedly bad UFD is raised, audio stops flowing in the ACS call, and the call is marked as muted. Once the PSTN call is over, the user has to unmute the ACS call for audio to start flowing again. In the case of Android Chrome, when a PSTN call comes in, audio stops flowing in the ACS call but the ACS call is not marked as muted, and there is no microphoneMutedUnexpectedly UFD event. Once the PSTN call is finished, Android Chrome regains audio automatically and audio starts flowing normally again in the ACS call.
If the camera is on when an interruption occurs, the ACS call may or may not lose the camera. If it's lost, the camera is marked as off, and the user has to turn it back on after the interruption has released the camera.
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers, which are currentl
| Platform | Chrome | Safari | Edge | Firefox | Webview | | | | | | - | - | | Android | ✔️ | ❌ | ❌ | ❌ | ✔️ * |
-| iOS | ❌ | ✔️ | ❌ | ❌ | ✔️ |
+| iOS | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
| macOS | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | | Windows | ✔️ | ❌ | ✔️ | ✔️ | ❌ | | Ubuntu/Linux | ✔️ | ❌ | ❌ | ❌ | ❌ |
communication-services Meeting Interop Features Inline Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-inline-image.md
+
+ Title: Enable Inline Image Support in your Chat app
+
+description: In this tutorial, you'll learn how to enable inline image interoperability with the Azure Communication Chat SDK
++ Last updated : 03/27/2023++++++
+# Tutorial: Enable inline interoperability features in your Chat app
+
+## Add inline image support
+The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, the Chat SDK provides a solution to receive inline images sent by users from Microsoft Teams. Currently, this feature is available only in the Chat SDK for JavaScript.
+++
+## Next steps
+
+For more information, see the following articles:
+
+- Learn more about other [supported interoperability features](../../concepts/interop/guest/capabilities.md)
+- Check out our [chat hero sample](../../samples/chat-hero-sample.md)
+- Learn more about [how chat works](../../concepts/chat/concepts.md)
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
# Virtual appointments
-This tutorial describes concepts for virtual appointment applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual appointments application delivers, the Microsoft technologies that can help you build those uses cases, and have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
+This tutorial describes concepts for virtual appointment applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you'll understand common use cases that a virtual appointments application delivers, the Microsoft technologies that can help you build those use cases, and have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
Virtual appointments are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual appointments. Many industries operate virtual appointments: meetings with a healthcare provider, a loan officer, or a product support technician.
These three **implementation options** are columns in the table below, while eac
| *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom | | *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat | | *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms |
-| *Consumer*| Be reminded of a appointment | Bookings | Bookings | ACS SMS |
+| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS |
| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat | There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience:
The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T
![High-level architecture of a hybrid virtual appointments solution](./media/virtual-visits/virtual-visit-arch.svg) 1. Consumer schedules the appointment using Microsoft 365 Bookings.
-2. Consumer gets a appointment reminder through SMS and Email.
+2. Consumer gets an appointment reminder through SMS and Email.
3. Provider joins the appointment using Microsoft Teams. 4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting.
-5. The users communicate with each other using voice, video, and text chat in a meeting.
+5. The users communicate with each other using voice, video, and text chat in a meeting. Specifically, Teams chat interoperability enables Teams users to send inline images directly to ACS users seamlessly.
## Building a virtual appointment sample
-In this section weΓÇÖre going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application will be a desktop and mobile friendly browser experience, with code that you can use to explore and productionize.
+In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile friendly browser experience, with code that you can use to explore and adapt for production.
### Step 1 - Configure bookings
This sample takes advantage of the Microsoft 365 Bookings app to power the
![Screenshot of Booking configuration experience.](./media/virtual-visits/bookings-url.png)
-Make sure online meeting is enable for the calendar by going to https://outlook.office.com/bookings/services.
+Make sure online meeting is enabled in the calendar by going to https://outlook.office.com/bookings/services.
![Screenshot of Booking services configuration experience.](./media/virtual-visits/bookings-services.png)
The deployment launches an Azure Resource Manager (ARM) template that deploys th
![Screenshot of Sample builder arm template.](./media/virtual-visits/sample-builder-arm.png)
-After walking through the ARM template you can **Go to resource group**.
+After walking through the ARM template, you can **Go to resource group**.
![Screenshot of a completed Azure Resource Manager Template.](./media/virtual-visits/azure-complete-deployment.png)
Enter the application url followed by "/visit" in the "Deployed App URL" field i
The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual appointment: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production. ### Launching patterns
-Consumers want to jump directly to the virtual appointment from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that will be used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>.`
+Consumers want to jump directly to the virtual appointment from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that is used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings points users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>`.
### Integrate into your existing app The app service generated by the Sample Builder is a stand-alone artifact, designed for desktop and mobile browsers. However you may have a website or mobile application already and need to migrate these experiences to that existing codebase. The code generated by the Sample Builder should help, but you can also use:
The app service generated by the Sample Builder is a stand-alone artifact, desig
- **Core SDKs –** The underlying [Call](../quickstarts/voice-video-calling/get-started-teams-interop.md) and [Chat](../quickstarts/chat/meeting-interop.md) services can be accessed and you can build any kind of user experience. ### Identity & security
-The Sample BuilderΓÇÖs consumer experience does not authenticate the end user, but provides [Azure Communication Services user access tokens](../quickstarts/identity/access-tokens.md) to any random visitor. That isnΓÇÖt realistic for most scenarios, and you will want to implement an authentication scheme.
+The Sample Builder's consumer experience doesn't authenticate the end user, but provides [Azure Communication Services user access tokens](../quickstarts/identity/access-tokens.md) to any random visitor. That isn't realistic for most scenarios, and you'll want to implement an authentication scheme.
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
ms.suite: integration Previously updated : 09/07/2022 Last updated : 04/17/2023 tags: connectors
tags: connectors
> with the legacy Dataverse connector. However, make sure to review these workflows, and update them promptly. > > Starting October 2023, the legacy version becomes unavailable for new workflows. Existing workflows continue
-> to work, but you *must* use the current Dataverse connector for new workflows. Starting October 31, 2023,
-> *all* workflows must use the current Dataverse connector. Any existing workflows that still use the legacy
-> version will stop working.
+> to work, but you *must* use the current Dataverse connector for new workflows. At that time, a timeline for the shutdown date for the legacy actions and triggers will be announced.
> > Since November 2020, the Common Data Service connector was renamed Microsoft Dataverse (Legacy).
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Platform metrics and the Activity logs are collected automatically, whereas you
| | | | | | **DataPlaneRequests** | All APIs | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` | | **MongoRequests** | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **Ca/ssandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
| **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` | | **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` | | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 03/30/2023 Last updated : 04/17/2023
Tag inheritance is available for the following billing account types:
- Microsoft Customer Agreement (MCA) - Microsoft Partner Agreement (MPA) with Azure plan subscriptions
-Here's an example diagram showing how a tag is inherited.
+Here's an example diagram showing how a tag is inherited. *Note that inherited tags are applied to child resource usage records and not the resources themselves.*
:::image type="content" source="./media/enable-tag-inheritance/tag-example-01.svg" alt-text="Example diagram showing how a tag is inherited." border="false" lightbox="./media/enable-tag-inheritance/tag-example-01.svg":::
cost-management-billing Save Compute Costs Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
Title: What are Azure Reservations? description: Learn about Azure Reservations and pricing to save on your reserved instances for virtual machines, SQL databases, Azure Cosmos DB, and other resource costs. -+ Previously updated : 12/06/2022 Last updated : 04/14/2023
Software plans:
- **SUSE Linux** - A reservation covers the software plan costs. The discounts apply only to SUSE meters and not to the virtual machine usage. - **Red Hat Plans** - A reservation covers the software plan costs. The discounts apply only to RedHat meters and not to the virtual machine usage.-- **Azure VMware Solution by CloudSimple** - A reservation covers the VMware CloudSimple Nodes. Additional software costs still apply. - **Azure Red Hat OpenShift** - A reservation applies to the OpenShift costs, not to Azure infrastructure costs. For Windows virtual machines and SQL Database, the reservation discount doesn't apply to the software costs. You can cover the licensing costs with [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
+
+ Title: Release notes for Microsoft Azure Data Manager for Agriculture Preview
+description: This article provides release notes for Azure Data Manager for Agriculture Preview releases, improvements, bug fixes, and known issues.
++++ Last updated : 04/14/2023+++
+# Release Notes for Azure Data Manager for Agriculture Preview
+
+Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+- Deprecated functionality
+- Plans for changes
+
+ We'll provide information on the latest releases, bug fixes, and deprecated functionality for Azure Data Manager for Agriculture Preview monthly.
+
+> [!NOTE]
+> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
+>
+> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
+
+## March 2023
+
+### Key Announcement: Preview Release
+Azure Data Manager for Agriculture is now available in preview. See our blog post [here](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
+
+### Audit logs
+In Azure Data Manager for Agriculture Preview, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure for data-plane requests. [Audit Logs](how-to-set-up-audit-logs.md) are now available for your use.
+
+### Private links
+You can connect to Azure Data Manager for Agriculture service from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. [Private Links](how-to-set-up-private-links.md) are now available for your use.
+
+### BYOL for satellite imagery
+To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
+
+## Next steps
+* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
+* Understand our APIs [here](/rest/api/data-manager-for-agri).
databox-online Azure Stack Edge Gpu Modify Fpga Modules Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu.md
To set memory and CPU usage, use processor limits for modules in the `k8s-experi
``` The memory and CPU specifications aren't required, but including them is generally good practice. If `requests` isn't specified, the values set in limits are used as the minimum required.
-Using shared memory for modules also requires a different way. For example, you can use the Host IPC mode for shared memory access between Live Video Analytics and Inference solutions as described in [Deploy Live Video Analytics on Azure Stack Edge](../azure-video-analyzer/video-analyzer-docs/overview.md).
-
+Using shared memory for modules also requires a different approach. For example, you can use the Host IPC mode for shared memory access between Live Video Analytics and Inference solutions as described in [Deploy Live Video Analytics on Azure Stack Edge](/previous-versions/azure/azure-video-analyzer/video-analyzer-docs/articles/azure-video-analyzer/video-analyzer-docs/overview).
## Web proxy
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
Previously updated : 04/13/2023 Last updated : 04/17/2023
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 03/26/2023 Last updated : 03/29/2023 # Security alerts - a reference guide
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | - | Low | | **Detected suspicious new firewall rule** | Analysis of host data detected a new firewall rule has been added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium | | **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways like brute force, spear phishing etc. to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a Microsoft Windows native command-line utility often used for modifying the security permission on folders and files. A lot of the time the binary is used by the attackers to lower the security settings of a system. This is done by giving Everyone full access to some of the system binaries like ftp.exe, net.exe, wscript.exe etc. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium |
-| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium |
-| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is component of the Microsoft Windows "Program Compatibility Assistant" which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium |
+| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file, which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium |
+| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is a component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse the functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium |
| **Detected the disabling of critical services** | The analysis of host data on %{Compromised Host} detected execution of the "net.exe stop" command being used to stop critical services like SharedAccess or the Windows Security app. The stopping of either of these services can be an indication of malicious behavior. | - | Medium | | **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High | | **Dynamic PS script construction** | Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade intrusion detection systems. This could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Malicious firewall rule created by ZINC server implant [seen multiple times]** | A firewall rule was created using techniques that match a known actor, ZINC. The rule was possibly used to open a port on %{Compromised Host} to allow for Command & Control communications. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | | **Malicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious. | - | High | | **Multiple Domain Accounts Queried** | Analysis of host data has determined that an unusual number of distinct domain accounts are being queried within a short time period from %{Compromised Host}. This kind of activity could be legitimate, but can also be an indication of compromise. | - | Medium |
-| **Possible credential dumping detected [seen multiple times]** | Analysis of host data has detected use of native windows tool (e.g. sqldumper.exe) being used in a way that allows to extract credentials from memory. Attackers often use these techniques to extract credentials that they then further use for lateral movement and privilege escalation. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
+| **Possible credential dumping detected [seen multiple times]** | Analysis of host data has detected use of a native Windows tool (for example, sqldumper.exe) in a way that allows credentials to be extracted from memory. Attackers often use these techniques to extract credentials that they then use for lateral movement and privilege escalation. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
| **Potential attempt to bypass AppLocker detected** | Analysis of host data on %{Compromised Host} detected a potential attempt to bypass AppLocker restrictions. AppLocker can be configured to implement a policy that limits what executables are allowed to run on a Windows system. The command-line pattern similar to that identified in this alert has been previously associated with attacker attempts to circumvent AppLocker policy by using trusted executables (allowed by AppLocker policy) to execute untrusted code. This could be legitimate activity, or an indication of a compromised host. | - | High | | **PsExec execution detected**<br>(VM_RunByPsExec) | Analysis of host data indicates that the process %{Process Name} was executed by PsExec utility. PsExec can be used for running processes remotely. This technique might be used for malicious purposes. | Lateral Movement, Execution | Informational | | **Ransomware indicators detected [seen multiple times]** | Analysis of host data indicates suspicious activity traditionally associated with lock-screen and encryption ransomware. Lock screen ransomware displays a full-screen message preventing interactive use of the host and access to its files. Encryption ransomware prevents access by encrypting data files. In both cases a ransom message is typically displayed, requesting payment in order to restore file access. This behavior was seen [x] times today on the following machines: [Machine names] | - | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Rare SVCHOST service group executed**<br>(VM_SvcHostRunInRareServiceGroup) | The system process SVCHOST was observed running a rare service group. Malware often uses SVCHOST to masquerade its malicious activity. | Defense Evasion, Execution | Informational | | **Sticky keys attack detected** | Analysis of host data indicates that an attacker may be subverting an accessibility binary (for example sticky keys, onscreen keyboard, narrator) in order to provide backdoor access to the host %{Compromised Host}. | - | Medium | | **Successful brute force attack**<br>(VM_LoginBruteForceSuccess) | Several sign in attempts were detected from the same source. Some successfully authenticated to the host.<br>This resembles a burst attack, in which an attacker performs numerous authentication attempts to find valid account credentials. | Exploitation | Medium/High |
-| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected the tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it is a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
+| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium | | **Suspected Kerberos Golden Ticket attack parameters observed** | Analysis of host data detected commandline parameters consistent with a Kerberos Golden Ticket attack. | - | Medium | | **Suspicious Account Creation Detected** | Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator. | - | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Suspicious SVCHOST process executed** | The system process SVCHOST was observed running in an abnormal context. Malware often uses SVCHOST to masquerade its malicious activity. | - | High | | **Suspicious system process executed**<br>(VM_SystemProcessInAbnormalContext) | The system process %{process name} was observed running in an abnormal context. Malware often uses this process name to masquerade its malicious activity. | Defense Evasion, Execution | High | | **Suspicious Volume Shadow Copy Activity** | Analysis of host data has detected a shadow copy deletion activity on the resource. Volume Shadow Copy (VSC) is an important artifact that stores data snapshots. Some malware and specifically Ransomware, targets VSC to sabotage backup strategies. | - | High |
-| **Suspicious WindowPosition registry value detected** | Analysis of host data on %{Compromised Host} detected an attempted WindowPosition registry configuration change that could be indicative of hiding application windows in non-visible sections of the desktop. This could be legitimate activity, or an indication of a compromised machine: this type of activity has been previously associated with known adware (or unwanted software) such as Win32/OneSystemCare and Win32/SystemHealer and malware such as Win32/Creprote. When the WindowPosition value is set to 201329664, (Hex: 0x0c00 0c00, corresponding to X-axis=0c00 and the Y-axis=0c00) this places the console app's window in a non-visible section of the user's screen in an area that is hidden from view below the visible start menu/taskbar. Known suspect Hex value includes, but not limited to c000c000 | - | Low |
+| **Suspicious WindowPosition registry value detected** | Analysis of host data on %{Compromised Host} detected an attempted WindowPosition registry configuration change that could be indicative of hiding application windows in nonvisible sections of the desktop. This could be legitimate activity, or an indication of a compromised machine: this type of activity has been previously associated with known adware (or unwanted software) such as Win32/OneSystemCare and Win32/SystemHealer and malware such as Win32/Creprote. When the WindowPosition value is set to 201329664 (Hex: 0x0c00 0c00, corresponding to X-axis=0c00 and Y-axis=0c00), this places the console app's window in a nonvisible section of the user's screen in an area that is hidden from view below the visible start menu/taskbar. Known suspect hex values include, but aren't limited to, c000c000. | - | Low |
| **Suspiciously named process detected** | Analysis of host data on %{Compromised Host} detected a process whose name is very similar to but different from a very commonly run process (%{Similar To Process Name}). While this process could be benign, attackers are known to sometimes hide in plain sight by naming their malicious tools to resemble legitimate process names. | - | Medium | | **Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | | **Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
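As a quick sanity check on the WindowPosition value cited above, the following sketch splits the decimal registry value 201329664 into its two 16-bit halves; both come out as 0x0C00, matching the X/Y coordinates the alert describes (which half maps to which axis follows the alert's own wording).

```python
# The alert above cites the decimal WindowPosition value 201329664 (0x0C000C00).
value = 201329664

low_word = value & 0xFFFF            # lower 16 bits
high_word = (value >> 16) & 0xFFFF   # upper 16 bits

assert value == 0x0C000C00
print(hex(high_word), hex(low_word))  # 0xc00 0xc00 -> window placed off-screen
```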
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High | | **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High |
-| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
+| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder doesn't usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
| **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining.<br>(Applies to: App Service on Linux) | Defense Evasion, Command and Control, Exploitation | Medium | | **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium | | **Potential dangling DNS record for an App Service resource detected**<br>(AppServices_PotentialDanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This might leave you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity. In this case, a text record with the Domain Verification ID was found. Such text records prevent subdomain takeover but we still recommend removing the dangling domain. If you leave the DNS record pointing at the subdomain you're at risk if anyone in your organization deletes the TXT file or record in the future.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service.<br>(Applies to: App Service on Windows) | Execution | Medium | | **Saving curl output to disk detected**<br>(AppServices_CurlToDisk) | Analysis of App Service processes detected the running of a curl command in which the output was saved to the disk. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.<br>(Applies to: App Service on Windows) | - | Low | | **Spam folder referrer detected**<br>(AppServices_SpamReferrer) | Azure App Service activity log indicates web activity that was identified as originating from a web site associated with spam activity. This can occur if your website is compromised and used for spam activity.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
-| **Suspicious access to possibly vulnerable web page detected**<br>(AppServices_ScanSensitivePage) | Azure App Service activity log indicates a web page that seems to be sensitive was accessed. This suspicious activity originated from a source IP address whose access pattern resembles that of a web scanner.<br>This activity is often associated with an attempt by an attacker to scan your network to try and gain access to sensitive or vulnerable web pages.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
+| **Suspicious access to possibly vulnerable web page detected**<br>(AppServices_ScanSensitivePage) | Azure App Service activity log indicates a web page that seems to be sensitive was accessed. This suspicious activity originated from a source IP address whose access pattern resembles that of a web scanner.<br>This activity is often associated with an attempt by an attacker to scan your network to try to gain access to sensitive or vulnerable web pages.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
| **Suspicious domain name reference**<br>(AppServices_CommandlineSuspectDomain) | Analysis of host data detected a reference to a suspicious domain name. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools.<br>(Applies to: App Service on Linux) | Exfiltration | Low | | **Suspicious download using Certutil detected**<br>(AppServices_DownloadUsingCertutil) | Analysis of host data on {NAME} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse the functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be executed.<br>(Applies to: App Service on Windows) | Execution | Medium | | **Suspicious PHP execution detected**<br>(AppServices_SuspectPhp) | Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run operating system commands or PHP code from the command line, by using the PHP process. While this behavior can be legitimate, in web applications this behavior might indicate malicious activities, such as attempts to infect websites with web shells.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in a Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker is trying to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | | **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | | **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
-| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
+| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation that isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium | | **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium | | **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected secret access request which is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation are in relations to one another. The features monitored by this analytics include the user name used, the name of the secret, the name of the namespace, user agent used in the operation, or other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert extended properties. | CredentialAccess | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low | | **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
-| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It's rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
| **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium | | **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High | | **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low | | **Indicators associated with DDOS toolkit detected**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium | | **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Low |
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | | **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces shouldn't contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low | | **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium | | **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium | | **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High | | **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the scale of the compute resources is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high | | **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium | | **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High | | **Suspicious extraction of Azure Cosmos DB account keys** (AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this source is not a legitimate source, this may be a high impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | high |
-| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
+| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts can't work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
| **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low | ## <a name="alerts-azurenetlayer"></a>Alerts for Azure network layer
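The Azure Cosmos DB SQL injection rows above note that parameterized queries prevent this class of attack. Here's a minimal sketch of that mitigation using the `azure-cosmos` Python SDK; the endpoint, key, database, container, and field names are placeholders, not values from the alert reference.

```python
from azure.cosmos import CosmosClient

# Placeholder connection details.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("orders")

def find_orders(customer_id: str):
    # User input is passed as a named parameter, never concatenated into the
    # query text, so injected SQL fragments are treated as literal values.
    return list(
        container.query_items(
            query="SELECT * FROM c WHERE c.customerId = @customerId",
            parameters=[{"name": "@customerId", "value": customer_id}],
            enable_cross_partition_query=True,
        )
    )
```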
Microsoft Defender for Containers provides security alerts on the cluster level
| **Suspicious secret listing and query in a key vault**<br>(KV_ListGetAnomaly) | A user or service principal has performed an anomalous Secret List operation followed by one or more Secret Get operations. This pattern is not normally performed by the specified user or service principal and is typically associated with secret dumping. This may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault and is trying to discover secrets that can be used to move laterally through your network and/or gain access to sensitive resources. We recommend further investigations. | Credential Access | Medium | | **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_AccountVolumeAccessDeniedAnomaly) | A user or service principal has attempted access to an anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low | | **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that does not normally access it. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. | Initial Access, Discovery | Low |
-| **Unusual application accessed a key vault**<br>(KV_AppAnomaly) | A key vault has been accessed by a service principal that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
+| **Unusual application accessed a key vault**<br>(KV_AppAnomaly) | A key vault has been accessed by a service principal that doesn't normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
| **Unusual operation pattern in a key vault**<br>(KV_OperationPatternAnomaly) | An anomalous pattern of key vault operations was performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **Unusual user accessed a key vault**<br>(KV_UserAnomaly) | A key vault has been accessed by a user that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
-| **Unusual user-application pair accessed a key vault**<br>(KV_UserAppAnomaly) | A key vault has been accessed by a user-service principal pair that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
+| **Unusual user-application pair accessed a key vault**<br>(KV_UserAppAnomaly) | A key vault has been accessed by a user-service principal pair that doesn't normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
| **User accessed high volume of key vaults**<br>(KV_AccountVolumeAnomaly) | A user or service principal has accessed an anomalously high volume of key vaults. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to multiple key vaults in an attempt to access the secrets contained within them. We recommend further investigations. | Credential Access | Medium | | **Denied access from a suspicious IP to a key vault**<br>(KV_SuspiciousIPAccessDenied) | An unsuccessful key vault access has been attempted by an IP that has been identified by Microsoft Threat Intelligence as a suspicious IP address. Though this attempt was unsuccessful, it indicates that your infrastructure might have been compromised. We recommend further investigations. | Credential Access | Low |
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | | SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)| | SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| Managed database with excessive internet exposure allows basic (local user/password) authentication | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
-| Internet exposed VM has high severity vulnerabilities and a hosted database installed | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.
+| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.
| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container (Preview) | An internal Azure storage container replicates its data to another Azure storage container which is reachable from the internet and allows public access, and puts this data at risk. | | Internet exposed Azure Blob Storage container with sensitive data is publicly accessible (Preview) | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).|
-| Internet exposed managed database allows basic (local user/password) authentication (Preview) | A database can be accessed through the internet and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
-| Internet exposed database server allows basic (user/password) authentication method (Preview) | Azure SQL database can be accessed through the internet and allows user/password authentication which exposes the DB to brute force attacks. |
### AWS data
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | |SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | | SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) |SQL on EC2 instance [EC2Name] has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-|Managed database with excessive internet exposure allows basic (local user/password) authentication | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+|Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
|Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution. | Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket (Preview) | An internal AWS S3 bucket replicates its data to another S3 bucket which is reachable from the internet and allows public access, and puts this data at risk. | | RDS snapshot is publicly available to all AWS accounts (Preview) | A snapshot of an RDS instance or cluster is publicly accessible by all AWS accounts. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket (Preview) | Private AWS S3 bucket is replicating data to internet exposed and publicly accessible AWS S3 bucket | | Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket (Preview) | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket| | RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts |
-| Internet exposed database server allows basic (user/password) authentication method (Preview) | AWS RDS database can be accessed through the internet and allows user/password authentication which exposes the DB to brute force attacks. |
### Azure containers
This section lists all of the cloud security graph components (connections and
- [Identify and analyze risks across your environment](concept-attack-path.md) - [Identify and remediate attack paths](how-to-manage-attack-path.md)-- [Cloud security explorer](how-to-manage-cloud-security-explorer.md)
+- [Cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Sensitive data discovery is available in the Defender CSPM and Defender for Stor
- When you enable one of the plans, the sensitive data discovery extension is turned on as part of the plan. - If you have existing plans running, the extension is available, but turned off by default.-- Existing plan status shows as "Partial" rather than "Full" until the feature is turned on manually.
+- Existing plan status shows as "Partial" rather than "Full" if one or more extensions aren't turned on.
- The feature is turned on at the subscription level.
What Azure regions are supported? | You can discover Azure storage accounts in:<
What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/> Discovery is done locally in the region. Do I need to install an agent? | No, discovery is agentless. What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't include other costs except for the respective plan costs.
-What permissions do I need to edit data sensitivity settings? | You need one of these permissions: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.
+What permissions do I need to view/edit data sensitivity settings? | You need one of these permissions: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.
## Configuring data sensitivity settings
Defender for Cloud starts discovering data immediately after enabling a plan, or
- A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less. - A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.
-### Discovering AWS storage
+### Discovering AWS S3 buckets
In order to protect AWS resources in Defender for Cloud, you set up an AWS connector, using a CloudFormation template to onboard the AWS account. - To discover AWS data resources, Defender for Cloud updates the CloudFormation template.-- The CloudFormation template creates a new role in AWS IAM, to allow permission for the Defender for Cloud scanner to access data in the S3 buckets.
+- The CloudFormation template creates a new role in AWS IAM, to allow permission for the Defender for Cloud scanner to access data in the S3 buckets.
- To connect AWS accounts, you need Administrator permissions on the account. - The role allows these permissions: S3 read only; KMS decrypt.
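For orientation, the sketch below shows what an IAM policy document granting the "S3 read only; KMS decrypt" permissions described above could look like. It's illustrative only: the actual role and policy are created by the CloudFormation template that Defender for Cloud provides, and the statement names and resource scoping here are assumptions.

```python
import json

# Illustrative IAM policy document only; the real role comes from the
# Defender for Cloud CloudFormation template, not from this snippet.
scanner_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ReadOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "*",
        },
        {
            "Sid": "KmsDecrypt",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(scanner_policy, indent=2))
```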
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
Follow these steps to enable data-aware security posture. Don't forget to review
### Before you start -- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#discovering-aws-storage) for AWS discovery, and [required permissions](concept-data-security-posture-prepare.md#whats-supported).
+- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#discovering-aws-s3-buckets) for AWS discovery, and [required permissions](concept-data-security-posture-prepare.md#whats-supported).
- Check that there's no policy that blocks the connection to your Amazon S3 buckets. ### Enable for AWS resources
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 02/05/2023 Last updated : 04/17/2023 # Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## October 2022
+
+Updates in October include:
+
+- [Announcing the Microsoft cloud security benchmark](#announcing-the-microsoft-cloud-security-benchmark)
+- [Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)](#attack-path-analysis-and-contextual-security-capabilities-in-defender-for-cloud-preview)
+- [Agentless scanning for Azure and AWS machines (Preview)](#agentless-scanning-for-azure-and-aws-machines-preview)
+- [Defender for DevOps (Preview)](#defender-for-devops-preview)
+- [Regulatory Compliance Dashboard now supports manual control management and detailed information on Microsoft's compliance status](#regulatory-compliance-dashboard-now-supports-manual-control-management-and-detailed-information-on-microsofts-compliance-status)
+- [Auto-provisioning has been renamed to Settings & monitoring and has an updated experience](#auto-provisioning-has-been-renamed-to-settings--monitoring-and-has-an-updated-experience)
+- [Defender Cloud Security Posture Management (CSPM) (Preview)](#defender-cloud-security-posture-management-cspm)
+- [MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations](#mitre-attck-framework-mapping-is-now-available-also-for-aws-and-gcp-security-recommendations)
+- [Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)](#defender-for-containers-now-supports-vulnerability-assessment-for-elastic-container-registry-preview)
+
+### Announcing the Microsoft cloud security benchmark
+
+The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) (MCSB) is a new framework defining fundamental cloud security principles based on common industry standards and compliance frameworks, together with detailed technical guidance for implementing these best practices across cloud platforms. MCSB replaces the Azure Security Benchmark and provides prescriptive details for how to implement its cloud-agnostic security recommendations on multiple cloud service platforms, initially covering Azure and AWS.
+
+You can now monitor your cloud security compliance posture per cloud in a single, integrated dashboard. You can see MCSB as the default compliance standard when you navigate to Defender for Cloud's regulatory compliance dashboard.
+
+Microsoft cloud security benchmark is automatically assigned to your Azure subscriptions and AWS accounts when you onboard Defender for Cloud.
+
+Learn more about the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).
+
+### Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)
+
+The new cloud security graph, attack path analysis, and contextual cloud security capabilities are now available in Defender for Cloud in preview.
+
+One of the biggest challenges that security teams face today is the sheer number of security issues they encounter daily, with never enough resources to address them all.
+
+Defender for Cloud's new cloud security graph and attack path analysis capabilities give security teams the ability to assess the risk behind each security issue and to identify the highest-risk issues that need to be resolved soonest. Defender for Cloud works with security teams to reduce the risk of an impactful breach to their environment in the most effective way.
+
+Learn more about the new [cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
+
+### Agentless scanning for Azure and AWS machines (Preview)
+
+Until now, Defender for Cloud based its posture assessments for VMs on agent-based solutions. To help customers maximize coverage and reduce onboarding and management friction, we're releasing agentless scanning for VMs to preview.
+
+With agentless scanning for VMs, you get wide visibility into installed software and software CVEs. You get the visibility without the challenges of agent installation and maintenance, network connectivity requirements, and performance impact on your workloads. The analysis is powered by Microsoft Defender vulnerability management.
+
+Agentless vulnerability scanning is available in both Defender Cloud Security Posture Management (CSPM) and in [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
+
+- Learn more about [agentless scanning](concept-agentless-data-collection.md).
+- Find out how to enable [agentless vulnerability assessment](enable-vulnerability-assessment-agentless.md).
+
+### Defender for DevOps (Preview)
+
+Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across hybrid and multicloud environments including Azure, AWS, Google, and on-premises resources.
+
+Now, the new Defender for DevOps plan integrates source code management systems, like GitHub and Azure DevOps, into Defender for Cloud. With this new integration, we're empowering security teams to protect their resources from code to cloud.
+
+Defender for DevOps allows you to gain visibility into and manage your connected developer environments and code resources. Currently, you can connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) systems to Defender for Cloud and onboard DevOps repositories to Inventory and the new DevOps Security page. The unified DevOps Security page gives security teams a high-level overview of the security issues discovered across those repositories.
+
+Security teams can now configure pull request annotations to help developers address secret scanning findings in Azure DevOps directly on their pull requests.
+
+You can configure the Microsoft Security DevOps tools on Azure Pipelines and GitHub workflows to enable the following security scans:
+
+| Name | Language | License |
+|--|--|--|
+| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
+| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [ESLint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Common types include default passwords, SQL connection strings, and certificates with private keys. | Not Open Source |
+| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
+| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, CloudFormation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
+| [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
+
+The following new recommendations are now available for DevOps:
+
+| Recommendation | Description | Severity |
+|--|--|--|
+| (Preview) [Code repositories should have code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false) | Defender for DevOps has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories. (No related policy) | High |
+| (Preview) [Code repositories should have Dependabot scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) | (Preview) Code repositories should have infrastructure as code scanning findings resolved | Medium |
+| (Preview) [GitHub repositories should have code scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11/showSecurityCenterCommandBar~/false) | GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. (No related policy) | Medium |
+| (Preview) [GitHub repositories should have secret scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High |
+| (Preview) [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
+
+The Defender for DevOps recommendations replaced the deprecated vulnerability scanner for CI/CD workflows that was included in Defender for Containers.
+
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md)
+
+### Regulatory Compliance dashboard now supports manual control management and detailed information on Microsoft's compliance status
+
+The compliance dashboard in Defender for Cloud is a key tool for customers to help them understand and track their compliance status. Customers can continuously monitor environments in accordance with requirements from many different standards and regulations.
+
+Now, you can fully manage your compliance posture by manually attesting to operational and non-technical controls. You can now provide evidence of compliance for controls that aren't automated. Together with the automated assessments, you can now generate a full report of compliance within a selected scope, addressing the entire set of controls for a given standard.
+
+In addition, with richer control information and in-depth details and evidence for Microsoft's compliance status, you now have all of the information required for audits at your fingertips.
+
+Some of the new benefits include:
+
+- **Manual customer actions** provide a mechanism for manually attesting compliance with non-automated controls, including the ability to link evidence and to set a compliance date and expiration date.
+
+- Richer control details for supported standards that showcase **Microsoft actions** and **manual customer actions** in addition to the already existing automated customer actions.
+
+- **Microsoft actions** provide transparency into Microsoft's compliance status, including audit assessment procedures, test results, and Microsoft responses to deviations.
+
+- **Compliance offerings** provide a central location to check Azure, Dynamics 365, and Power Platform products and their respective regulatory compliance certifications.
+
+Learn more on how to [Improve your regulatory compliance](regulatory-compliance-dashboard.md) with Defender for Cloud.
+
+### Auto-provisioning has been renamed to Settings & monitoring and has an updated experience
+
+We've renamed the Auto-provisioning page to **Settings & monitoring**.
+
+Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we're launching a new experience with the following changes:
+
+**Defender for Cloud's plans page now includes**:
+- When you enable a Defender plan that requires monitoring components, those components are enabled for automatic provisioning with default settings. These settings can optionally be edited at any time.
+- You can access the monitoring component settings for each Defender plan from the Defender plan page.
+- The Defender plans page clearly indicates whether all the monitoring components are in place for each Defender plan, or if your monitoring coverage is incomplete.
+
+**The Settings & monitoring page**:
+- Each monitoring component indicates the Defender plans to which it's related.
+
+Learn more about [managing your monitoring settings](monitoring-components.md).
+
+### Defender Cloud Security Posture Management (CSPM)
+
+One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud Security Posture Management (CSPM). CSPM provides you with hardening guidance that helps you efficiently and effectively improve your security. CSPM also gives you visibility into your current security situation.
+
+We're announcing a new Defender plan: Defender CSPM. This plan enhances the security capabilities of Defender for Cloud and includes the following new and expanded features:
+
+- Continuous assessment of the security configuration of your cloud resources
+- Security recommendations to fix misconfigurations and weaknesses
+- Secure score
+- Governance
+- Regulatory compliance
+- Cloud security graph
+- Attack path analysis
+- Agentless scanning for machines
+
+Learn more about the [Defender CSPM plan](concept-cloud-security-posture-management.md).
+
+### MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations
+
+For security analysts, it's essential to identify the potential risks associated with security recommendations and understand the attack vectors, so that they can efficiently prioritize their tasks.
+
+Defender for Cloud makes prioritization easier by mapping the Azure, AWS and GCP security recommendations against the MITRE ATT&CK framework. The MITRE ATT&CK framework is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations, allowing customers to strengthen the secure configuration of their environments.
+
+The MITRE ATT&CK framework has been integrated in the following ways:
+
+- Recommendations map to MITRE ATT&CK tactics and techniques.
+- Query MITRE ATT&CK tactics and techniques on recommendations using the Azure Resource Graph (see the sketch after this list).
++
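As a rough illustration of the Resource Graph option, here's a minimal sketch using the Azure SDK for Python. The KQL property paths shown for tactics and techniques are assumptions; verify them against the actual assessment schema in your tenant before relying on them.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# Sketch only: list security recommendations (assessments) with their
# MITRE ATT&CK tactics/techniques. The metadata property paths are assumptions.
kql = """
securityresources
| where type == 'microsoft.security/assessments'
| project name,
          displayName = properties.displayName,
          tactics = properties.metadata.tactics,
          techniques = properties.metadata.techniques
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(
    QueryRequest(subscriptions=["<subscription-id>"], query=kql)  # replace with your subscription ID
)
print(result.data)
```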
+### Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)
+
+Microsoft Defender for Containers now provides agentless vulnerability assessment scanning for Elastic Container Registry (ECR) in Amazon AWS. This expands coverage for multicloud environments, building on the release earlier this year of advanced threat protection and Kubernetes environment hardening for AWS and GCP. The agentless model creates AWS resources in your accounts to scan your images without extracting images out of your AWS accounts and with no footprint on your workload.
+
+Agentless vulnerability assessment scanning for images in ECR repositories helps reduce the attack surface of your containerized estate by continuously scanning images to identify and manage container vulnerabilities. With this new release, Defender for Cloud scans container images after they're pushed to the repository and continually reassesses the ECR container images in the registry. The findings are available in Microsoft Defender for Cloud as recommendations, and you can use Defender for Cloud's built-in automated workflows to take action on the findings, such as opening a ticket for fixing a high-severity vulnerability in an image.
+
+Learn more about [vulnerability assessment for Amazon ECR images](defender-for-containers-vulnerability-assessment-elastic.md).
+ ## September 2022 Updates in September include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/13/2023 Last updated : 04/17/2023 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [New preview Unified Disk Encryption recommendation](#unified-disk-encryption-recommendation-preview) - [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely)
+- [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies)
### Unified Disk Encryption recommendation (preview) We have introduced a unified disk encryption recommendation in public preview, `Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost` and `Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost`.
-These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources` which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled` which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and either being enabled on a virtual machine is recommended. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
+These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources`, which detected Azure Disk Encryption, and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled`, which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost is enabled and warn only if neither is enabled. We also warn if ADE is enabled on some, but not all, disks of a VM (this condition isn't applicable to EncryptionAtHost).
-The new recommendations require [guest config](https://aka.ms/gcpol).
+The new recommendations require [Azure Automanage Machine Configuration](https://aka.ms/gcpol).
These recommendations are based on the following policies: -- [(Preview) Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost. - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0)-- [(Preview) Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost. - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f)
+- [(Preview) Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0)
+- [(Preview) Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f)
Learn more about [ADE and EncryptionAtHost and how to enable one of them](../virtual-machines/disk-encryption-overview.md).
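If you want to spot-check a single VM yourself, outside of the recommendation, here's a hedged sketch using the Azure SDK for Python that reads the EncryptionAtHost setting. It doesn't evaluate Azure Disk Encryption, and the resource names are placeholders, so it isn't a reproduction of the recommendation's logic.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Sketch: check whether EncryptionAtHost is enabled on one VM.
# Placeholders below; this does not reproduce the full recommendation logic
# (it doesn't check Azure Disk Encryption on the VM's disks).
subscription_id = "<subscription-id>"
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

vm = compute.virtual_machines.get("<resource-group>", "<vm-name>")
encryption_at_host = bool(vm.security_profile and vm.security_profile.encryption_at_host)
print(f"EncryptionAtHost enabled: {encryption_at_host}")
```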
As part of this update, the recommendation's ID was changed from `181ac480-f7c4-
No action is required on the customer side, and there's no expected impact on the secure score.
+### Deprecation of App Service language monitoring policies
+
+The following App Service language monitoring policies have been deprecated because they can generate false negatives and they don't provide better security. Instead, always make sure you're using a language version without any known vulnerabilities.
+
+| Policy name | Policy ID |
+|--|--|
+| [App Service apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) | 496223c3-ad65-4ecd-878a-bae78737e9ed |
+| [App Service apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) | 7008174a-fd10-4ef0-817e-fc820a951d73 |
+| [Function apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) | 9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc |
+| [Function apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) | 7238174a-fd10-4ef0-817e-fc820a951d73 |
+| [App Service apps that use PHP should use the latest 'PHP version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3)| 7261b898-8a84-4db8-9e04-18527132abb3 |
+
+Customers can use alternative built-in policies to monitor any specified language version for their App Services.
+
+These policies are no longer available in Defender for Cloud's built-in recommendations. You can [add them as custom recommendations](create-custom-recommendations.md) to have Defender for Cloud monitor them.
+ ## March 2023 Updates in March include:
+- [New alert in Defender for Resource Manager](#new-alert-in-defender-for-resource-manager)
- [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection) - [Data-aware security posture (preview)](#data-aware-security-posture-preview) - [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
Updates in March include:
- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers) - [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
+### New alert in Defender for Resource Manager
+
+Defender for Resource Manager has the following new alert:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|||:-:||
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
+
+You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
### A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we are announcing a new Defender for Storage plan. If you're using the previous plan (now renamed to "Defender for Storage (classic)"), you will need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
The recommendation [`Lambda functions should have a dead-letter queue configured
|--|--|--| | Lambda functions should have a dead-letter queue configured | This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function isn't configured with a dead-letter queue. As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. A dead-letter queue acts the same as an on-failure destination. It's used when an event fails all processing attempts or expires without being processed. A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. From a security perspective, it's important to understand why your function failed and to ensure that your function doesn't drop data or compromise data security as a result. For example, if your function can't communicate with an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. | Medium |
-## October 2022
-
-Updates in October include:
--- [Announcing the Microsoft cloud security benchmark](#announcing-the-microsoft-cloud-security-benchmark)-- [Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)](#attack-path-analysis-and-contextual-security-capabilities-in-defender-for-cloud-preview)-- [Agentless scanning for Azure and AWS machines (Preview)](#agentless-scanning-for-azure-and-aws-machines-preview)-- [Defender for DevOps (Preview)](#defender-for-devops-preview)-- [Regulatory Compliance Dashboard now supports manual control management and detailed information on Microsoft's compliance status](#regulatory-compliance-dashboard-now-supports-manual-control-management-and-detailed-information-on-microsofts-compliance-status)-- [Auto-provisioning has been renamed to Settings & monitoring and has an updated experience](#auto-provisioning-has-been-renamed-to-settings--monitoring-and-has-an-updated-experience)-- [Defender Cloud Security Posture Management (CSPM) (Preview)](#defender-cloud-security-posture-management-cspm)-- [MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations](#mitre-attck-framework-mapping-is-now-available-also-for-aws-and-gcp-security-recommendations)-- [Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)](#defender-for-containers-now-supports-vulnerability-assessment-for-elastic-container-registry-preview)-
-### Announcing the Microsoft cloud security benchmark
-
-The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) (MCSB) is a new framework defining fundamental cloud security principles based on common industry standards and compliance frameworks. Together with detailed technical guidance for implementing these best practices across cloud platforms. MCSB is replacing the Azure Security Benchmark. MCSB provides prescriptive details for how to implement its cloud-agnostic security recommendations on multiple cloud service platforms, initially covering Azure and AWS.
-
-You can now monitor your cloud security compliance posture per cloud in a single, integrated dashboard. You can see MCSB as the default compliance standard when you navigate to Defender for Cloud's regulatory compliance dashboard.
-
-Microsoft cloud security benchmark is automatically assigned to your Azure subscriptions and AWS accounts when you onboard Defender for Cloud.
-
-Learn more about the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).
-
-### Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)
-
-The new cloud security graph, attack path analysis and contextual cloud security capabilities are now available in Defender for Cloud in preview.
-
-One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all.
-
-Defender for Cloud's new cloud security graph and attack path analysis capabilities gives security teams the ability to assess the risk behind each security issue. Security teams can also identify the highest risk issues that need to be resolved soonest. Defender for Cloud works with security teams to reduce the risk of an affectful breach to their environment in the most effective way.
-
-Learn more about the new [cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
-
-### Agentless scanning for Azure and AWS machines (Preview)
-
-Until now, Defender for Cloud based its posture assessments for VMs on agent-based solutions. To help customers maximize coverage and reduce onboarding and management friction, we're releasing agentless scanning for VMs to preview.
-
-With agentless scanning for VMs, you get wide visibility on installed software and software CVEs. You get the visibility without the challenges of agent installation and maintenance, network connectivity requirements, and performance affect on your workloads. The analysis is powered by Microsoft Defender vulnerability management.
-
-Agentless vulnerability scanning is available in both Defender Cloud Security Posture Management (CSPM) and in [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
--- Learn more about [agentless scanning](concept-agentless-data-collection.md).-- Find out how to enable [agentless vulnerability assessment](enable-vulnerability-assessment-agentless.md).-
-### Defender for DevOps (Preview)
-
-Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across hybrid and multicloud environments including Azure, AWS, Google, and on-premises resources.
-
-Now, the new Defender for DevOps plan integrates source code management systems, like GitHub and Azure DevOps, into Defender for Cloud. With this new integration, we're empowering security teams to protect their resources from code to cloud.
-
-Defender for DevOps allows you to gain visibility into and manage your connected developer environments and code resources. Currently, you can connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) systems to Defender for Cloud and onboard DevOps repositories to Inventory and the new DevOps Security page. It provides security teams with a high-level overview of the discovered security issues that exist within them in a unified DevOps Security page.
-
-Security teams can now configure pull request annotations to help developers address secret scanning findings in Azure DevOps directly on their pull requests.
-
-You can configure the Microsoft Security DevOps tools on Azure Pipelines and GitHub workflows to enable the following security scans:
-
-| Name | Language | License |
-|--|--|--|
-| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
-| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
-| [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
-| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files common types: default passwords, SQL connection strings, Certificates with private keys| Not Open Source |
-| [Template Analyze](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
-| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
-| [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
-
-The following new recommendations are now available for DevOps:
-
-| Recommendation | Description | Severity |
-|--|--|--|
-| (Preview) [Code repositories should have code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
-| (Preview) [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false) | Defender for DevOps has found a secret in code repositories.  This should be remediated immediately to prevent a security breach.  Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories. (No related policy) | High |
-| (Preview) [Code repositories should have Dependabot scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
-| (Preview) [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) | (Preview) Code repositories should have infrastructure as code scanning findings resolved | Medium |
-| (Preview) [GitHub repositories should have code scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11/showSecurityCenterCommandBar~/false) | GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. (No related policy) | Medium |
-| (Preview) [GitHub repositories should have secret scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High |
-| (Preview) [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
-
-The Defender for DevOps recommendations replaced the deprecated vulnerability scanner for CI/CD workflows that was included in Defender for Containers.
-
-Learn more about [Defender for DevOps](defender-for-devops-introduction.md)
-
-### Regulatory Compliance dashboard now supports manual control management and detailed information on Microsoft's compliance status
-
-The compliance dashboard in Defender for Cloud is a key tool for customers to help them understand and track their compliance status. Customers can continuously monitor environments in accordance with requirements from many different standards and regulations.
-
-Now, you can fully manage your compliance posture by manually attesting to operational and non-technical controls. You can now provide evidence of compliance for controls that aren't automated. Together with the automated assessments, you can now generate a full report of compliance within a selected scope, addressing the entire set of controls for a given standard.
-
-In addition, with richer control information and in-depth details and evidence for Microsoft's compliance status, you now have all of the information required for audits at your fingertips.
-
-Some of the new benefits include:
--- **Manual customer actions** provide a mechanism for manually attesting compliance with non-automated controls. Including the ability to link evidence, set a compliance date and expiration date.--- Richer control details for supported standards that showcase **Microsoft actions** and **manual customer actions** in addition to the already existing automated customer actions.--- Microsoft actions provide transparency into MicrosoftΓÇÖs compliance status that includes audit assessment procedures, test results, and Microsoft responses to deviations.--- **Compliance offerings** provide a central location to check Azure, Dynamics 365, and Power Platform products and their respective regulatory compliance certifications.-
-Learn more on how to [Improve your regulatory compliance](regulatory-compliance-dashboard.md) with Defender for Cloud.
-
-### Auto-provisioning has been renamed to Settings & monitoring and has an updated experience
-
-We've renamed the Auto-provisioning page to **Settings & monitoring**.
-
-Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we're launching a new experience with the following changes:
-
-**The Defender for Cloud's plans page now includes**:
-- When you enable a Defender plan that requires monitoring components, those components are enabled for automatic provisioning with default settings. These settings can optionally be edited at any time.-- You can access the monitoring component settings for each Defender plan from the Defender plan page.-- The Defender plans page clearly indicates whether all the monitoring components are in place for each Defender plan, or if your monitoring coverage is incomplete.-
-**The Settings & monitoring page**:
-- Each monitoring component indicates the Defender plans to which it's related.-
-Learn more about [managing your monitoring settings](monitoring-components.md).
-
-### Defender Cloud Security Posture Management (CSPM)
-
-One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud Security Posture Management (CSPM). CSPM provides you with hardening guidance that helps you efficiently and effectively improve your security. CSPM also gives you visibility into your current security situation.
-
-We're announcing a new Defender plan: Defender CSPM. This plan enhances the security capabilities of Defender for Cloud and includes the following new and expanded features:
--- Continuous assessment of the security configuration of your cloud resources-- Security recommendations to fix misconfigurations and weaknesses-- Secure score-- Governance-- Regulatory compliance-- Cloud security graph-- Attack path analysis-- Agentless scanning for machines-
-Learn more about the [Defender CSPM plan](concept-cloud-security-posture-management.md).
-
-### MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations
-
-For security analysts, it's essential to identify the potential risks associated with security recommendations and understand the attack vectors, so that they can efficiently prioritize their tasks.
-
-Defender for Cloud makes prioritization easier by mapping the Azure, AWS and GCP security recommendations against the MITRE ATT&CK framework. The MITRE ATT&CK framework is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations, allowing customers to strengthen the secure configuration of their environments.
-
-The MITRE ATT&CK framework has been integrated in three ways:
--- Recommendations map to MITRE ATT&CK tactics and techniques.-- Query MITRE ATT&CK tactics and techniques on recommendations using the Azure Resource Graph.--
-### Defender for Containers now supports vulnerability assessment for Elastic Container Registry (Preview)
-
-Microsoft Defender for Containers now provides agentless vulnerability assessment scanning for Elastic Container Registry (ECR) in Amazon AWS. Expanding on coverage for multicloud environments, building on the release earlier this year of advanced threat protection and Kubernetes environment hardening for AWS and Google GCP. The agentless model creates AWS resources in your accounts to scan your images without extracting images out of your AWS accounts and with no footprint on your workload.
-
-Agentless vulnerability assessment scanning for images in ECR repositories helps reduce the attack surface of your containerized estate by continuously scanning images to identify and manage container vulnerabilities. With this new release, Defender for Cloud scans container images after they're pushed to the repository and continually reassess the ECR container images in the registry. The findings are available in Microsoft Defender for Cloud as recommendations, and you can use Defender for Cloud's built-in automated workflows to take action on the findings, such as opening a ticket for fixing a high severity vulnerability in an image.
-
-Learn more about [vulnerability assessment for Amazon ECR images](defender-for-containers-vulnerability-assessment-elastic.md).
- ## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/20/2023 Last updated : 04/16/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Three alerts in the Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-will-be-deprecated) | March 2023 | | [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
-| [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies) | April 2023 |
| [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 | | [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
You can also view the [full list of alerts](alerts-reference.md#defender-for-ser
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
-### Deprecation of App Service language monitoring policies
-
-The following App Service language monitoring policies are set to be deprecated because they generate false negatives and they don't necessarily provide better security. Instead, you should always ensure you're using a language version without any known vulnerabilities.
-
-| Policy name | Policy ID |
-|--|--|
-| [App Service apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) | 496223c3-ad65-4ecd-878a-bae78737e9ed |
-| [App Service apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) | 7008174a-fd10-4ef0-817e-fc820a951d73 |
-| [Function apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) | 9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc |
-| [Function apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) | 7238174a-fd10-4ef0-817e-fc820a951d73 |
-| [App Service apps that use PHP should use the latest 'PHP version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3)| 7261b898-8a84-4db8-9e04-18527132abb3 |
-
-Customers can use alternative built-in policies to monitor any specified language version for their App Services.
-
-These will no longer be in Defender for Cloud's built-in recommendations. You can add them as custom recommendations to have Defender for Cloud monitor them.
- ### Deprecation of legacy compliance standards across cloud environments **Estimated date for change: April 2023**
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Last updated 04/05/2023
Every security program includes multiple workflows for incident response. These processes might include notifying relevant stakeholders, launching a change management process, and applying specific remediation steps. Security experts recommend that you automate as many steps of those procedures as you can. Automation reduces overhead. It can also improve your security by ensuring the process steps are done quickly, consistently, and according to your predefined requirements.
-This article describes the workflow automation feature of Microsoft Defender for Cloud. This feature can trigger consumption Logic Apps on security alerts, recommendations, and changes to regulatory compliance. For example, you might want Defender for Cloud to email a specific user when an alert occurs. You'll also learn how to create Logic Apps using [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+This article describes the workflow automation feature of Microsoft Defender for Cloud. This feature can trigger consumption logic apps on security alerts, recommendations, and changes to regulatory compliance. For example, you might want Defender for Cloud to email a specific user when an alert occurs. You'll also learn how to create logic apps using [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
## Availability
This article describes the workflow automation feature of Microsoft Defender for
|-|:-| |Release state:|General availability (GA)| |Pricing:|Free|
-|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for Logic App creation and modification<br>If you want to use Logic App connectors, you may need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
+|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification<br>If you want to use Logic Apps connectors, you may need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| ## Create a logic app and define when it should automatically run
This article describes the workflow automation feature of Microsoft Defender for
Here you can enter: 1. A name and description for the automation.
- 1. The triggers that will initiate this automatic workflow. For example, you might want your Logic App to run when a security alert that contains "SQL" is generated.
+ 1. The triggers that will initiate this automatic workflow. For example, you might want your logic app to run when a security alert that contains "SQL" is generated.
> [!NOTE] > If your trigger is a recommendation that has "sub-recommendations", for example **Vulnerability assessment findings on your SQL databases should be remediated**, the logic app will not trigger for every new security finding; only when the status of the parent recommendation changes.
- 1. The consumption Logic App that will run when your trigger conditions are met.
+ 1. The consumption logic app that will run when your trigger conditions are met.
-1. From the Actions section, select **visit the Logic Apps page** to begin the Logic App creation process.
+1. From the Actions section, select **visit the Logic Apps page** to begin the logic app creation process.
- :::image type="content" source="media/workflow-automation/visit-logic.png" alt-text="Screenshot that shows where on the screen you need to select the visit the logic apps page in the actions section of the add workflow automation screen." border="true":::
+ :::image type="content" source="media/workflow-automation/visit-logic.png" alt-text="Screenshot that shows the actions section of the add workflow automation screen and the link to visit Azure Logic Apps." border="true":::
You'll be taken to Azure Logic Apps. 1. Select **(+) Add**.
- :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of the create a logic app screen." lightbox="media/workflow-automation/logic-apps-create-new.png":::
+ :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of the screen to create a logic app." lightbox="media/workflow-automation/logic-apps-create-new.png":::
1. Fill out all required fields and select **Review + Create**.
This article describes the workflow automation feature of Microsoft Defender for
[![Sample logic app.](media/workflow-automation/sample-logic-app.png)](media/workflow-automation/sample-logic-app.png#lightbox)
-1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Select **Refresh** to ensure your new Logic App is available for selection.
+1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Select **Refresh** to ensure your new logic app is available for selection.
![Refresh.](media/workflow-automation/refresh-the-list-of-logic-apps.png)
-1. Select your logic app and save the automation. The Logic App dropdown only shows Logic Apps with supporting Defender for Cloud connectors mentioned above.
+1. Select your logic app and save the automation. The logic app dropdown only shows those with supporting Defender for Cloud connectors mentioned above.
-## Manually trigger a Logic App
+## Manually trigger a logic app
-You can also run Logic Apps manually when viewing any security alert or recommendation.
+You can also run logic apps manually when viewing any security alert or recommendation.
-To manually run a Logic App, open an alert, or a recommendation and select **Trigger Logic App**:
+To manually run a logic app, open an alert, or a recommendation and select **Trigger logic app**:
-[![Manually trigger a Logic App.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
+[![Manually trigger a logic app.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
## Configure workflow automation at scale using the supplied policies
To implement these policies:
## Data types schemas
-To view the raw event schemas of the security alerts or recommendations events passed to the Logic App instance, visit the [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas). This can be useful in cases where you aren't using Defender for Cloud's built-in Logic App connectors mentioned above, but instead are using Logic App's generic HTTP connector - you could use the event JSON schema to manually parse it as you see fit.
+To view the raw event schemas of the security alerts or recommendations events passed to the logic app, visit the [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas). This can be useful in cases where you aren't using Defender for Cloud's built-in Logic Apps connectors mentioned above, but instead are using the generic HTTP connector - you could use the event JSON schema to manually parse it as you see fit.
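For orientation, here's a minimal sketch of how that manual parsing might look, assuming the alert event arrives as a JSON string. The field names `AlertDisplayName` and `Severity` are illustrative placeholders; the authoritative field list comes from the published data types schemas linked above.

```python
import json

# Illustrative payload only; take the real field names from the published
# workflow automation data types schemas.
raw_event = '{"AlertDisplayName": "Suspicious SQL activity", "Severity": "High"}'

def parse_alert(event_json: str) -> dict:
    """Extract a couple of fields from a security alert event."""
    event = json.loads(event_json)
    return {
        "title": event.get("AlertDisplayName", "<unknown>"),
        "severity": event.get("Severity", "<unknown>"),
    }

print(parse_alert(raw_event))  # {'title': 'Suspicious SQL activity', 'severity': 'High'}
```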
## FAQ - Workflow automation ### Does workflow automation support any business continuity or disaster recovery (BCDR) scenarios?
-When preparing your environment for BCDR scenarios, where the target resource is experiencing an outage or other disaster, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic App.
+When preparing your environment for BCDR scenarios, where the target resource is experiencing an outage or other disaster, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic Apps.
For every active automation, we recommend you create an identical (disabled) automation and store it in a different location. When there's an outage, you can enable these backup automations and maintain normal operations.
Unfortunately, this change came with an unavoidable breaking change. The breakin
## Next steps
-In this article, you learned about creating Logic Apps, automating their execution in Defender for Cloud, and running them manually. For more information, see the following documentation:
+In this article, you learned about creating logic apps, automating their execution in Defender for Cloud, and running them manually. For more information, see the following documentation:
- [Use workflow automation to automate a security response](/training/modules/resolve-threats-with-azure-security-center/) - [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md)
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
The following types of updates generate new records in the **SecurityAlert** tab
The [Microsoft Defender for IoT](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks. > [!div class="nextstepaction"]
-> [Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md)
+> [Install the Microsoft Defender for IoT solution](iot-advanced-threat-monitoring.md)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
The **Device inventory** page on the Azure portal supports new grouping categori
### Focused inventory in the Azure device inventory (Public preview)
-The **Device inventory** page on the Azure portal now includes a network location indication for your devices, to help focus your device inventory on the devices within your IoT/OT scope. See and filter which devices are defined as *local* or *routed*, according to your configured subnets. The *Network location* filter is on by default, and the *Network location* column can be added by editing the columns in the device inventory. For more information, see [Subnet](configure-sensor-settings-portal.md#subnet).
+The **Device inventory** page on the Azure portal now includes a network location indication for your devices, to help focus your device inventory on the devices within your IoT/OT scope.
+
+See and filter which devices are defined as *local* or *routed*, according to your configured subnets. The **Network location** filter is on by default. Add the **Network location** column by editing the columns in the device inventory.
+
+Configure your subnets either on the Azure portal or on your OT sensor. For more information, see:
+
+- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md#subnet)
+- [Define OT and IoT subnets on the OT sensor](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets)
### Configure OT sensor settings from the Azure portal (Public preview)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
Azure Digital Twins comes equipped with control plane APIs, data plane APIs, and
## Control plane APIs
-The control plane APIs are [ARM](../azure-resource-manager/management/overview.md) APIs used to manage your Azure Digital Twins instance as a whole, so they cover operations like creating or deleting your entire instance. You'll also use these APIs to create and delete endpoints.
-To call the APIs directly, reference the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
-
-Here are the SDKs currently available for the Azure Digital Twins control APIs.
-
-| SDK language | Package link | Reference documentation | Source code |
-| | | | |
-| .NET (C#) | [Azure.ResourceManager.DigitalTwins on NuGet](https://www.nuget.org/packages/Azure.ResourceManager.DigitalTwins) | [Reference for Azure DigitalTwins SDK for .NET](/dotnet/api/overview/azure/digitaltwins) | [Microsoft Azure Digital Twins management client library for .NET on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Azure.ResourceManager.DigitalTwins) |
-| Java | [azure-resourcemanager-digitaltwins on Maven](https://repo1.maven.org/maven2/com/azure/resourcemanager/azure-resourcemanager-digitaltwins/) | [Reference for Resource Management - Digital Twins](/java/api/overview/azure/digital-twins) | [Azure Resource Manager AzureDigitalTwins client library for Java on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins) |
-| JavaScript | [AzureDigitalTwinsManagement client library for JavaScript on npm](https://www.npmjs.com/package/@azure/arm-digitaltwins) | | [AzureDigitalTwinsManagement client library for JavaScript on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/arm-digitaltwins) |
-| Python | [azure-mgmt-digitaltwins on PyPI](https://pypi.org/project/azure-mgmt-digitaltwins/) | | [Microsoft Azure SDK for Python on GitHub](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins) |
-| Go | [azure-sdk-for-go/services/digitaltwins/mgmt](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) | | [Azure SDK for Go on GitHub](https://github.com/Azure/azure-sdk-for-go)
-
-You can also exercise control plane APIs by interacting with Azure Digital Twins through the [Azure portal](https://portal.azure.com) and [CLI](/cli/azure/dt).
+You can also exercise the control plane APIs by interacting with Azure Digital Twins through the [Azure portal](https://portal.azure.com) and [CLI](/cli/azure/dt).
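As a rough sketch of what a control plane call looks like outside the portal and CLI, the following Python snippet lists the Azure Digital Twins instances in a subscription using the `azure-mgmt-digitaltwins` and `azure-identity` packages. The subscription ID is a placeholder, and the exact operation names can vary between package versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.digitaltwins import AzureDigitalTwinsManagementClient

# Placeholder value; replace with your own subscription ID.
subscription_id = "<subscription-id>"

credential = DefaultAzureCredential()
client = AzureDigitalTwinsManagementClient(credential, subscription_id)

# A control plane operation: enumerate every Azure Digital Twins instance
# in the subscription and print its name and region.
for instance in client.digital_twins.list():
    print(instance.name, instance.location)
```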
## Data plane APIs
-The data plane APIs are the Azure Digital Twins APIs used to manage the elements within your Azure Digital Twins instance. They include operations like creating routes, uploading models, creating relationships, and managing twins, and can be broadly divided into the following categories:
-* `DigitalTwinModels` - The DigitalTwinModels category contains APIs to manage the [models](concepts-models.md) in an Azure Digital Twins instance. Management activities include upload, validation, retrieval, and deletion of models authored in DTDL.
-* `DigitalTwins` - The DigitalTwins category contains the APIs that let developers create, modify, and delete [digital twins](concepts-twins-graph.md) and their relationships in an Azure Digital Twins instance.
-* `Query` - The Query category lets developers [find sets of digital twins in the twin graph](how-to-query-graph.md) across relationships.
-* `Event Routes` - The Event Routes category contains APIs to [route data](concepts-route-events.md), through the system and to downstream services.
-* `Import Jobs` - The Jobs API lets you manage a long running, asynchronous action to [import models, twins, and relationships in bulk](#bulk-import-with-the-jobs-api).
-
-To call the APIs directly, reference the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage. You can also view the [data plane API reference documentation](/rest/api/azure-digitaltwins/).
-
-Here are the SDKs currently available for the Azure Digital Twins control APIs.
-
-| SDK language | Package link | Reference documentation | Source code |
-| | | | |
-| .NET (C#) | [Azure.DigitalTwins.Core on NuGet](https://www.nuget.org/packages/Azure.DigitalTwins.Core) | [Reference for Azure IoT Digital Twins client library for .NET](/dotnet/api/overview/azure/digitaltwins.core-readme) | [Azure IoT Digital Twins client library for .NET on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Azure.DigitalTwins.Core) |
-| Java | [com.azure:azure-digitaltwins-core on Maven](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar) | [Reference for Azure Digital Twins SDK for Java](/java/api/overview/azure/digital-twins) | [Azure IoT Digital Twins client library for Java on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core) |
-| JavaScript | [Azure Azure Digital Twins Core client library for JavaScript on npm](https://www.npmjs.com/package/@azure/digital-twins-core) | [Reference for @azure/digital-twins-core](/javascript/api/@azure/digital-twins-core) | [Azure Azure Digital Twins Core client library for JavaScript on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/digital-twins-core) |
-| Python | [Azure Azure Digital Twins Core client library for Python on PyPI](https://pypi.org/project/azure-digitaltwins-core/) | [Reference for azure-digitaltwins-core](/python/api/azure-digitaltwins-core/azure.digitaltwins.core) | [Azure Azure Digital Twins Core client library for Python on GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/digitaltwins/azure-digitaltwins-core) |
-You can also exercise date plane APIs by interacting with Azure Digital Twins through the [CLI](/cli/azure/dt).
+You can also exercise the data plane APIs by interacting with Azure Digital Twins through the [CLI](/cli/azure/dt).
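For comparison, here's a minimal data plane sketch using the `azure-digitaltwins-core` Python package: it runs a query against the twin graph. The instance host name is a placeholder, and the caller is assumed to already hold a data plane role such as Azure Digital Twins Data Reader.

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# Placeholder value; replace with the host name of your own instance.
endpoint = "https://<your-instance>.api.<region>.digitaltwins.azure.net"

client = DigitalTwinsClient(endpoint, DefaultAzureCredential())

# A data plane operation from the Query category: list every twin in the graph.
for twin in client.query_twins("SELECT * FROM digitaltwins"):
    print(twin["$dtId"])
```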
## Usage notes
external-attack-surface-management Asn Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/asn-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for ASN assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to ASN assets. Use these filters when searching
## Free form filters
-The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically.
+The following filters require that the user manually enters the value with which they want to search. This list is organized according to the number of applicable operators for each filter, then alphabetically.
| Filter name | Description | Value format | Applicable operators | ||-|--|-|
-| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
+| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN associates any public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
| Whois Admin Email | The email address of the listed administrator of a Whois record. | name@domain.com | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does Not Match` `In` `Not in` `Starts with in` `Does not start with in` `Matches in` `Does not match in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Whois Admin Name | The name of the listed administrator. | John Smith | | | Whois Admin Organization | The organization associated with the administrator. | Contoso Ltd. | |
external-attack-surface-management Contact Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/contact-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for contact assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to contact assets. Use these filters when searc
## Free form filters
-The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically. Note that many of these values are case-sensitive.
+The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically. Many of these values are case-sensitive.
| Filter name | Description | Value format | Applicable operators | |--|-||-|
external-attack-surface-management Data Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md
Title: Defender EASM Data Connections
description: "The data connector sends Defender EASM asset data to two different platforms: Microsoft Log Analytics and Azure Data Explorer. Users need to be active customers to export Defender EASM data to either tool, and data connections are subject to the pricing model for each respective platform." -+ # ms.prod: # To use ms.prod, uncomment it and delete ms.service Last updated 03/20/2023
external-attack-surface-management Deploying The Defender Easm Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md
Title: Creating a Defender EASM Azure resource
description: This article explains how to create a Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal. -+ Last updated 07/14/2022
external-attack-surface-management Discovering Your Attack Surface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/discovering-your-attack-surface.md
Title: Discovering your attack surface
description: Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that’s connected to known assets. -+ Last updated 07/14/2022
Before completing this tutorial, see the [What is discovery?](what-is-discovery.
## Accessing your automated attack surface
-Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that’s connected to known assets. It is recommended that all users search for their organization’s attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to your Attack Surface.
+Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that’s connected to known assets. It's recommended that all users search for their organization’s attack surface before creating a custom attack surface and running other discoveries. This process enables users to quickly access their inventory as Defender EASM refreshes the data, adding more assets and recent context to your Attack Surface.
1. When first accessing your Defender EASM instance, select “Getting Started” in the “General” section to search for your organization in the list of automated attack surfaces.
Microsoft has preemptively configured the attack surfaces of many organizations,
![Screenshot of pre-configured attack surface option](media/Tutorial-1.png)
-At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization’s infrastructure in Preview Mode. Please review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. Please read the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+At this point, the discovery runs in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization’s infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. Read the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
If you notice any missing assets or have other entities to manage that may not b
Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. By submitting a larger list of known assets to operate as discovery seeds, the discovery engine will return a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies. ## Discovery groups
-Custom discoveries are organized into Discovery Groups. They are independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
+Custom discoveries are organized into Discovery Groups. They're independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
## Creating a discovery group
Custom discoveries are organized into Discovery Groups. They are independent see
![Screenshot of pre-baked attack surface selection page,](media/Tutorial-7.png)
- Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
+ Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they aren't added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but don't belong to your organization.
Once your seeds have been selected, select **Review + Create**.
Custom discoveries are organized into Discovery Groups. They are independent see
![Screenshot of review + create screen](media/Tutorial-8.png)
-You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
+You are then taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you can see new assets added to your Approved Inventory.
## Next steps - [Understanding asset details](understanding-asset-details.md)
external-attack-surface-management Domain Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/domain-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for domain assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to domain assets. Use these filters when search
## Defined value filters
-The following filters provide a drop-down list of options to select. The available values are pre-defined.
+The following filters provide a drop-down list of options to select. The available values are predefined.
| Filter name | Description | Value format example | Applicable operators | ||-|-|--|
The following filters provide a drop-down list of options to select. The availab
## Free form filters
-The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically. Please note that many of these values are case-sensitive.
+The following filters require that the user manually enters the value with which they want to search. This list is organized according to the number of applicable operators for each filter, then alphabetically. Note that many values are case-sensitive.
| Filter name | Description | Value format example | Applicable operators | ||-|-|--|
external-attack-surface-management Host Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/host-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for host assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to host assets. Use these filters when searchin
## Defined value filters
-The following filters provide a drop-down list of options to select. The available values are pre-defined.
+The following filters provide a drop-down list of options to select. The available values are predefined.
| Filter name | Description | Value format example | Applicable operators | |-|-|-|--|
-| IPv4 | Indicates that the host resolves to a 32-bit number notated in four octets (e.g. 192.168.92.73). | true / false | `Equals` `Not Equals` |
+| IPv4 | Indicates that the host resolves to a 32-bit number notated in four octets (for example, 192.168.92.73). | true / false | `Equals` `Not Equals` |
| IPv6 | Indicates that the host resolves to an IP comprised of 128-bit hexadecimal digits noted in eight four-digit groups. | true / false | | | Is Mail Server Record | Indicates that the host powers a mail server. | true / false | | | Is Name Server Record | Indicates that the host powers a name server. | true / false | |
The following filters provide a drop-down list of options to select. The availab
## Free form filters
-The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically. Please note that many of these values are case-sensitive.
+The following filters require that the user manually enters the value with which they want to search. This list is organized according to the number of applicable operators for each filter, then alphabetically. Note that many values are case-sensitive.
| Filter name | Description | Value format example | Applicable operators | |-|-|-|-| | Port State | Indicates the status of the observed port. | Open, Filtered | `Equals` `In` | | Port | Any ports detected on the asset. | 443, 80 | `Equals` `Not Equals` `In` `Not In` |
-| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
+| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN is associated to any public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
| Affected CVSS Score | Searches for assets with a CVE that matches a specific numerical score or range of scores. | Numerical (1-10) | `Equals` `Not Equals` `In` `Not In` `Greater Than or Equal To` `Less Than or Equal To` `Between` `Empty` `Not Empty` | | Affected CVSS v3 Score | Searches for assets with a CVE v3 that matches a specific numerical score or range of scores. | Numerical (1-10) | |
-| Attribute Type | Additional services running on the asset. This can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
+| Attribute Type | Services running on the asset. These services can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
| Attribute Type & Value | The attribute type and value within a single field. | address 192.168.92.73 | | | Attribute Value | The values for any attributes found on the asset. | 192.168.92.73 | | | CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CWE-89 | |
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
Title: Overview
description: Microsoft Defender External Attack Surface Management (Defender EASM) continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. -+ Last updated 07/14/2022
external-attack-surface-management Inventory Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/inventory-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management (Defender EASM), helping users surface specific subsets of inventory assets based on selected parameters. -+ Last updated 12/14/2022
external-attack-surface-management Ip Address Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ip-address-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for IP address assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
external-attack-surface-management Ip Block Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ip-block-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for IP block assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to IP block assets. Use these filters when sear
## Defined value filters
-The following filters provide a drop-down list of options to select. The available values are pre-defined.
+The following filters provide a drop-down list of options to select. The available values are predefined.
| Filter name | Description | Value format | Applicable operators | |--|-|-||
The following filters provide a drop-down list of options to select. The availab
## Free form filters
-The following filters require that the user manually enters the value with which they want to search. This list is organized by the number of applicable operators for each filter, then alphabetically.
+The following filters require that the user manually enters the value with which they want to search. This list is organized according to the number of applicable operators for each filter, then alphabetically.
| Filter name | Description | Value format | Applicable operators | ||-|--||
-| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
+| ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN is associated to any public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
| BGP Prefix | Any text values in the BGP prefix. | 123 4567 89 192.168.92.73/16 | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | IP Block | The IP block that is associated with the asset. | 192.168.92.73/16 | | | Whois Admin Email | The email address of the listed administrator of a Whois record. | name@domain.com | |
external-attack-surface-management Labeling Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/labeling-inventory-assets.md
description: This article outlines how to label assets with custom text values of a user's choice for improved categorization and operationalization of their inventory data. -+ Last updated 3/1/2022
external-attack-surface-management Page Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/page-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for page assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
These filters specifically apply to page assets. Use these filters when searchin
## Defined value filters
-The following filters provide a drop-down list of options to select. The available values are pre-defined.
+The following filters provide a drop-down list of options to select. The available values are predefined.
| Filter name | Description | Value format example | Applicable operators | ||-|-|--| | IPv4 | Indicates that the host resolves to a 32-bit number notated in four octets (e.g. 192.168.92.73). | true / false | `Equals` `Not Equals` |
-| IPv6 | Indicates that the host resolves to an IP comprised of 128-bit hexadecimal digits noted in eight four-digit groups. | true / false | |
+| IPv6 | Indicates that the host resolves to an IP comprised of 128-bit hexadecimal digits noted in eight 4-digit groups. | true / false | |
| Live | Indicates if the page is hosting live web content. | true / false | | | Parked Domain | Indicates whether a domain is registered but not connected to an online service (website, email hosting). | true / false | | | Parked Page | Indicates whether a webpage is registered but not connected to an online service (website, email hosting). | true / false | |
The following filters require that the user manually enters the value with which
| Affected CVSS Score | Searches for assets with a CVE that matches a specific numerical score or range of scores. | Numerical (1-10) | `Equals` `Not Equals` `In` `Not In` `Greater Than or Equal To` `Less Than or Equal To` `Between` `Empty` `Not Empty` | | Affected CVSS v3 Score | Searches for assets with a CVE v3 that matches a specific numerical score or range of scores. | Numerical (1-10) | | | Final Response Code | The final response code associated to the final URL. | 200 | |
-| Reponse Code | Other detected responses codes when attempting to access the asset. | 400 | |
+| Response Code | Other detected responses codes when attempting to access the asset. | 400 | |
| Attribute Type | Additional services running on the asset. This can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Attribute Type & Value | The attribute type and value within a single field. | address 192.168.92.73 | | | Attribute Value | The values for any attributes found on the asset. | 192.168.92.73 | |
The following filters require that the user manually enters the value with which
| State/Province Code | The state or province code associated with the state of origin. | WA | | | Web Component Type | The infrastructure type of a detected component. | Hosting Provider, DDOS Protection, Service, Server | | | Domain | The parent domain of the page. | contoso.com | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does Not Match` `In` `Not in` `Starts with in` `Does not start with in` `Matches in` `Does not match in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
-| Error | Error message when retrieving a page. | It is recommended that users use the “contains” operator to find errors that match a specific keyword (e.g. “script error”). | |
+| Error | Error message when retrieving a page. | It's recommended that users use the “contains” operator to find errors that match a specific keyword (e.g. “script error”). | |
| Final URL | The final URL that is presented via the page URL. This value will be the same as the Page name unless there were detected redirects. | https://contoso.com/mainpage.html | | | Framework | Framework services running on the asset. | PHP, J2EE, Java | | | Host | Any host(s) associated with the asset. | host1.contoso.com | |
external-attack-surface-management Ssl Certificate Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ssl-certificate-asset-filters.md
description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management for SSL certificate assets specifically, including operators and applicable field values. -+ Last updated 12/14/2022
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
Title: Understanding asset details
description: Understanding asset details- Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization’s unique Internet-exposed attack surface. -+ Last updated 07/14/2022
external-attack-surface-management Understanding Billable Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-billable-assets.md
description: This article describes how users are billed for their Defender EASM resource usage, and guides them to the dashboard that displays their counts. -+ Last updated 11/28/2022
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
Title: Understanding dashboards
description: Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Attack Surface inventory. -+ Last updated 07/14/2022
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
Title: Understanding inventory assets
description: Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets. -+ Last updated 07/14/2022
These asset states are uniquely processed and monitored to ensure that customers
- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md) - [Understanding asset details](understanding-asset-details.md)-- [Using and managing discovery](using-and-managing-discovery.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
Title: Using and managing discovery
description: Using and managing discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization’s unique Internet-exposed attack surface. -+ Last updated 07/14/2022
Before you run a custom discovery, see the [What is discovery?](what-is-discover
## Accessing your automated attack surface
-Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that’s connected to known assets. It is recommended that all users search for their organization’s attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to your Attack Surface.
+Microsoft has pre-emptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that’s connected to known assets. It's recommended that all users search for their organization’s attack surface before creating a custom attack surface and running other discoveries. This process enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to your Attack Surface.
When first accessing your Defender EASM instance, select “Getting Started” in the “General” section to search for your organization in the list of automated attack surfaces. Then select your organization from the list and click “Build my Attack Surface”.
-At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization’s infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+At this point, the discovery runs in the background. If you selected a preconfigured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization’s infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for more assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
-If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
+If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure that is clearly linked to your organization, elect to run customized discoveries to detect these outlier assets.
## Customizing discovery
-Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. By submitting a larger list of known assets to operate as discovery seeds, the discovery engine will return a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
+Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. By submitting a larger list of known assets to operate as discovery seeds, the discovery engine returns a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
### Discovery groups
Custom discoveries are organized into Discovery Groups. They are independent see
:::image type="content" source="media/Discovery_6.png" alt-text="Screenshot of pre-baked attack surface selection page, then output in seed list.":::
- :::image type="content" source="media/Discovery_7.png" alt-text="Screenshot of pre-baked attack surface selection page..":::
+ :::image type="content" source="media/Discovery_7.png" alt-text="Screenshot of pre-baked attack surface selection page.":::
- Alternatively, users can manually input their seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
+ Alternatively, users can manually input their seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they aren't added to your inventory if detected. For example, exclusions are useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
Once your seeds have been selected, select **Review + Create**.
Custom discoveries are organized into Discovery Groups. They are independent see
:::image type="content" source="media/Discovery_8.png" alt-text="Screenshot of review + create screen.":::
- You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
+ You'll then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you'll see new assets added to your Approved Inventory.
### Viewing and editing discovery groups
Click on any discovery group to view more information, edit the group, or immedi
The discovery group details page contains the run history for the group. Once expanded, this section displays key information about each discovery run that has been performed on the specific group of seeds. The Status column indicates whether the run is “In Progress”, “Complete,” or “Failed”. This section also includes “started” and “completed” timestamps and counts of the total number of assets versus new assets discovered.
-Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click “Details”. This opens a right-hand pane that lists all the seeds and exclusions by kind and name.
+Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click “Details”. This action opens a right-hand pane that lists all the seeds and exclusions by kind and name.
:::image type="content" source="media/Discovery_10.png" alt-text="Screenshot of run history for disco group screen.":::
The seed list view displays seed values with three columns: type, source name, a
### Exclusions
-Similarly, you can click the “Exclusions” tab to see a list of entities that have been excluded from the discovery group. This means that these assets will not be used as discovery seeds or added to your inventory. It is important to note that exclusions only impact future discovery runs for an individual discovery group. The “type” field displays the category of the excluded entity. The source name is the value that was inputted in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups where this exclusion is present; each value is clickable, taking you to the details page for that discovery group.
+Similarly, you can click the “Exclusions” tab to see a list of entities that have been excluded from the discovery group. This means that these assets will not be used as discovery seeds or added to your inventory. It's important to note that exclusions only impact future discovery runs for an individual discovery group. The “type” field displays the category of the excluded entity. The source name is the value that was inputted in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups where this exclusion is present; each value is clickable, taking you to the details page for that discovery group.
## Next steps
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
Title: What is Discovery?
description: What is Discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization’s unique Internet-exposed attack surface. -+ Last updated 07/14/2022
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
IDPS signature rules have the following properties:
||| |Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether the firewall drops or alerts upon matched traffic. The signature mode below can override the IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You receive alerts and suspicious traffic is blocked. A few signature categories are defined as “Alert Only”, therefore by default, traffic matching their signatures isn't blocked even though IDPS mode is set to “Alert and Deny”. Customers may override this by customizing these specific signatures to “Alert and Deny” mode.<br><br> Note: IDPS alerts are available in the portal via a network rule log query.|
-|Severity |Each signature has an associated severity level and assigned priority that indicates the probability that the signature is an actual attack.<br>- **Low (priority 3)**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium (priority 2)**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High (priority 1)**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
+|Severity |Each signature has an associated severity level and assigned priority that indicates the probability that the signature is an actual attack.<br>- **Low (priority 1)**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium (priority 2)**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High (priority 3)**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.| |Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE.|
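Since the Mode property notes that IDPS alerts surface through network rule log queries, here is a hedged sketch of pulling those records programmatically with the `azure-monitor-query` Python package. The workspace ID is a placeholder, and the KQL table and column names are assumptions that depend on how the firewall's diagnostic settings are configured.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder value; replace with the Log Analytics workspace that receives firewall logs.
workspace_id = "<workspace-id>"

# Illustrative KQL; adjust the table and column names to your diagnostic settings.
query = """
AzureDiagnostics
| where Category == "AzureFirewallNetworkRule"
| where msg_s has "IDS"
| take 20
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

# Print the matching rows from each returned table.
for table in response.tables:
    for row in table.rows:
        print(row)
```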
governance Agent Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/agent-notes.md
Title: Azure Automanage machine configuration agent release notes description: Details guest configuration agent release notes, issues, and frequently asked questions.- Last updated 09/13/2022 -- # Azure Automanage machine configuration agent release notes
governance Machine Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-assignments.md
Title: Understand machine configuration assignment resources description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines.- Last updated 01/12/2023 -- # Understand machine configuration assignment resources
governance Machine Configuration Azure Automation Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-azure-automation-migration.md
Title: Azure Automation State Configuration to machine configuration migration p
description: This article provides process and technical guidance for customers interested in moving from DSC version 2 in Azure Automation to version 3 in Azure Policy. Last updated 03/06/2023 - -- # Azure Automation state configuration to machine configuration migration planning
the configuration to a MOF file and create a machine configuration package.
Some modules might encounter compatibility issues with machine configuration. The most common problems are related to .NET framework vs .NET core. Detailed technical information is available on the page,
-[Differences between Windows PowerShell 5.1 and PowerShell (core) 7.x](/powershell/scripting/whats-new/differences-from-windows-powershell)
+[Differences between Windows PowerShell 5.1 and PowerShell (core) 7.x](/powershell/gallery/how-to/working-with-local-psrepositories)
One option to resolve compatibility issues is to run commands in Windows PowerShell from within a module that is imported in PowerShell 7, by running `powershell.exe`.
governance Machine Configuration Create Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-assignment.md
Title: How to create a machine configuration assignment using templates
description: Learn how to deploy configurations to machines directly from Azure Resource Manager. Last updated 07/25/2022 --- # How to create a machine configuration assignment using templates
governance Machine Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md
Title: How to create custom machine configuration policy definitions
description: Learn how to create a machine configuration policy. Last updated 10/17/2022 --- # How to create custom machine configuration policy definitions
governance Machine Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-publish.md
Title: How to publish custom machine configuration package artifacts
description: Learn how to publish a machine configuration package file to Azure blob storage and get a SAS token for secure access. Last updated 07/25/2022 - -- # How to publish custom machine configuration package artifacts
governance Machine Configuration Create Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-setup.md
Title: How to install the machine configuration authoring module
description: Learn how to install the PowerShell module for creating and testing machine configuration policy definitions and assignments. Last updated 01/13/2023 --- # How to set up a machine configuration authoring environment
governance Machine Configuration Create Signing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-signing.md
Title: How to sign machine configuration packages
description: You can optionally sign machine configuration content packages and force the agent to only allow signed content Last updated 07/25/2022 --- # How to sign machine configuration packages
governance Machine Configuration Create Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-test.md
Title: How to test machine configuration package artifacts
description: The experience creating and testing packages that audit or apply configurations to machines. Last updated 07/25/2022 --- # How to test machine configuration package artifacts
To run PowerShell as "LocalSystem" in Windows, use the SysInternals tool
[PSExec](/sysinternals/downloads/psexec). To run PowerShell as "Root" in Linux, use the
-[Su command](https://manpages.ubuntu.com/manpages/man1/su.1.html).
+[sudo command](https://www.sudo.ws/docs/man/sudo.man/).
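For example, here's a minimal sketch of running PowerShell elevated on Linux with sudo. It isn't from the source article and assumes PowerShell 7 is installed and available as the `pwsh` executable:

```bash
# Start an interactive PowerShell 7 session as root (assumes 'pwsh' is installed).
sudo pwsh

# Or run a single command as root without keeping the session open.
sudo pwsh -NoProfile -Command 'whoami'
```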
## Validate the configuration package meets requirements
governance Machine Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create.md
Title: How to create custom machine configuration package artifacts
description: Learn how to create a machine configuration package file. Last updated 02/14/2023 --- # How to create custom machine configuration package artifacts
governance Machine Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-custom.md
Title: Changes to behavior in PowerShell Desired State Configuration for machine configuration description: This article describes the platform used to deliver configuration changes to machines through Azure Policy.- Last updated 07/15/2022 -- # Changes to behavior in PowerShell Desired State Configuration for machine configuration
governance Machine Configuration Dsc Extension Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-dsc-extension-migration.md
Title: Planning a change from Desired State Configuration extension for Linux to
description: Guidance for moving from Desired State Configuration extension to the machine configuration feature of Azure Policy. Last updated 07/25/2022 --- # Planning a change from Desired State Configuration extension for Linux to machine configuration
governance Machine Configuration Policy Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-policy-effects.md
Title: Remediation options for machine configuration description: Azure Policy's machine configuration feature offers options for continuous remediation or control using remediation tasks.- Last updated 07/25/2022 -- # Remediation options for machine configuration
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand Azure Automanage Machine Configuration description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines.- Last updated 03/02/2023 -- # Understand the machine configuration feature of Azure Automanage
Capture information from log files using
following example Bash script can be helpful. ```bash
-linesToIncludeBeforeMatch=0
-linesToIncludeAfterMatch=10
-logPath=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
-egrep -B $linesToIncludeBeforeMatch -A $linesToIncludeAfterMatch 'DSCEngine|DSCManagedEngine' $logPath | tail
+LINES_TO_INCLUDE_BEFORE_MATCH=0
+LINES_TO_INCLUDE_AFTER_MATCH=10
+LOGPATH=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
+egrep -B $LINES_TO_INCLUDE_BEFORE_MATCH -A $LINES_TO_INCLUDE_AFTER_MATCH 'DSCEngine|DSCManagedEngine' $LOGPATH | tail
``` ### Agent files
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
You can also enter the key URI here:
Additionally, ensure that the soft delete is enabled in the properties of the Key Vault. Not completing these steps will result in a deployment error. For more information, see [Verify if soft delete is enabled on a key vault and enable soft delete](../../key-vault/general/key-vault-recovery.md?tabs=azure-portal#verify-if-soft-delete-is-enabled-on-a-key-vault-and-enable-soft-delete). > [!NOTE]
-> Using customer-managed keys in Brazil South and East Asia regions requires an Enterprise Application ID generated by Microsoft. You can request Enterprise Application ID by creating a one-time support ticket through the Azure portal. After receiving the Application ID, follow [the instructions to register the application](/azure/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys?tabs=azure-portal#the-customer-grants-the-service-providers-app-access-to-the-key-in-the-key-vault).
+> Using customer-managed keys in the Brazil South, East Asia, and Southeast Asia Azure regions requires an Enterprise Application ID generated by Microsoft. You can request Enterprise Application ID by creating a one-time support ticket through the Azure portal. After receiving the Application ID, follow [the instructions to register the application](/azure/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys?tabs=azure-portal#the-customer-grants-the-service-providers-app-access-to-the-key-in-the-key-vault).
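As a quick way to confirm the soft delete requirement mentioned above, you can query the vault from the Azure CLI. This is only a sketch; it assumes the Azure CLI is installed and signed in to the right subscription, and the vault name is a placeholder:

```bash
# Check whether soft delete is enabled on the key vault used for the
# customer-managed key. "my-fhir-key-vault" is a placeholder name.
az keyvault show \
  --name "my-fhir-key-vault" \
  --query "properties.enableSoftDelete" \
  --output tsv
# Prints "true" when soft delete is enabled.
```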
For existing FHIR accounts, you can view the key encryption choice (**Service-managed key** or **Customer-managed key**) in the **Database** blade as shown below. The configuration option can't be modified once it's selected. However, you can modify and update your key.
healthcare-apis Overview Of Fhir Destination Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md
Previously updated : 04/14/2023 Last updated : 04/17/2023
The MedTech service requires two types of [JSON](https://www.json.org/) mappings
## FHIR destination mapping basics
-The FHIR destination mapping controls how the data extracted from a device message is mapped into a FHIR observation.
+The FHIR destination mapping controls how the normalized data extracted from a device message is mapped into a FHIR observation.
- Should an observation be created for a point in time or over a period of an hour? - What codes should be added to the observation?
The FHIR destination mapping controls how the data extracted from a device messa
These data types are all options the FHIR destination mapping configuration controls.
-Once a device message is transformed into a normalized data model, the data is collected for transformation to a [FHIR Observation](https://www.hl7.org/fhir/observation.html). If the Observation type is [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), the data is grouped according to device identifier, measurement type, and time period (time period can be either 1 hour or 24 hours). The output of this grouping is sent for conversion into a single [FHIR Observation](https://www.hl7.org/fhir/observation.html) that represents the time period for that data type. For other Observation types ([Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept) and [string](https://www.hl7.org/fhir/datatypes.html#string)) data is not grouped, but instead each measurement is transformed into a single Observation representing a point in time.
+Once device data is transformed into a normalized data model, the normalized data is collected for transformation to a [FHIR Observation](https://www.hl7.org/fhir/observation.html). If the Observation type is [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), the data is grouped according to device identifier, measurement type, and time period (time period can be either 1 hour or 24 hours). The output of this grouping is sent for conversion into a single [FHIR Observation](https://www.hl7.org/fhir/observation.html) that represents the time period for that data type. For other Observation types ([Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept) and [string](https://www.hl7.org/fhir/datatypes.html#string)) data isn't grouped, but instead each measurement is transformed into a single Observation representing a point in time.
> [!TIP] > For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
An enrollment group that uses [symmetric key attestation](concepts-symmetric-key
::: zone pivot="programming-language-ansi-c"
-In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device during the device's boot sequence.
+In this section, you'll prepare a development environment that's used to build the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device during the device's boot sequence.
1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
In this section, you'll prepare a development environment that's used to build t
1. Copy the tag name for the latest release of the Azure IoT C SDK.
-1. In a Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository (replace `<release-tag>` with the tag you copied in the previous step).
+1. In a Windows command prompt, run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
In this section, you'll prepare a development environment that's used to build t
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-node.git --recursive
In this section, you'll prepare a development environment that's used to build t
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+2. Clone the [Azure IoT Device SDK for Python](https://github.com/Azure/azure-iot-sdk-python/tree/v2) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
- ```
+ ```cmd
+ git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
+ ```
+
+ >[!NOTE]
+ >The samples used in this tutorial are in the **v2** branch of the azure-iot-sdk-python repository. V3 of the Python SDK is available to use in beta. For information about updating V2 code samples to use a V3 release of the Python SDK, see [Azure IoT Device SDK for Python migration guide](https://github.com/Azure/azure-iot-sdk-python/blob/main/migration_guide_provisioning.md).
::: zone-end
In this section, you'll prepare a development environment that's used to build t
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
To update and run the provisioning sample with your device information:
3. Open a command prompt and go to the *SymmetricKeySample* in the cloned sdk repository: ```cmd
- cd .\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample
+ cd .\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample
``` 4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
To update and run the provisioning sample with your device information:
7. You should see something similar to the following output. A "TestMessage" string is sent to the hub as a test message. ```output
- D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --s 0ne00000A0A --i sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ D:\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample>dotnet run --i 0ne00000A0A --r sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
Initializing the device provisioning client... Initialized for registration Id sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6.
To update and run the provisioning sample with your device information:
5. Open a command prompt for building. Go to the provisioning sample project folder of the Java SDK repository. ```cmd
- cd azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-symmetrickey-individual-sample
+ cd azure-iot-sdk-java\provisioning\provisioning-device-client-samples\provisioning-symmetrickey-individual-sample
``` 6. Build the sample.
iot-dps How To Provision Multitenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-provision-multitenant.md
For each VM:
2. Find and copy the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
-3. Clone the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) on both VMs. Use the tag you found in the previous step as the value for the `-b` parameter:
+3. Clone the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) on both VMs. Use the tag you found in the previous step as the value for the `-b` parameter, for example: `lts_01_2023`.
```bash git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The DPS SDKs help to provision devices to your IoT hubs. Microsoft also provides
## Device SDKs
-The DPS device SDKs provide implementations of the [Register](/rest/api/iot-dps/device/runtime-registration/register-device) API and others that devices call to provision through DPS. The device SDKs can run on general MPU-based computing devices such as a PC, tablet, smartphone, or Raspberry Pi. The SDKs support development in C and in modern managed languages including in C#, Node.JS, Python, and Java.
-
-| Platform | Package | Code repository | Samples | Quickstart | Reference |
-| --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
-| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-ansi-c&tabs=windows)|[Reference](https://github.com/Azure/azure-iot-sdk-c/) |
-| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=windows)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
-| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/azure-iot-provisioning-device) |
-| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-python&tabs=windows)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
-
-> [!WARNING]
-> The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
### Embedded device SDKs
-These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
-
-| RTOS | SDK | Source | Samples | Reference |
-| :-- | :-- | :-- | :-- | :-- |
-| **Azure RTOS** | Azure RTOS Middleware | [GitHub](https://github.com/azure-rtos/netxduo) | [Quickstarts](../iot-develop/quickstart-devkit-mxchip-az3166.md) | [Reference](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot) |
-| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) |
-| **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) |
-
-Learn more about the device and embedded device SDKs in the [IoT Device Development documentation](../iot-develop/about-iot-sdks.md).
## Service SDKs
-The DPS service SDKs help you build backend applications to manage enrollments and registration records in DPS instances.
-
-| Platform | Package | Code repository | Samples | Quickstart | Reference |
-| --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
-| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=symmetrickey)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) |
-| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-nodejs&tabs=symmetrickey)|[Reference](/javascript/api/azure-iot-provisioning-service) |
## Management SDKs
-The DPS management SDKs help you build backend applications that manage the DPS instances and their metadata in your Azure subscription.
-
-| Platform | Package | Code repository | Reference |
-| --|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| [Reference](/dotnet/api/overview/azure/resourcemanager.deviceprovisioningservices-readme) |
-| Java|[Maven](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-deviceprovisioningservices) |[GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/deviceprovisioningservices/azure-resourcemanager-deviceprovisioningservices)| [Reference](/java/api/com.azure.resourcemanager.deviceprovisioningservices) |
-| Node.js|[npm](https://www.npmjs.com/package/@azure/arm-deviceprovisioningservices)|[GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/deviceprovisioningservices/arm-deviceprovisioningservices)|[Reference](/javascript/api/overview/azure/arm-deviceprovisioningservices-readme) |
-| Python|[pip](https://pypi.org/project/azure-mgmt-iothubprovisioningservices/) |[GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/iothub/azure-mgmt-iothubprovisioningservices)|[Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
## Next steps
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
In this section, you prepare a development environment that's used to build the
4. Copy the tag name for the latest release of the Azure IoT C SDK.
-5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository(replace `<release-tag>` with the tag you copied in the previous step).
+5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
In this section, you prepare a development environment that's used to build the
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-node.git --recursive
In this section, you prepare a development environment that's used to build the
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for Python](https://github.com/Azure/azure-iot-sdk-python/tree/v2) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
- ```
+ ```cmd
+ git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
+ ```
+
+ >[!NOTE]
+ >The samples used in this tutorial are in the **v2** branch of the azure-iot-sdk-python repository. V3 of the Python SDK is available to use in beta. For information about updating V2 code samples to use a V3 release of the Python SDK, see [Azure IoT Device SDK for Python migration guide](https://github.com/Azure/azure-iot-sdk-python/blob/main/migration_guide_provisioning.md).
::: zone-end
In this section, you prepare a development environment that's used to build the
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
To update and run the provisioning sample with your device information:
3. Open a command prompt and go to the *SymmetricKeySample* in the cloned sdk repository: ```cmd
- cd '.\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample\'
+ cd '.\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample\'
``` 4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters are used in this article when running the sample. Review the code in this file. No changes are needed.
To update and run the provisioning sample with your device information:
7. You should now see something similar to the following output. A "TestMessage" string is sent to the hub as a test message. ```output
- D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --i 0ne00000A0A --r symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ D:\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample>dotnet run --i 0ne00000A0A --r symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
Initializing the device provisioning client... Initialized for registration Id symm-key-csharp-device-01.
To update and run the provisioning sample with your device information:
5. Open a command prompt for building. Go to the provisioning sample project folder of the Java SDK repository. ```cmd
- cd azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-symmetrickey-individual-sample
+ cd azure-iot-sdk-java\provisioning\provisioning-device-client-samples\provisioning-symmetrickey-individual-sample
``` 6. Build the sample.
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll prepare a development environment used to build the [Azu
4. Copy the tag name for the latest release of the Azure IoT C SDK.
-5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
+5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
```cmd/sh git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
In this section, you'll build and execute a sample that reads the endorsement ke
1. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd '.\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample\'
+ cd '.\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\TpmSample\'
``` 2. Type the following command to build and run the TPM device provisioning sample. Copy the endorsement key returned from your TPM 2.0 hardware security module to use later when enrolling your device.
In this section, you'll configure sample code to use the [Advanced Message Queui
3. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd '.\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample\'
+ cd '.\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\TpmSample\'
``` 4. Run the following command to register your device. Replace `<IdScope>` with the value for the DPS you copied and `<RegistrationId>` with the value you used when creating the device enrollment.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll prepare a development environment that's used to build t
3. Copy the tag name for the latest release of the Azure IoT C SDK.
-4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step.
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
git clone https://github.com/Azure/azure-iot-sdk-csharp.git
::: zone pivot="programming-language-nodejs"
-In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-node.git
git clone https://github.com/Azure/azure-iot-sdk-node.git
::: zone pivot="programming-language-python"
-In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Device SDK for Python](https://github.com/Azure/azure-iot-sdk-python/tree/v2) GitHub repository using the following command:
```cmd
-git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
```
+>[!NOTE]
+>The samples used in this tutorial are in the **v2** branch of the azure-iot-sdk-python repository. V3 of the Python SDK is available to use in beta. For information about updating V2 code samples to use a V3 release of the Python SDK, see [Azure IoT Device SDK for Python migration guide](https://github.com/Azure/azure-iot-sdk-python/blob/main/migration_guide_provisioning.md).
+ ::: zone-end ::: zone pivot="programming-language-java"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
In this section, you'll use your Windows command prompt.
:::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
-3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\Getting Started\X509Sample* directory off the directory where you cloned the samples on your computer.
+3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer.
4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section). The certificate file will default to *./certificate.pfx*, and the sample prompts for the .pfx password.
In this section, you use both your Windows command prompt and your Git Bash prom
1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK ```cmd
- cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
+ cd .\azure-iot-sdk-java\provisioning\provisioning-device-client-samples\provisioning-X509-sample
``` 1. Enter the provisioning service and X.509 identity information in the sample code. This information is used during provisioning, for attestation of the simulated device, prior to device registration.
- 1. Open the file `.\src\main\java\samples\com/microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
+ 1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
1. Update the following values with the **ID Scope** and **Provisioning Service Global Endpoint** that you copied previously.
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
This section shows you how to create a .NET Core console app that adds an indivi
1. Go to the sample folder: ```cmd
- cd azure-iot-sdk-java\provisioning\provisioning-samples\service-enrollment-sample
+ cd azure-iot-sdk-java\provisioning\provisioning-service-client-samples\service-enrollment-sample
``` 1. Open the file *\src\main\java\samples\com\microsoft\azure\sdk\iot\ServiceEnrollmentSample.java* in an editor.
AToAAQALAAMAsgAgg3GXZ0SEs/gakMyNRqXXJP1S124GUgtk8qHaGzMUaaoABgCAAEMAEAgAAAAAAAEA
:::zone pivot="programming-language-java"
-1. From the *azure-iot-sdk-java\provisioning\provisioning-samples\service-enrollment-sample* folder in your command prompt, run the following command to build the sample:
+1. From the *azure-iot-sdk-java\provisioning\provisioning-service-client-samples\service-enrollment-sample* folder in your command prompt, run the following command to build the sample:
```cmd\sh mvn install -DskipTests
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
This section shows you how to create a Node.js script that adds an enrollment gr
1. From the location where you downloaded the repo, go to the sample folder: ```cmd\sh
- cd azure-iot-sdk-java\provisioning\provisioning-samples\service-enrollment-group-sample
+ cd azure-iot-sdk-java\provisioning\provisioning-service-client-samples\service-enrollment-group-sample
``` 1. Open the file *_/src/main/java/samples/com/microsoft/azure/sdk/iot/ServiceEnrollmentGroupSample.java_* in an editor of your choice.
This section shows you how to create a Node.js script that adds an enrollment gr
:::zone pivot="programming-language-java"
-1. From the *azure-iot-sdk-java\provisioning\provisioning-samples\service-enrollment-group-sample* folder in your command prompt, run the following command to build the sample:
+1. From the *azure-iot-sdk-java\provisioning\provisioning-service-client-samples\service-enrollment-group-sample* folder in your command prompt, run the following command to build the sample:
```cmd\sh mvn install -DskipTests
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
This section is oriented toward a Windows-based workstation. For a Linux example
2. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
+3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter, for example: `lts_01_2023`.
```cmd/sh git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
In this section, you'll prepare a development environment used to build the [Azu
3. Copy the tag name for the latest release of the Azure IoT C SDK.
-4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step.
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
git clone https://github.com/Azure/azure-iot-sdk-csharp.git
::: zone pivot="programming-language-nodejs"
-In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-node.git
git clone https://github.com/Azure/azure-iot-sdk-node.git
::: zone pivot="programming-language-python"
-In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Device SDK for Python](https://github.com/Azure/azure-iot-sdk-python/tree/v2) GitHub repository using the following command:
```cmd
-git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
```
+>[!NOTE]
+>The samples used in this tutorial are in the **v2** branch of the azure-iot-sdk-python repository. V3 of the Python SDK is available to use in beta. For information about updating V2 code samples to use a V3 release of the Python SDK, see [Azure IoT Device SDK for Python migration guide](https://github.com/Azure/azure-iot-sdk-python/blob/main/migration_guide_provisioning.md).
+ ::: zone-end ::: zone pivot="programming-language-java"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
In the rest of this section, you'll use your Windows command prompt.
:::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
-3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\Getting Started\X509Sample* directory off the directory where you cloned the samples on your computer.
+3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer.
4. Enter the following command to build and run the X.509 device provisioning sample (replace `<id-scope>` with the ID Scope that you copied in step 2, and replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands).
In the following steps, you'll use both your Windows command prompt and your Git
1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK ```cmd
- cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
+ cd .\azure-iot-sdk-java\provisioning\provisioning-device-client-samples\provisioning-X509-sample
``` 1. Enter the provisioning service and X.509 identity information in the sample code. This is used during provisioning, for attestation of the simulated device, prior to device registration.
- 1. Open the file `.\src\main\java\samples\com/microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
+ 1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
1. Update the following values. For `idScope`, use the **ID Scope** that you copied previously. For global endpoint, use the **Global device endpoint**. This endpoint is the same for every DPS instance, `global.azure-devices-provisioning.net`.
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
IoT Hub and the Device Provisioning Service (DPS) provide a set of service APIs
- Managing enrollment groups (DPS) - Managing initial device twin state (DPS)
-For a list of the available service APIs, see [Service SDKs](iot-sdks.md#service-sdks)
+For a list of the available service APIs, see [Service SDKs](iot-sdks.md#iot-hub-service-sdks).
### REST APIs (IoT Central)
iot Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-sdks.md
Use the device SDKs to develop code to run on IoT devices that connect to IoT Hu
To learn more about how to use the device SDKs, see [What is Azure IoT device and application development?](../iot-develop/about-iot-develop.md).
-## Embedded device SDKs
+### Embedded device SDKs
[!INCLUDE [iot-hub-sdks-embedded](../../includes/iot-hub-sdks-embedded.md)]
Use the embedded device SDKs to develop code to run on IoT devices that connect
To learn more about when to use the embedded device SDKs, see [C SDK and Embedded C SDK usage scenarios](../iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md).
-## Service SDKs
+## IoT Hub service SDKs
[!INCLUDE [iot-hub-sdks-service](../../includes/iot-hub-sdks-service.md)] To learn more about using the service SDKs to interact with devices through an IoT hub, see [IoT Plug and Play service developer guide](../iot-develop/concepts-developer-guide-service.md).
-## Management SDKs
+## IoT Hub management SDKs
[!INCLUDE [iot-hub-sdks-management](../../includes/iot-hub-sdks-management.md)] Alternatives to the management SDKs include the [Azure CLI](../iot-hub/iot-hub-create-using-cli.md), [PowerShell](../iot-hub/iot-hub-create-using-powershell.md), and [REST API](../iot-hub/iot-hub-rm-rest.md).
+## DPS device SDKs
++
+### DPS embedded device SDKs
++
+## DPS service SDKs
++
+## DPS management SDKs
++
+## Azure Digital Twins control plane APIs
++
+## Azure Digital Twins data plane APIs
++ ## Next steps Suggested next steps include:
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
For file share with a public endpoint:
#!/bin/bash # Assign variable values for your storage account and file share
-storage_account_name=""
-storage_account_key=""
-fileshare_name=""
+STORAGE_ACCOUNT_NAME=""
+STORAGE_ACCOUNT_KEY=""
+FILESHARE_NAME=""
# Do not use 'mnt' for mount directory. # Using 'mnt' will cause issues on student VMs.
-mount_directory="prm-mnt"
+MOUNT_DIRECTORY="prm-mnt"
-sudo mkdir /$mount_directory/$fileshare_name
+sudo mkdir /$MOUNT_DIRECTORY/$FILESHARE_NAME
if [ ! -d "/etc/smbcredentials" ]; then sudo mkdir /etc/smbcredentials fi
-if [ ! -f "/etc/smbcredentials/$storage_account_name.cred" ]; then
- sudo bash -c "echo ""username=$storage_account_name"" >> /etc/smbcredentials/$storage_account_name.cred"
- sudo bash -c "echo ""password=$storage_account_key"" >> /etc/smbcredentials/$storage_account_name.cred"
+if [ ! -f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then
+ sudo bash -c "echo ""username=$STORAGE_ACCOUNT_NAME"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
+ sudo bash -c "echo ""password=$STORAGE_ACCOUNT_KEY"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
fi
-sudo chmod 600 /etc/smbcredentials/$storage_account_name.cred
+sudo chmod 600 /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred
-sudo bash -c "echo ""//$storage_account_name.file.core.windows.net/$fileshare_name /$mount_directory/$fileshare_name cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$storage_account_name.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab"
-sudo mount -t cifs //$storage_account_name.file.core.windows.net/$fileshare_name /$mount_directory/$fileshare_name -o vers=3.0,credentials=/etc/smbcredentials/$storage_account_name.cred,dir_mode=0777,file_mode=0777,serverino
+sudo bash -c "echo ""//$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab"
+sudo mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino
``` For file share with a private endpoint:
For file share with a private endpoint:
#!/bin/bash # Assign variable values for your storage account and file share
-storage_account_name=""
-storage_account_ip=""
-storage_account_key=""
-fileshare_name=""
+STORAGE_ACCOUNT_NAME=""
+STORAGE_ACCOUNT_IP=""
+STORAGE_ACCOUNT_KEY=""
+FILESHARE_NAME=""
# Do not use 'mnt' for mount directory. # Using 'mnt' will cause issues on student VMs.
-mount_directory="prm-mnt"
+MOUNT_DIRECTORY="prm-mnt"
-sudo mkdir /$mount_directory/$fileshare_name
+sudo mkdir /$MOUNT_DIRECTORY/$FILESHARE_NAME
if [ ! -d "/etc/smbcredentials" ]; then sudo mkdir /etc/smbcredentials fi
-if [ ! -f "/etc/smbcredentials/$storage_account_name.cred" ]; then
- sudo bash -c "echo ""username=$storage_account_name"" >> /etc/smbcredentials/$storage_account_name.cred"
- sudo bash -c "echo ""password=$storage_account_key"" >> /etc/smbcredentials/$storage_account_name.cred"
+if [ ! -f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then
+ sudo bash -c "echo ""username=$STORAGE_ACCOUNT_NAME"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
+ sudo bash -c "echo ""password=$STORAGE_ACCOUNT_KEY"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred"
fi sudo chmod 600 /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred
-sudo bash -c "echo ""//$storage_account_ip/$fileshare_name /$mount_directory/$fileshare_name cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$storage_account_name.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab"
-sudo mount -t cifs //$storage_account_name.file.core.windows.net/$fileshare_name /$mount_directory/$fileshare_name -o vers=3.0,credentials=/etc/smbcredentials/$storage_account_name.cred,dir_mode=0777,file_mode=0777,serverino
+sudo bash -c "echo ""//$STORAGE_ACCOUNT_IP/$FILESHARE_NAME /$MOUNT_DIRECTORY/$fileshare_name cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab"
+sudo mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino
``` If the template VM that mounts the Azure Files share to the `/mnt` directory is already published, the student can either:
To use an Azure NetApp Files share in Azure Lab
exit 1 fi
- volume_name=$1
- capacity_pool_ipaddress=0.0.0.0 # IP address of capacity pool
+ VOLUME_NAME=$1
+ CAPACITY_POOL_IP_ADDR=0.0.0.0 # IP address of capacity pool
# Do not use 'mnt' for mount directory. # Using 'mnt' might cause issues on student VMs.
- mount_directory="prm-mnt"
+ MOUNT_DIRECTORY="prm-mnt"
- sudo mkdir -p /$mount_directory
- sudo mkdir /$mount_directory/$folder_name
+ sudo mkdir -p /$MOUNT_DIRECTORY
+    sudo mkdir /$MOUNT_DIRECTORY/$VOLUME_NAME
- sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp $capacity_pool_ipaddress:/$volume_name /$mount_directory/$volume_name
- sudo bash -c "echo ""$capacity_pool_ipaddress:/$volume_name /$mount_directory/$volume_name nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0"" >> /etc/fstab"
+ sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp $CAPACITY_POOL_IP_ADDR:/$VOLUME_NAME /$MOUNT_DIRECTORY/$VOLUME_NAME
+ sudo bash -c "echo ""$CAPACITY_POOL_IP_ADDR:/$VOLUME_NAME /$MOUNT_DIRECTORY/$VOLUME_NAME nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0"" >> /etc/fstab"
``` 6. If all students are sharing access to the same Azure NetApp Files volume, you can run the `mount_fileshare.sh` script on the template machine before publishing. If students each get their own volume, save the script to be run later by the student.
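As an illustration of the previous step, here's a sketch of how a student or educator might run the script and confirm the mount. It isn't from the source article and assumes the script was saved as `mount_fileshare.sh`, the capacity pool IP address was already set inside the script, and `vol-class101` is a placeholder volume name:

```bash
# Run the mount script, passing the Azure NetApp Files volume name as the
# first argument (the script reads it into VOLUME_NAME).
chmod +x mount_fileshare.sh
./mount_fileshare.sh vol-class101

# Confirm the NFS volume is mounted under the prm-mnt directory.
df -h | grep prm-mnt
```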
load-balancer Ipv6 Configure Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/ipv6-configure-template-json.md
description: This article shows how to deploy an IPv6 dual stack application in
documentationcenter: na -+ Previously updated : 03/31/2020 Last updated : 04/17/2023
logic-apps Create Maps Data Transformation Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-maps-data-transformation-visual-studio-code.md
+
+ Title: Create maps for data transformation
+description: Create maps to transform data between schemas in Azure Logic Apps using Visual Studio Code.
++
+ms.suite: integration
++ Last updated : 04/17/2023
+# As a developer, I want to transform data in Azure Logic Apps by creating a map between schemas with Visual Studio Code.
++
+# Create maps to transform data in Azure Logic Apps with Visual Studio Code (preview)
+
+> [!IMPORTANT]
+> This capability is in preview and is subject to the
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+To exchange messages that have different XML or JSON formats in an Azure Logic Apps workflow, you have to transform the data from one format to another, especially if you have gaps between the source and target schema structures. Data transformation helps you bridge those gaps. For this task, you need to create a map that defines the transformation between data elements in the source and target schemas.
+
+To visually create and edit a map, you can use Visual Studio Code with the Data Mapper extension within the context of a Standard logic app project. The Data Mapper tool provides a unified experience for XSLT mapping and transformation using drag and drop gestures, a prebuilt functions library for creating expressions, and a way to manually test the maps that you create and use in your workflows.
+
+After you create your map, you can directly call that map from a workflow in your logic app project or from a workflow in the Azure portal. For this task, add the **Data Mapper Operations** action named **Transform using Data Mapper XSLT** to your workflow. To use this action in the Azure portal, add the map to either of the following resources:
+
+- An integration account for a Consumption or Standard logic app resource
+- The Standard logic app resource itself
+
+This how-to guide shows how to complete the following tasks:
+
+- Create a blank data map.
+- Specify the source and target schemas to use.
+- Navigate the map.
+- Select the target and source elements to map.
+- Create a direct mapping between elements.
+- Create a complex mapping between elements.
+- Create a loop between arrays.
+- Create an if condition between elements.
+- Save the map.
+- Test the map.
+- Call the map from a workflow in your logic app project.
+
+## Limitations
+
+- The Data Mapper extension currently works only in Visual Studio Code running on Windows operating systems.
+
+- The Data Mapper tool is currently available only in Visual Studio Code, not the Azure portal, and only from within Standard logic app projects, not Consumption logic app projects.
+
+- To call maps created with the Data Mapper tool, you can only use the **Data Mapper Operations** action named **Transform using Data Mapper XSLT**. [For maps created by any other tool, use the **XML Operations** action named **Transform XML**](logic-apps-enterprise-integration-transform.md).
+
+- The Data Mapper tool's **Code view** pane is currently read only.
+
+- The map layout and item position are currently automatic and read only.
+
+## Known issues
+
+The Data Mapper extension currently works only with schemas in flat folder-structured projects.
+
+## Prerequisites
+
+- [Same prerequisites for using Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites) to create Standard logic app workflows.
+
+- The latest **Azure Logic Apps - Data Mapper** extension. You can download and install this extension from inside Visual Studio Code through the Marketplace, or you can find this extension externally on the [Marketplace website](https://marketplace.visualstudio.com/vscode).
+
+- The source and target schema files that describe the data types to transform. These files can have either of the following formats:
+
+ - An XML schema definition file with the .xsd file extension
+ - A JavaScript Object Notation file with the .json file extension
+
+- A Standard logic app project that includes a stateful or stateless workflow with at least a trigger. If you don't have a project, follow these steps in Visual Studio Code:
+
+ 1. [Connect to your Azure account](create-single-tenant-workflows-visual-studio-code.md#connect-azure-account), if you haven't already.
+
+ 1. [Create a local folder, a local Standard logic app project, and a stateful or stateless workflow](create-single-tenant-workflows-visual-studio-code.md#create-project). During workflow creation, select **Open in current window**.
+
+    After you create your logic app project, in your project's root folder, open the **local.settings.json** file, and add the following values (one way to set them is sketched after this list):
+
+ - `"FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"`
+
+ - `"AzureWebJobsFeatureFlags": "EnableMultiLanguageWorker"`
+
+- Sample input data if you want to test the map and check that the transformation works as you expect.
+
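One way to set the two **local.settings.json** values called out in the prerequisites is sketched below. This isn't from the source article; it assumes the `jq` tool is installed and that, as in a typical Functions-based Standard logic app project, these settings live under the `Values` object of **local.settings.json**:

```bash
# Add the required settings to local.settings.json (assumes they belong
# under the "Values" object, as in a typical Functions-based project).
jq '.Values.FUNCTIONS_WORKER_RUNTIME = "dotnet-isolated"
    | .Values.AzureWebJobsFeatureFlags = "EnableMultiLanguageWorker"' \
  local.settings.json > local.settings.json.tmp \
  && mv local.settings.json.tmp local.settings.json
```

You can also just edit the file by hand; the `jq` command only saves a manual step.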
+## Create a data map
+
+1. On the Visual Studio Code left menu, select the **Azure** icon.
+
+1. In the **Azure** pane, under the **Data Mapper** section, select **Create new data map**.
+
+ ![Screenshot showing Visual Studio Code with Data Mapper extension installed, Azure window open, and selected button for Create new data map.](media/create-maps-data-transformation-visual-studio-code/create-new-data-map.png)
+
+1. Provide a name for your data map.
+
+1. Specify your source and target schemas by following these steps:
+
+ 1. On the map surface, select **Add a source schema**.
+
+ ![Screenshot showing Visual Studio Code with Data Mapper open, new data map, and selected option for Add a source schema.](media/create-maps-data-transformation-visual-studio-code/select-source-schema.png)
+
+ 1. On the **Configure** pane that opens, select **Add new** > **Browse**.
+
+ 1. Find and select your source schema file, and then select **Add**.
+
+ If your source schema doesn't appear in the **Open** window, from the file type list, change **XSD File (\*.xsd)** to **All Files (\*.\*)**.
+
+      The map surface now shows the data types from the source schema.
+
+ 1. On the map surface, select **Add a target schema**.
+
+ 1. On the **Configure** pane that opens, select **Add new** > **Browse**.
+
+ 1. Find and select your target schema file, and then select **Add**.
+
+ If your target schema doesn't appear in the **Open** window, from the file type list, change **XSD File (\*.xsd)** to **All Files (\*.\*)**.
+
+ The map surface now shows data types from the target schema.
+
+   Alternatively, you can add your source and target schema files locally to your logic app project in the **Artifacts** > **Schemas** folder, so that they appear in Visual Studio Code. In this case, you can specify your source and target schema in the Data Mapper tool on the **Configure** pane by selecting **Select existing**, rather than **Add new**.
+
+ When you're done, your map looks similar to the following example:
+
+ ![Screenshot showing the Data Mapper open and data map with sample source and target schemas.](media/create-maps-data-transformation-visual-studio-code/high-level-schema-example.png)
+
+The following table describes the possible data types that might appear in a schema:
+
+| Symbol | Type | More info |
+|--|--|--|
+| ![Icon representing an Array data type.](media/create-maps-data-transformation-visual-studio-code/array-icon.png) | Array | Contains items or repeating item nodes |
+| ![Icon representing a Binary data type.](media/create-maps-data-transformation-visual-studio-code/binary-icon.png) | Binary | |
+| ![Icon representing a Bool data type.](media/create-maps-data-transformation-visual-studio-code/bool-icon.png) | Bool | True or false only |
+| ![Icon representing a Complex data type.](media/create-maps-data-transformation-visual-studio-code/complex-icon.png) | Complex | An XML object with children properties, similar to the Object JSON type |
+| ![Icon representing a DateTime data type.](media/create-maps-data-transformation-visual-studio-code/datetime-icon.png) | DateTime | |
+| ![Icon representing a Decimal data type.](media/create-maps-data-transformation-visual-studio-code/decimal-icon.png) | Decimal | |
+| ![Icon representing an Integer data type.](media/create-maps-data-transformation-visual-studio-code/integer-icon.png) | Integer | Whole numbers only |
+| ![Icon representing the NULL symbol.](media/create-maps-data-transformation-visual-studio-code/null-icon.png) | Null | Not a data type, but appears when an error or an invalid type exists |
+| ![Icon representing a Number data type.](media/create-maps-data-transformation-visual-studio-code/number-icon.png) | Number | A JSON integer or decimal |
+| ![Icon representing an Object data type.](media/create-maps-data-transformation-visual-studio-code/object-icon.png) | Object | A JSON object with children properties, similar to the Complex XML type |
+| ![Icon representing a String data type.](media/create-maps-data-transformation-visual-studio-code/string-icon.png) | String | |
+
+<a name="navigate-map"></a>
+
+## Navigate the map
+
+To move around the map, you have the following options:
+
+- To pan around, drag your pointer around the map surface. Or, press and hold the mouse wheel while you move the mouse or trackball.
+
+- After you move one level down into the map, in the map's lower left corner, a navigation bar appears where you can select from the following options:
+
+ ![Screenshot showing map navigation bar.](media/create-maps-data-transformation-visual-studio-code/map-navigation-bar.png)
+
+ | Option | Alternative gesture |
+   |--|--|
+ | **Zoom out** | On the map surface, press SHIFT + double select. <br>-or- <br>Scroll down with the mouse wheel. |
+ | **Zoom in** | On the map surface, double select. <br>-or- <br>Scroll up with the mouse wheel. |
+ | **Zoom to fit** | None |
+ | **Show (Hide) mini-map** | None |
+
+- To move up one level on the map, on the breadcrumb path at the top of the map, select a previous level.
+
+<a name="select-elements"></a>
+
+## Select target and source elements to map
+
+1. On the map surface, starting from the right side, in the target schema area, select the target element that you want to map. If the element you want is a child of a parent element, find and expand the parent first.
+
+1. Now, on the left side, from the source schema area, select **Select element**.
+
+1. In the **Source schema** window that appears, select one or more source elements to show on the map.
+
+ - To include a parent and direct children, open the parent's shortcut menu, and select **Add children**.
+
+ - To include a parent and all the children for that parent, including any sub-parents, open the top-level parent's shortcut menu, and select **Add children (recursive)**.
+
+1. When you're done, you can close the source schema window. To add more source elements later, in the map's upper left corner, select **Show source schema** (![Icon for Show source schema.](media/create-maps-data-transformation-visual-studio-code/show-source-schema-icon.png)).
+
+<a name="create-direct-mapping"></a>
+
+## Create a direct mapping between elements
+
+For a straightforward transformation between elements with the same type in the source and target schemas, follow these steps:
+
+1. To review what happens in code while you create the mapping, in the map's upper right corner, select **Show code**.
+
+1. If you haven't already, on the map, [select the target elements and then the source elements that you want to map](#select-elements).
+
+1. Move your pointer over the source element so that both a circle and a plus sign (**+**) appear.
+
+ ![Screenshot showing the data map and starting a mapping between EmployeeID and ID in the source and target schema, respectively.](media/create-maps-data-transformation-visual-studio-code/direct-mapping-start-source-element.png)
+
+1. Drag a line to the target element so that the line connects to the circle that appears.
+
+ ![Screenshot showing the data map and ending a mapping between EmployeeID and ID in the source and target schema, respectively.](media/create-maps-data-transformation-visual-studio-code/direct-mapping-target-element.png)
+
+ You've now created a direct mapping between both elements.
+
+ ![Screenshot showing the data map and a finished mapping between EmployeeID and ID in the source and target schema, respectively.](media/create-maps-data-transformation-visual-studio-code/direct-mapping-complete.png)
+
+ The code view window reflects the mapping relationship that you created:
+
+ ![Screenshot showing code view with direct mapping between EmployeeID and ID in the source and target schema, respectively.](media/create-maps-data-transformation-visual-studio-code/direct-mapping-example-code-view.png)
+
+> [!NOTE]
+>
+> If you create a mapping between elements where their data types don't match, a warning appears on the target element, for example:
+>
+> ![Screenshot showing direct mapping between mismatching data types.](media/create-maps-data-transformation-visual-studio-code/data-type-mismatch.png)
+
+<a name="create-complex-mapping"></a>
+
+## Create a complex mapping between elements
+
+For a more complex transformation between elements in the source and target schemas, such as elements that you want to combine or that have different data types, you can use one or more functions to perform tasks for that transformation.
+
+The following table lists the available function groups and *example* functions that you can use:
+
+| Group | Example functions |
+|-|-|
+| Collection | Average, Count, Direct Access, Index, Join, Maximum, Minimum, Sum |
+| Conversion | To date, To integer, To number, To string |
+| Date and time | Add days |
+| Logical comparison | Equal, Exists, Greater, Greater or equal, If, If else, Is nil, Is null, Is number, Is string, Less, Less or equal, Logical AND, Logical NOT, Logical OR, Not equal |
+| Math | Absolute, Add, Arctangent, Ceiling, Cosine, Divide, Exponential, Exponential (base 10), Floor, Integer divide, Log, Log (base 10), Module, Multiply, Power, Round, Sine, Square root, Subtract, Tangent |
+| String | Code points to string, Concat, Contains, Ends with, Length, Lowercase, Name, Regular expression matches, Regular expression replace, Replace, Starts with, String to code-points, Substring, Substring after, Substring before, Trim, Trim left, Trim right, Uppercase |
+| Utility | Copy, Error, Format date-time, Format number |
+
+On the map, the function's label looks like the following example and is color-coded based on the function group. To the left of the function name, a symbol for the function appears. To the right of the function name, a symbol for the function output's data type appears.
+
+![Screenshot showing example function label.](media/create-maps-data-transformation-visual-studio-code/example-function.png)
+
+### Add a function without a mapping relationship
+
+The example in this section transforms the source element type from String type to DateTime type, which matches the target element type. The example uses the **To date** function, which takes a single input.
+
+1. To review what happens in code while you create the mapping, in the map's upper right corner, select **Show code**.
+
+1. If you haven't already, on the map, [select the target elements and then the source elements that you want to map](#select-elements).
+
+1. In the map's upper left corner, select **Show functions** (![Icon for Show functions.](media/create-maps-data-transformation-visual-studio-code/function-icon.png)).
+
+ ![Screenshot showing source and target schema elements plus the selected function, Show functions.](media/create-maps-data-transformation-visual-studio-code/no-mapping-show-functions.png)
+
+1. From the functions list that opens, find and select the function that you want to use, which adds the function to the map. If the function isn't visible on the map, try zooming out on the map surface.
+
+ This example selects the **To date** function.
+
+ ![Screenshot showing the selected function named To date.](media/create-maps-data-transformation-visual-studio-code/no-mapping-select-function.png)
+
+ > [!NOTE]
+ >
+ > If no mapping line exists or is selected when you add a function to the map, the function
+ > appears on the map, but disconnected from any elements or other functions, for example:
+ >
+ > ![Screenshot showing the disconnected function, To date.](media/create-maps-data-transformation-visual-studio-code/disconnected-function-to-date.png)
+ >
+
+1. To display the function's details and connection points, expand the function shape by selecting inside the shape.
+
+1. Connect the function to the source and target elements.
+
+ 1. Drag and draw a line between the source elements and the function's left edge. You can start either from the source elements or from the function.
+
+ ![Screenshot showing start mapping between source element and function.](media/create-maps-data-transformation-visual-studio-code/function-connect-to-date-start.png)
+
+ 1. Drag and draw a line between the function's right edge and the target element. You can start either from the target element or from the function.
+
+ ![Screenshot showing finish mapping between function and target element.](media/create-maps-data-transformation-visual-studio-code/function-connect-to-date-end.png)
+
+1. On the function's **Properties** tab, confirm or edit the input to use.
+
+ ![Screenshot showing Properties tab for the function, To date.](media/create-maps-data-transformation-visual-studio-code/function-connect-to-date-confirm-inputs.png)
+
+   For some data types, such as arrays, a scope for the transformation might also be available. This scope is usually the immediate element, such as an array, but in some scenarios, the scope might extend beyond the immediate element.
+
+ The code view window reflects the mapping relationship that you created:
+
+ ![Screenshot showing code view with direct mapping relationship between source and target elements.](media/create-maps-data-transformation-visual-studio-code/to-date-example-code-view.png)
+
+For example, to iterate through array items, see [Create a loop between arrays](#loop-through-array). To perform a task when an element's value meets a condition, see [Add a condition between elements](#add-condition).
+
+### Add a function to an existing mapping relationship
+
+When a mapping relationship already exists between source and target elements, you can add the function by following these steps:
+
+1. On the map, select the line for the mapping that you created.
+
+1. Move your pointer over the selected line, and select the plus sign (**+**) that appears.
+
+1. From the functions list that opens, find and select the function that you want to use.
+
+ The function appears on the map and is automatically connected between the source and target elements.
+
+### Add a function with multiple inputs
+
+The example in this section concatenates multiple source element types so that you can map the results to the target element type. The example uses the **Concat** function, which takes multiple inputs.
+
+1. To review what happens in code while you create the mapping, in the map's upper right corner, select **Show code**.
+
+1. If you haven't already, on the map, [select the target elements and then the source elements that you want to map](#select-elements).
+
+1. In the map's upper left corner, select **Show functions** (![Icon for Show functions.](media/create-maps-data-transformation-visual-studio-code/function-icon.png)).
+
+ ![Screenshot showing source and target schema elements and the selected function named Show functions.](media/create-maps-data-transformation-visual-studio-code/multi-inputs-show-functions.png)
+
+1. From the functions list that opens, find and select the function that you want to use, which adds the function to the map. If the function isn't visible on the map, try zooming out on the map surface.
+
+ This example selects the **Concat** function:
+
+ ![Screenshot showing the selected function named Concat.](media/create-maps-data-transformation-visual-studio-code/select-function.png)
+
+ > [!NOTE]
+ >
+ > If no mapping line exists or is selected when you add a function to the map, the function
+ > appears on the map, but disconnected from any elements or other functions. If the function
+ > requires configuration, a red dot appears in the function's upper right corner, for example:
+ >
+ > ![Screenshot showing the disconnected function, Concat.](media/create-maps-data-transformation-visual-studio-code/disconnected-function-concat.png)
+
+1. To display the function's details and connection points, expand the function shape by selecting inside the shape.
+
+1. In the function information pane, on the **Properties** tab, under **Inputs**, select the source data elements to use as the inputs.
+
+   This example selects the **FirstName** and **LastName** source elements as the function inputs, which automatically adds the respective connections on the map.
+
+ ![Screenshot showing multiple source data elements selected as function inputs.](media/create-maps-data-transformation-visual-studio-code/function-multiple-inputs.png)
+
+1. To complete the mapping, drag and draw a line between the function's right edge and the target element. You can start either from the target element or from the function.
+
+ ![Screenshot showing finished mapping from function with multiple inputs to target element.](media/create-maps-data-transformation-visual-studio-code/function-multiple-inputs-single-output.png)
+
+ The code view window reflects the mapping relationship that you created:
+
+ ![Screenshot showing code view with complex mapping relationship between source and target elements.](media/create-maps-data-transformation-visual-studio-code/concat-example-code-view.png)
+
+<a name="loop-through-array"></a>
+
+## Create a loop between arrays
+
+If your source and target schemas include arrays, you can create a loop mapping relationship that iterates through the items in those arrays. The example in this section loops through an Employee source array and a Person target array.
+
+1. To review what happens in code while you create the mapping, in the map's upper right corner, select **Show code**.
+
+1. On the map, in the target schema area, [select the target array element and target array item elements that you want to map](#select-elements).
+
+1. On the map, in the target schema area, expand the target array element and array items.
+
+1. In the source schema area, add the source array element and array item elements to the map.
+
+1. [Create a direct mapping between the source and target elements](#create-direct-mapping).
+
+ ![Screenshot showing the data map and drawing a connection between Name array items in the source and target arrays, Employee and Person, respectively.](media/create-maps-data-transformation-visual-studio-code/loop-example-map-array-items.png)
+
+ When you first create a mapping relationship between a matching pair of array items, a mapping relationship is automatically created at the parent array level.
+
+ ![Screenshot showing loop mapping between the Name array items plus the source and target arrays, Employee and Person, respectively.](media/create-maps-data-transformation-visual-studio-code/loop-example-automap-arrays.png)
+
+ The code view window reflects the mapping relationship that you created:
+
+ ![Screenshot showing code view with looping relationship between source and target arrays, Employee and Person, respectively.](media/create-maps-data-transformation-visual-studio-code/loop-example-code-view.png)
+
+1. Continue mapping the other array elements.
+
+ ![Screenshot showing continue looping mapping between other array items in source and target arrays.](media/create-maps-data-transformation-visual-studio-code/loop-example-continue-mapping.png)
+
+<a name="add-condition"></a>
+
+## Set up a condition and task to perform between elements
+
+To add a mapping relationship that evaluates a condition and performs a task when the condition is met, you can use multiple functions, such as the **If** function, a comparison function such as **Greater**, and the task to perform such as **Multiply**.
+
+The example in this section calculates a discount to apply when the purchase quantity exceeds 20 items by using the following functions:
+
+- **Greater**: Check whether item quantity is greater than 20.
+- **If**: Check whether the **Greater** function returns true.
+- **Multiply**: Calculate the discount by multiplying the item price by 10% and the item quantity.
+
+1. To review what happens in code while you create the mapping, in the map's upper right corner, select **Show code**.
+
+1. If you haven't already, on the map, [select the target elements and then the source elements that you want to map](#select-elements).
+
+ This example selects the following elements:
+
+ ![Screenshot showing the data map and elements to map.](media/create-maps-data-transformation-visual-studio-code/if-condition-example-elements.png)
+
+1. In the map's upper left corner, select **Show functions** (![Icon for Show functions.](media/create-maps-data-transformation-visual-studio-code/function-icon.png)).
+
+1. Add the following functions to the map: **Greater**, **If**, and **Multiply**.
+
+1. Expand all the function shapes to show the function details and connection points.
+
+1. Connect the source elements, functions, and target elements as follows:
+
+ * The source schema's **ItemPrice** element to the target schema's **ItemPrice** element
+ * The source schema's **ItemQuantity** element to the **Greater** function's **Value** field
+ * The **Greater** function's output to the **If** function's **Condition** field
+   * The source schema's **ItemPrice** element to the **Multiply** function's **Multiplicand 0** field
+ * The **Multiply** function's output to the **If** function's **Value** field
+ * The **If** function's output to the target schema's **ItemDiscount** element
+
+ > [!NOTE]
+ >
+ > In the **If** function, the word **ANY** appears to the right of the function name,
+ > indicating that you can assign the output value to anything.
+
+1. For the following functions, on the **Properties** tab, specify the following values:
+
+ | Function | Input parameter and value |
+   |-|-|
+ | **Greater** | - **Value** #1: The source element named **ItemQuantity** <br>- **Value** #2: **20** |
+ | **Multiply** | - **Multiplicand** #1: The source element named **ItemPrice** <br>- **Multiplicand** #2: **.10** |
+ | **If** | - **Condition**: **is-greater-than(ItemQuantity,20)** <br>- **Value**: **multiply(ItemPrice, .10)** |
+
+ The following map shows the finished example:
+
+ ![Screenshot showing finished condition example.](media/create-maps-data-transformation-visual-studio-code/if-condition-example-complete.png)
+
+ The code view window reflects the mapping relationship that you created:
+
+ ![Screenshot showing code view with conditional mapping between source and target elements using the functions, Greater, If, and Multiply.](media/create-maps-data-transformation-visual-studio-code/if-condition-example-code-view.png)
+
+## Save your map
+
+When you're done, on the map toolbar, select **Save**.
+
+Visual Studio Code saves your map as the following artifacts:
+
+- A **<*your-map-name*>.yml** file in the **Artifacts** > **MapDefinitions** project folder
+- A **<*your-map-name*>.xslt** file in the **Artifacts** > **Maps** project folder
+
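+Assuming a map named **TransformOrders** (a hypothetical name used only for illustration), the saved artifacts land in the project as follows:
+
+```text
+Artifacts/
+├── MapDefinitions/
+│   └── TransformOrders.yml
+└── Maps/
+    └── TransformOrders.xslt
+```
+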
+## Test your map
+
+To confirm that the transformation works as you expect, you'll need sample input data.
+
+1. On your map toolbar, select **Test**.
+
+1. On the **Test map** pane, in the **Input** window, paste your sample input data, and then select **Test**.
+
+ The test pane switches to the **Output** tab and shows the test's status code and response body.
+
+## Call your map from a workflow in your project
+
+1. On the Visual Studio Code left menu, select **Explorer** (files icon) to view your logic app project structure.
+
+1. Expand the folder that has your workflow name. From the **workflow.json** file's shortcut menu, select **Open Designer**.
+
+1. On the workflow designer, either after the step or between the steps where you want to perform the transformation, select the plus sign (**+**) > **Add an action**.
+
+1. On the **Add an action** pane, in the search box, enter **data mapper**. Select the **Data Mapper Operations** action named **Transform using Data Mapper XSLT**.
+
+1. In the action information box, specify the **Content** value, and leave **Map Source** set to **Logic App**. From the **Map Name** list, select the map file (.xslt) that you want to use.
+
+To use the same **Transform using Data Mapper XSLT** action in the Azure portal, add the map to either of the following resources:
+
+- An integration account for a Consumption or Standard logic app resource
+- The Standard logic app resource itself
+
+## Next steps
+
+- For data transformations using B2B operations in Azure Logic Apps, see [Add maps for transformations in workflows with Azure Logic Apps](logic-apps-enterprise-integration-maps.md)
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
In some situations, you may want to allow someone to connect to your secured wor
> [!WARNING] > When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources:
-> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
+> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account. To resolve this problem, add your client device's IP address to the [Azure Storage Account's firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range).
> * Using Jupyter, JupyterLab, RStudio, or Posit Workbench (formerly RStudio Workbench) on a compute instance, including running notebooks, __is not supported__. To enable public access, use the following steps:
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Use the following steps to deploy an MLflow model with a custom scoring script.
__conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/environment/conda.yaml":::
> [!NOTE] > Note how the package `azureml-inference-server-http` has been added to the original conda dependencies file.
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
You must also specify a base Docker image for this environment. Azure Machine Le
The following example is a YAML specification file for an environment defined from a conda specification. Here the relative path to the conda file from the Azure Machine Learning environment YAML file is specified via the `conda_file` property. You can alternatively define the conda specification inline using the `conda_file` property, rather than defining it in a separate file. To create the environment:
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
The next sections show you how to secure the network scenario described above. T
## Public workspace and secured resources
+> [!IMPORTANT]
+> While this is a supported configuration for Azure Machine Learning, Microsoft doesn't recommend it. The data in the Azure Storage Account behind the virtual network can be exposed on the public workspace. You should verify this configuration with your security team before using it in production.
+ If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps: 1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace.
If you want to access the workspace over the public internet while keeping all t
:::moniker range="azureml-api-2" * Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](how-to-manage-workspace.md).+
+ OR
+ * Create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. Then [enable public access to the workspace](#optional-enable-public-access). :::moniker-end :::moniker range="azureml-api-1"
Microsoft Sentinel is a security solution that can integrate with Azure Machine
### Public access
-Microsoft Sentinel can automatically create a workspace for you if you are OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
+Microsoft Sentinel can automatically create a workspace for you if you're OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
For information on this process, see [Create an Azure Machine Learning workspace from Microsoft Sentinel](../sentinel/notebooks-hunt.md?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
The following diagram shows how communications flow through private endpoints to
## Prerequisites
-* To use Azure machine learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
* You must install and configure the Azure CLI and `ml` extension or the Azure Machine Learning Python SDK v2. For more information, see the following articles:
The following diagram shows how communications flow through private endpoints to
* Secure outbound communication creates three private endpoints per deployment. One to the Azure Blob storage, one to the Azure Container Registry, and one to your workspace.
+ > [!IMPORTANT]
+ > * Each managed online endpoint deployment has its own independent Azure Machine Learning managed VNet. If the endpoint has multiple deployments, each deployment has its own managed VNet.
+ > * We do not support peering between a deployment's managed VNet and your client VNet. For secure access to resources needed by the deployment, we use private endpoints to communicate with the resources.
+ * When you use network isolation with a deployment, Azure Log Analytics is partially supported. All metrics and the `AMLOnlineEndpointTrafficLog` table are supported via Azure Log Analytics. `AMLOnlineEndpointConsoleLog` and `AMLOnlineEndpointEventLog` tables are currently not supported. As a workaround, you can use the [az ml online-deployment get_logs](/cli/azure/ml/online-deployment#az-ml-online-deployment-get-logs) CLI command, the [OnlineDeploymentOperations.get_logs()](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-get-logs) Python SDK, or the Deployment log tab in the Azure Machine Learning studio instead. For more information, see [Monitoring online endpoints](how-to-monitor-online-endpoints.md). * You can configure public access to a __managed online endpoint__ (_inbound_ and _outbound_). You can also configure [public access to an Azure Machine Learning workspace](how-to-configure-private-link.md#enable-public-access).
The following diagram shows the overall architecture of this example:
:::image type="content" source="./media/how-to-secure-online-endpoint/endpoint-network-isolation-diagram.png" alt-text="Diagram of the services created.":::
-To create the resources, use the following Azure CLI commands. To create a resource group. Replace `<my-resource-group>` and `<my-location>` with the desierd values.
+To create the resources, use the following Azure CLI commands. To create a resource group. Replace `<my-resource-group>` and `<my-location>` with the desired values.
```azurecli # create resource group
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
### Azure Container Instances
-When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet is not supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet isn't supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
### Azure Container Registry
Azure Container Registry can be configured to use a private endpoint. Use the fo
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
- If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace show` command to show the workspace information. The v1 extension does not return this information.
+ If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace show` command to show the workspace information. The v1 extension doesn't return this information.
```azurecli-interactive az ml workspace show -n yourworkspacename -g resourcegroupname --query 'container_registry'
To enable network isolation for Azure Monitor and the Application Insights insta
[!INCLUDE [machine-learning-workspace-diagnostics](../../includes/machine-learning-workspace-diagnostics.md)]
+## Public access to workspace
+
+> [!IMPORTANT]
+> While this is a supported configuration for Azure Machine Learning, Microsoft doesn't recommend it. You should verify this configuration with your security team before using it in production.
+
+In some cases, you may need to allow access to the workspace from the public network (without connecting through the VNet using the methods detailed in the [Securely connect to your workspace](#securely-connect-to-your-workspace) section). Access over the public internet is secured using TLS.
+
+To enable public network access to the workspace, use the following steps:
+
+1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace after configuring the workspace's private endpoint.
+1. [Configure the Azure Storage firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet. You may need to change the allowed IP address if the clients don't have a static IP address; for example, if one of your data scientists is working from home and can't establish a VPN connection to the VNet.
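+
+As a rough sketch only, the following Azure CLI commands illustrate these two steps. The workspace, resource group, and storage account names and the client IP address are placeholders, and you should verify the exact parameter names against the versions of the `ml` extension and `az storage` commands that you have installed:
+
+```azurecli
+# Sketch: allow public network access to the workspace (assumes the ml v2 extension supports this flag)
+az ml workspace update --name myworkspace --resource-group myresourcegroup --public-network-access Enabled
+
+# Sketch: allow a client's public IP address through the storage account firewall
+az storage account network-rule add --account-name mystorageaccount --resource-group myresourcegroup --ip-address 203.0.113.10
+```
+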
+ ## Next steps This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
All Azure Machine Learning environments already have MLflow installed for you, s
1. Create a `conda.yml` file with the dependencies you need:
- :::code language="yaml" source="~/azureml-examples-main//sdk/python/using-mlflow/deploy/environment/conda.yml" highlight="7-8" range="1-12":::
+ :::code language="yaml" source="~/azureml-examples-main//sdk/python/using-mlflow/deploy/environment/conda.yaml" highlight="7-8" range="1-12":::
1. Reference the environment in the job you're using.
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: Docker image plus conda file ## Next steps
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
First, create a directory to store the file in.
Now, create the file in the dependencies directory.
-[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=conda.yml)]
+[!Notebook-python[] (~/azureml-examples-main/tutorials/e2e-ds-experience/e2e-ml-workflow.ipynb?name=conda.yaml)]
The specification contains some usual packages, that you'll use in your pipeline (numpy, pip), together with some Azure Machine Learning specific packages (azureml-defaults, azureml-mlflow).
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
After you create the appliance, check if the appliance can connect to Azure Migr
To set up the appliance by using an OVA template, you'll complete these steps, which are described in detail in this section:
+> [!NOTE]
+> OVA templates are not available for sovereign clouds.
+ 1. Provide an appliance name and generate a project key in the portal. 1. Download an OVA template file, and import it to vCenter Server. Verify that the OVA is secure. 1. Create the appliance from the OVA file. Verify that the appliance can connect to Azure Migrate.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Previously updated : 11/17/2022 Last updated : 03/28/2023 #Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dsv3|Standard_D8s_v3|8|32| |Dsv3|Standard_D16s_v3|16|64| |Dsv3|Standard_D32s_v3|32|128|
-|Eiv3*|Standard_E64i_v3|64|432|
+|Dsv4|Standard_D8s_v4|8|32|
+|Dsv4|Standard_D16s_v4|16|64|
+|Dsv4|Standard_D32s_v4|32|128|
+|Dsv5|Standard_D8s_v5|8|32|
+|Dsv5|Standard_D16s_v5|16|64|
+|Dsv5|Standard_D32s_v5|32|128|
+|Dasv4|Standard_D8as_v4|8|32|
+|Dasv4|Standard_D16as_v4|16|64|
+|Dasv4|Standard_D32as_v4|32|128|
+|Dasv5|Standard_D8as_v5|8|32|
+|Dasv5|Standard_D16as_v5|16|64|
+|Dasv5|Standard_D32as_v5|32|128|
+|Easv4|Standard_E4as_v4|4|32|
+|Easv4|Standard_E8as_v4|8|64|
+|Easv4|Standard_E16as_v4|16|128|
+|Easv4|Standard_E20as_v4|20|160|
+|Easv4|Standard_E32as_v4|32|256|
+|Easv4|Standard_E48as_v4|48|384|
+|Easv4|Standard_E64as_v4|64|512|
+|Easv4|Standard_E96as_v4|96|672|
+|Easv5|Standard_E8as_v5|8|64|
+|Easv5|Standard_E16as_v5|16|128|
+|Easv5|Standard_E20as_v5|20|160|
+|Easv5|Standard_E32as_v5|32|256|
+|Easv5|Standard_E48as_v5|48|384|
+|Easv5|Standard_E64as_v5|64|512|
+|Easv5|Standard_E96as_v5|96|672|
|Eisv3|Standard_E64is_v3|64|432| |Eis4|Standard_E80is_v4|80|504| |Eids4|Standard_E80ids_v4|80|504|
-|Eiv5*|Standard_E104i_v5|104|672|
|Eisv5|Standard_E104is_v5|104|672|
-|Eidv5*|Standard_E104id_v5|104|672|
|Eidsv5|Standard_E104ids_v5|104|672|
+|Esv4|Standard_E8s_v4|8|64|
+|Esv4|Standard_E16s_v4|16|128|
+|Esv4|Standard_E20s_v4|20|160|
+|Esv4|Standard_E32s_v4|32|256|
+|Esv4|Standard_E48s_v4|48|384|
+|Esv4|Standard_E64s_v4|64|504|
+|Esv5|Standard_E8s_v5|8|64|
+|Esv5|Standard_E16s_v5|16|128|
+|Esv5|Standard_E20s_v5|20|160|
+|Esv5|Standard_E32s_v5|32|256|
+|Esv5|Standard_E48s_v5|48|384|
+|Esv5|Standard_E64s_v5|64|512|
+|Esv5|Standard_E96s_v5|96|672|
|Fsv2|Standard_F72s_v2|72|144|
-|G*|Standard_G5|32|448|
-|G|Standard_GS5|32|448|
-|Mms|Standard_M128ms|128|3892|
+|Mms*|Standard_M128ms|128|3892|
+
+\*Standard_M128ms does not support encryption at host
-\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
### General purpose
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dasv4|Standard_D8as_v4|8|32| |Dasv4|Standard_D16as_v4|16|64| |Dasv4|Standard_D32as_v4|32|128|
+|Dasv4|Standard_D64as_v4|64|256|
+|Dasv4|Standard_D96as_v4|96|384|
+|Dasv5|Standard_D4as_v5|4|16|
+|Dasv5|Standard_D8as_v5|8|32|
+|Dasv5|Standard_D16as_v5|16|64|
+|Dasv5|Standard_D32as_v5|32|128|
+|Dasv5|Standard_D64as_v5|64|256|
+|Dasv5|Standard_D96as_v5|96|384|
+|Dsv2|Standard_D2s_v3|2|8|
|Dsv3|Standard_D4s_v3|4|16| |Dsv3|Standard_D8s_v3|8|32| |Dsv3|Standard_D16s_v3|16|64| |Dsv3|Standard_D32s_v3|32|128|
+|Dsv4|Standard_D4s_v4|4|16|
+|Dsv4|Standard_D8s_v4|8|32|
+|Dsv4|Standard_D16s_v4|16|64|
+|Dsv4|Standard_D32s_v4|32|128|
+|Dsv4|Standard_D64s_v4|64|256|
+|Dsv4|Standard_D96s_v4|96|384|
+|Dsv5|Standard_D4s_v5|4|16|
+|Dsv5|Standard_D8s_v5|8|32|
+|Dsv5|Standard_D16s_v5|16|64|
+|Dsv5|Standard_D32s_v5|32|128|
+|Dsv5|Standard_D64s_v5|64|256|
+|Dsv5|Standard_D96s_v5|96|384|
+ ### Memory optimized |Series|Size|vCPU|Memory: GiB| |-|-|-|-|
+|Easv4|Standard_E4as_v4|4|32|
+|Easv4|Standard_E8as_v4|8|64|
+|Easv4|Standard_E16as_v4|16|128|
+|Easv4|Standard_E20as_v4|20|160|
+|Easv4|Standard_E32as_v4|32|256|
+|Easv4|Standard_E48as_v4|48|384|
+|Easv4|Standard_E64as_v4|64|512|
+|Easv4|Standard_E96as_v4|96|672|
+|Easv5|Standard_E8as_v5|8|64|
+|Easv5|Standard_E16as_v5|16|128|
+|Easv5|Standard_E20as_v5|20|160|
+|Easv5|Standard_E32as_v5|32|256|
+|Easv5|Standard_E48as_v5|48|384|
+|Easv5|Standard_E64as_v5|64|512|
+|Easv5|Standard_E96as_v5|96|672|
|Esv3|Standard_E4s_v3|4|32| |Esv3|Standard_E8s_v3|8|64| |Esv3|Standard_E16s_v3|16|128| |Esv3|Standard_E32s_v3|32|256|
-|Eiv3*|Standard_E64i_v3|64|432|
+|Esv4|Standard_E2s_v4|2|16|
+|Esv4|Standard_E4s_v4|4|32|
+|Esv4|Standard_E8s_v4|8|64|
+|Esv4|Standard_E16s_v4|16|128|
+|Esv4|Standard_E20s_v4|20|160|
+|Esv4|Standard_E32s_v4|32|256|
+|Esv4|Standard_E48s_v4|48|384|
+|Esv4|Standard_E64s_v4|64|504|
+|Esv4|Standard_E96s_v4|96|672|
+|Esv5|Standard_E2s_v5|2|16|
+|Esv5|Standard_E4s_v5|4|32|
+|Esv5|Standard_E8s_v5|8|64|
+|Esv5|Standard_E16s_v5|16|128|
+|Esv5|Standard_E20s_v5|20|160|
+|Esv5|Standard_E32s_v5|32|256|
+|Esv5|Standard_E48s_v5|48|384|
+|Esv5|Standard_E64s_v5|64|512|
+|Esv5|Standard_E96s_v5|96|672|
+|Edsv5|Standard_E96ds_v5|96|672|
|Eisv3|Standard_E64is_v3|64|432| |Eis4|Standard_E80is_v4|80|504| |Eids4|Standard_E80ids_v4|80|504|
-|Eiv5*|Standard_E104i_v5|104|672|
|Eisv5|Standard_E104is_v5|104|672|
-|Eidv5|Standard_E104id_v5|104|672|
|Eidsv5|Standard_E104ids_v5|104|672|
-\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
### Compute optimized
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Fsv2|Standard_F32s_v2|32|64| |Fsv2|Standard_F72s_v2|72|144| + ### Memory and compute optimized |Series|Size|vCPU|Memory: GiB| |-|-|-|-|
-|Mms|Standard_M128ms|128|3892|
+|Mms*|Standard_M128ms|128|3892|
+
+\*Standard_M128ms does not support encryption at host
+ ### Storage optimized |Series|Size|vCPU|Memory: GiB|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|L8s_v2|Standard_L8s_v2|8|64| |L16s_v2|Standard_L16s_v2|16|128| |L32s_v2|Standard_L32s_v2|32|256|
-|L48s_v2|Standard_L48s_v2|32|384|
-|L64s_v2|Standard_L48s_v2|64|512|
+|L48s_v2|Standard_L48s_v2|48|384|
+|L64s_v2|Standard_L64s_v2|64|512|
+|L8s_v3|Standard_L8s_v3|8|64|
+|L16s_v3|Standard_L16s_v3|16|128|
+|L32s_v3|Standard_L32s_v3|32|256|
+|L48s_v3|Standard_L48s_v3|48|384|
+|L64s_v3|Standard_L64s_v3|64|512|
+ ### GPU workload |Series|Size|vCPU|Memory: GiB|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|NC24rsV3|Standard_NC24rs_v3|24|448| |NC64asT4v3|Standard_NC64as_T4_v3|64|440|
+<!--
### Memory and storage optimized |Series|Size|vCPU|Memory: GiB|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|G*|Standard_G5|32|448| |G|Standard_GS5|32|448| + \*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
+-->
operator-nexus Howto Baremetal Run Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md
For a command with multiple arguments, provide as a list to `arguments` paramete
These commands can be long running so the recommendation is to set `--limit-time-seconds` to at least 600 seconds (10 minutes). Running multiple extracts might take longer that 10 minutes. This command runs synchronously. If you wish to skip waiting for the command to complete, specify the `--no-wait --debug` options. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
-/home/priya/azure-docs-pr-pshet/articles/import-export
+ When an optional argument `--output-directory` is provided, the output result is downloaded and extracted to the local directory. ### This example executes the `hostname` command and a `ping` command.
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \
--cluster-type "$CLUSTER_TYPE" --cluster-version "$CLUSTER_VERSION" \ --tags $TAG_KEY1="$TAG_VALUE1" $TAG_KEY2="$TAG_VALUE2"
-az networkcloud cluster wait --created --name "$CLUSTER_NAME" --resource-group
-"$CLUSTER_RG"
``` You can instead create a Cluster with ARM template/parameter files in
You can instead create a Cluster with ARM template/parameter files in
| Parameter name | Description | | - | | | CLUSTER_NAME | Resource Name of the Cluster |
-| LOCATION | The Azure Region where the Cluster is deployed |
+| LOCATION | The Azure Region where the Cluster is deployed |
| CL_NAME | The Cluster Manager Custom Location from Azure portal | | CLUSTER_RG | The cluster resource group name | | LAW_ID | Log Analytics Workspace ID for the Cluster |
az networkcloud cluster deploy \
--no-wait --debug ```
-This command runs synchronously. If you wish to skip waiting for the command to complete, specify the `--no-wait --debug` options. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md).
+> [!TIP]
+> To check the status of the `az networkcloud cluster deploy` command, run it with the `--debug` flag.
+> Doing so lets you obtain the `Azure-AsyncOperation` or `Location` header used to query the `operationStatuses` resource.
+> See the section [Cluster Deploy Failed](#cluster-deploy-failed) for more detailed steps.
+> Optionally, the command can run asynchronously using the `--no-wait` flag.
+
+### Cluster Deploy with hardware validation
+
+During a Cluster deploy process, one of the steps executed is hardware validation.
+The hardware validation procedure runs various tests and checks against the machines
+provided through the Cluster's rack definition. Based on the results of these checks
+and any user-skipped machines, a determination is made on whether sufficient nodes
+passed and/or are available to meet the thresholds necessary for deployment to continue.
+
+#### Cluster Deploy Action with skipping specific bare-metal-machine
+
+A parameter can be passed to the deploy command that represents the names of
+bare metal machines in the cluster that should be skipped during hardware validation.
+Skipped nodes aren't validated and aren't added to the node pool.
+Additionally, skipped nodes don't count against the total used by threshold calculations.
+
+```azurecli
+az networkcloud cluster deploy \
+ --name "$CLUSTER_NAME" \
+ --resource-group "$CLUSTER_RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION_ID" \
+ --skip-validations-for-machines "$COMPX_SVRY_SERVER_NAME"
+```
+
+#### Cluster Deploy failed
+
+To track the status of an asynchronous operation, run the command with the `--debug` flag enabled.
+When `--debug` is specified, you can monitor the progress of the request.
+The operation status URL can be found by examining the debug output looking for the
+`Azure-AsyncOperation` or `Location` header on the HTTP response to the creation request.
+The headers can provide the `OPERATION_ID` field used in the HTTP API call.
+
+```azurecli
+OPERATION_ID="12312312-1231-1231-1231-123123123123*99399E995..."
+az rest -m GET -u "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/providers/Microsoft.NetworkCloud/locations/${LOCATION}/operationStatuses/${OPERATION_ID}?api-version=2022-12-12-preview"
+```
+
+The output is similar to the JSON struct example. When the error code is
+`HardwareValidationThresholdFailed`, then the error message contains a list of bare
+metal machine(s) that failed the hardware validation (for example, `COMP0_SVR0_SERVER_NAME`,
+`COMP1_SVR1_SERVER_NAME`). These names can be used to parse the logs for further details.
+
+```json
+{
+ "endTime": "2023-03-24T14:56:59.0510455Z",
+ "error": {
+ "code": "HardwareValidationThresholdFailed",
+ "message": "HardwareValidationThresholdFailed error hardware validation threshold for cluster layout plan is not met for cluster $CLUSTER_NAME in namespace nc-system with listed failed devices $COMP0_SVR0_SERVER_NAME, $COMP1_SVR1_SERVER_NAME"
+ },
+ "id": "/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.NetworkCloud/locations/$LOCATION/operationStatuses/12312312-1231-1231-1231-123123123123*99399E995...",
+ "name": "12312312-1231-1231-1231-123123123123*99399E995...",
+ "resourceId": "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$CLUSTER_RESOURCE_GROUP/providers/Microsoft.NetworkCloud/clusters/$CLUSTER_NAME",
+ "startTime": "2023-03-24T14:56:26.6442125Z",
+ "status": "Failed"
+}
+```
+
+See the article [Tracking Asynchronous Operations Using Azure CLI](./howto-track-async-operations-cli.md) for another example.
## Cluster deployment validation
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
We currently support the following named modem configurations.
|--|--|--| | Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA AQUA's 15-Mbps direct broadcast service | | Aqua Direct Playback | aqua_direct_playback | This is NASA's AQUA's 150-Mbps direct broadcast service |
-| Aura Direct Broadcast | aura_direct_broadcast | This is NASA Aura's 15-Mbps direct broadcast service |
| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra's 8.3-Mbps direct broadcast service | | SNPP Direct Broadcast | snpp_direct_broadcast | This is NASA SNPP 15-Mbps direct broadcast service | | JPSS-1 Direct Broadcast | jpss-1_direct_broadcast | This is NASA JPSS-1 15-Mbps direct broadcast service |
orbital Organize Stac Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/organize-stac-data.md
The following sections describe the four stages in the architecture.
- STAC API is based on open source STAC FastAPI. - STAC API layer is implemented on Azure Kubernetes Service and the APIs are exposed using [API Management Service](https://azure.microsoft.com/products/api-management/).-- STAC APIs are used to discover the geospatial data in your Catalog. These APIs are based on STAC specifications and understand the STAC metadata defined and indexed in the STAC Catalog database (PostgresSQL server).
+- STAC APIs are used to discover the geospatial data in your Catalog. These APIs are based on STAC specifications and understand the STAC metadata defined and indexed in the STAC Catalog database (PostgreSQL server).
- Based on the search criteria, you can quickly locate your data from a large dataset. - Querying the STAC Collection, Items & Assets: - A query is submitted by a user to look up one or more STAC Collection, Items & Assets through the STAC FastAPI.
If you want to start building this, we have put together a [sample solution](htt
|STAC Item|The core atomic unit, representing a single spatiotemporal asset as a GeoJSON feature plus metadata like datetime and reference links.| |STAC Catalog|A simple, flexible JSON that provides a structure and organized the metadata like STAC items, collections and other catalogs.| |STAC Collection|Provides additional information such as the extents, license, keywords, providers, and so forth, that describe STAC Items within the Collection.|
-|STAC API|Provides a RESTful endpoint that enables search of STAC Items, specified in OpenAPI.|
+|STAC API|Provides a RESTful endpoint that enables search of STAC Items, specified in OpenAPI.|
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| | | | | | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: |
+| Brazil South | :heavy_check_mark: (v3 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
private-5g-core Azure Stack Edge Virtual Machine Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md
The following resources are available within ASE after deploying AP5GC. You can
|--|--| | vCPUs | 16 | | Memory | 56 GB |
-| Storage | ~3.75 GB |
+| Storage | ~3.75 TB |
private-5g-core Default Service Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/default-service-sim-policy.md
Title: Default service and SIM policy
+ Title: Default service and allow-all SIM policy
-description: Information on the default service and SIM policy that can be created as part of deploying a private mobile network.
+description: Information on the default service and allow-all SIM policy that can be created as part of deploying a private mobile network.
Last updated 03/18/2022
-# Default service and SIM policy
+# Default service and allow-all SIM policy
-You're given the option of creating a default service and SIM policy when you first create a private mobile network using the instructions in [Deploy a private mobile network through Azure Private 5G Core - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+You're given the option of creating a default service and allow-all SIM policy when you first create a private mobile network using the instructions in [Deploy a private mobile network through Azure Private 5G Core - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
- The default service allows all traffic in both directions. -- The default SIM policy is automatically assigned to all SIMs you provision as part of creating the private mobile network, and applies the default service to these SIMs.
+- The allow-all SIM policy is automatically assigned to all SIMs you provision as part of creating the private mobile network, and applies the default service to these SIMs.
They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
-The following sections provide the settings for the default service and SIM policy. You can use these to decide whether they're suitable for the initial deployment of your private mobile network. If you need more information on any of these settings, see [Collect the required information for a service](collect-required-information-for-service.md) and [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md).
+The following sections provide the settings for the default service and allow-all SIM policy. You can use these to decide whether they're suitable for the initial deployment of your private mobile network. If you need more information on any of these settings, see [Collect the required information for a service](collect-required-information-for-service.md) and [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md).
## Default service
The following tables provide the settings for the default service and its associ
## Default SIM policy
-The following tables provide the settings for the default SIM policy and its associated network scope.
+The following tables provide the settings for the allow-all SIM policy and its associated network scope.
### SIM policy settings |Setting |Value | |||
-|The SIM policy name. | *Default-policy* |
+|The SIM policy name. | *allow-all-policy* |
|The UE Aggregated Maximum Bit Rate (UE-AMBR) for uplink traffic (traveling away from SIMs) across all Non-GBR QoS Flows for a SIM to which this SIM policy is assigned. | *2 Gbps* | |The UE Aggregated Maximum Bit Rate (UE-AMBR) for downlink traffic (traveling towards SIMs) across all Non-GBR QoS Flows for a SIM to which this SIM policy is assigned. | *2 Gbps* | |The interval between UE registrations for SIMs to which this SIM policy is assigned, given in seconds. | *3240* |
The following tables provide the settings for the default SIM policy and its ass
## Next steps
-Once you've decided whether the default service and SIM policy are suitable, you can start deploying your private mobile network.
+Once you've decided whether the default service and allow-all SIM policy are suitable, you can start deploying your private mobile network.
- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md) - [Deploy a private mobile network through Azure Private 5G Core - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- A private mobile network. - A site.-- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- The default service and allow-all SIM policy (as described in [Default service and allow-all SIM policy](default-service-sim-policy.md)).
- Optionally, one or more SIMs, and a SIM group. [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
The following Azure resources are defined in the template.
- A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site. - An **Attached Data Network** resource representing the site's view of the data network. - A **Service** resource representing the default service.
- - A **SIM Policy** resource representing the default SIM policy.
+ - A **SIM Policy** resource representing the allow-all SIM policy.
- A **SIM Group** resource (if you provisioned any SIMs). :::image type="content" source="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing the resources for a full Azure Private 5G Core deployment." lightbox="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png":::
private-5g-core Deploy Private Mobile Network With Site Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-command-line.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- A private mobile network. - A site.-- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- The default service and allow-all SIM policy (as described in [Default service and allow-all SIM policy](default-service-sim-policy.md)).
- Optionally, one or more SIMs, and a SIM group. [!INCLUDE [include](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
private-5g-core Deploy Private Mobile Network With Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-powershell.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- A private mobile network. - A site.-- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- The default service and allow-all SIM policy (as described in [Default service and allow-all SIM policy](default-service-sim-policy.md)).
- Optionally, one or more SIMs, and a SIM group. [!INCLUDE [azure-ps-prerequisites-include.md](../../includes/azure-ps-prerequisites-include.md)]
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
Private mobile networks provide high performance, low latency, and secure connec
- Collect all of the information listed in [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). You may also need to take the following steps based on the decisions you made when collecting this information. - If you decided you want to provision SIMs using a JSON file, ensure you've prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims) or [Encrypted JSON file format for provisioning vendor provided SIMs](collect-required-information-for-private-mobile-network.md#encrypted-json-file-format-for-provisioning-vendor-provided-sims).
- - If you decided you want to use the default service and SIM policy, identify the name of the data network to which you want to assign the policy.
+ - If you decided you want to use the default service and allow-all SIM policy, identify the name of the data network to which you want to assign the policy.
## Deploy your private mobile network
-In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs, and / or create the default service and SIM policy.
+In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs, and / or create the default service and allow-all SIM policy.
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. In the **Search** bar, type *mobile networks* and then select the **Mobile Networks** service from the results that appear.
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-vendor-sims-notice.png" alt-text="Screenshot of the Azure portal showing a notice on the SIMs configuration tab stating: At the moment, you will not be able to upload the encrypted SIMs under this SIM group. However, you will be able upload the encrypted SIMs under the SIM group section, once the above named SIM group gets created."::: 1. If you're provisioning SIMs at this point, you'll need to take the following additional steps.
- 1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it to these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
+ 1. If you want to use the default service and allow-all SIM policy, set **Do you wish to create a basic, allow-all SIM policy and assign it to these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
1. Under **Enter SIM group information**, set **SIM group name** to your chosen name for the SIM group to which your SIMs will be added. 1. Under **Enter encryption details for SIM group**, set **Encryption type** to your chosen encryption type. Once the SIM group is created, you cannot change the encryption type. 1. If you selected **Customer-managed keys (CMK)**, set the **Key URI** and **User-assigned identity** to those the SIM group will use for encryption.
In this step, you'll create the Mobile Network resource representing your privat
1. Select **Go to resource group**, and then check that your new resource group contains the correct **Mobile Network** and **Slice** resources. It may also contain the following, depending on the choices you made during the procedure. - A **SIM group** resource (if you provisioned SIMs).
- - **Service**, **SIM Policy**, and **Data Network** resources (if you decided to use the default service and SIM policy).
+ - **Service**, **SIM Policy**, and **Data Network** resources (if you decided to use the default service and allow-all SIM policy).
:::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network, SIM, SIM group, Service, SIM policy, Data Network, and Slice resources.":::
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
For more information on older-generation virtual machines, see [Previous generat
The following tables provide a summary of the current offering of zonal, zone-redundant, and always-available Azure services. They list Azure offerings according to the regional availability of each.
+>[!IMPORTANT]
+>To learn more about availability zones support and available services in your region, contact your Microsoft sales or customer representative.
+ ##### Legend ![Legend containing icons and meaning of each with respect to service category and regional availability of each service in the table.](media/legend.png)
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
You are not limited to using services within your regional pairs. Although an Az
Regions are paired for cross-region replication based on proximity and other factors.
+>[!IMPORTANT]
+>To learn more about your region's architecture, please contact your Microsoft sales or customer representative.
+ **Azure regional pairs** | Geography | Regional pair A | Regional pair B |
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
In this how-to guide, you'll learn how to register an existing SAP system with *
- Allowlist the region-specific IP addresses for Azure Storage. - Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system. - Check that your Azure account has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** or equivalent role access on the subscription or resource groups where you have the SAP system resources.-- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** and **Tag Contributor** role access on the Compute resource group and **Reader** and **Tag Contributor** role access on the Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
- Make sure ASCS, Application Server and Database virtual machines of the SAP system are in **Running** state. - sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database. - File path on Linux VMs: /usr/sap/hostctrl/exe
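As a quick, illustrative check (based on the Linux path listed above), you can confirm the binaries are present before attempting registration:

```bash
# Verify the SAP host control binaries exist at the documented Linux path
ls -l /usr/sap/hostctrl/exe/sapcontrol /usr/sap/hostctrl/exe/saphostctrl
```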
The following SAP system configurations aren't supported in Azure Center for SAP
## Enable resource permissions
-When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Azure Center for SAP solutions service role** and **Tag Contributor** role access on the Compute (VMs, Disks, Load balancers) resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
+When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute (VMs, Disks, Load balancers) resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. User-assigned managed identity is required to enable SAP system monitoring and management capabilities.
Azure Center for SAP solutions uses this user-assigned managed identity to insta
To provide permissions to the SAP system resources to a user-assigned managed identity: 1. [Create a new user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.
-1. [Assign **Azure Center for SAP solutions service role** and **Tag Contributor**](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) role access to the user-assigned managed identity on the resource group(s) which have the Virtual Machines, Disks and Load Balancers of the SAP system and **Reader** role on the resource group(s) which have the Virtual Network components of the SAP system.
+1. [Assign the **Azure Center for SAP solutions service role**](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on the resource group(s) that contain the Virtual Machines, Disks, and Load Balancers of the SAP system, and the **Reader** role on the resource group(s) that contain the Virtual Network components of the SAP system. A sketch of these role assignments with the Azure CLI follows this list.
1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems.
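As an illustrative sketch of these assignments with the Azure CLI (the identity, subscription, and resource group names are placeholders; the portal works equally well):

```bash
# Placeholder names; adjust to your subscription and resource groups
az identity create --resource-group contoso-sap-rg --name sap-acss-identity

PRINCIPAL_ID=$(az identity show --resource-group contoso-sap-rg --name sap-acss-identity \
  --query principalId --output tsv)

# Azure Center for SAP solutions service role on the Compute resource group(s)
az role assignment create --assignee-object-id "$PRINCIPAL_ID" --assignee-principal-type ServicePrincipal \
  --role "Azure Center for SAP solutions service role" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<compute-resource-group>"

# Reader role on the Virtual Network resource group(s)
az role assignment create --assignee-object-id "$PRINCIPAL_ID" --assignee-principal-type ServicePrincipal \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<network-resource-group>"
```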
-> [!NOTE]
-> User-assigned managed identity requires **Tag Contributor** role on VMs, Disks and Load Balancers of the SAP system to enable [Cost Analysis](view-cost-analysis.md) at SAP SID level.
- ## Register SAP system To register an existing SAP system in Azure Center for SAP solutions:
To register an existing SAP system in Azure Center for SAP solutions:
1. For **SAP product**, select the SAP system product from the drop-down menu. 1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments. 1. For **Managed identity source**, select **Use existing user-assigned managed identity** option.
- 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Azure Center for SAP solutions service role**, **Reader** and **Tag Contributor** role access to the [respective resources of this SAP system.](#enable-resource-permissions)
+ 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Azure Center for SAP solutions service role** and **Reader** role access to the [respective resources of this SAP system.](#enable-resource-permissions)
1. Select **Review + register** to discover the SAP system and begin the registration process. :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
sap Provider Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
+In this how-to guide, you learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
wget https://github.com/prometheus/node_exporter/releases/download/v*/node_expor
tar xvfz node_exporter-*.*-amd64.tar.gz if [[ "$(grep '^ID=' /etc/*-release)" == *"rhel"* ]]; then echo "Open firewall port 9100 on the Linux host"
- sudo apt install firewalld -y
+ yum install firewalld -y
systemctl start firewalld firewall-cmd --zone=public --permanent --add-port 9100/tcp else
cd node_exporter-*.*-amd64
nohup ./node_exporter --web.listen-address=":9100" & ```
+### Setting up cron job to start Node exporter on VM restart
+
+1. If the target virtual machine is restarted or stopped, node exporter also stops and must be started again manually to continue monitoring.
+1. Run the `sudo crontab -e` command to open the cron file.
+1. Add the command `@reboot cd /path/to/node/exporter && nohup ./node_exporter &` at the end of the cron file. This starts node exporter on VM reboot.
+
+```shell
+# if you do not have a crontab file already, create one by running the command: sudo crontab -e
+sudo crontab -l > crontab_new
+echo "@reboot cd /path/to/node/exporter && nohup ./node_exporter &" >> crontab_new
+sudo crontab crontab_new
+sudo rm crontab_new
+```
+ ## Prerequisites to enable secure communication To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](https://prometheus.io/docs/guides/tls-encryption/)
When the provider settings validation operation fails with the code 'Prometheu
1. Run the `nohup ./node_exporter &` command to enable node_exporter. 1. Adding nohup and & to the above command decouples node_exporter from the Linux machine's command line. If they're not included, node_exporter stops when the command line is closed.
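To confirm node_exporter is serving metrics after these steps, you can query the endpoint locally (port 9100, as used above); a non-empty response means the exporter is running:

```bash
# node_exporter listens on port 9100 by default
curl -s http://localhost:9100/metrics | head
```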
-### Setting up cron job to start Node exporter on VM restart
-
-1. If the target virtual machine is restarted/stopped, node exporter is also stopped, and needs to be manually started again to continue monitoring.
-1. Run `sudo crontab -e` command to open cron file.
-1. Add the command `@reboot cd /path/to/node/exporter && nohup ./node_exporter &` at the end of cron file. This will start node exporter on VM reboot.
-
-```shell
-sudo crontab -l > crontab_new
-echo "@reboot cd /path/to/node/exporter && nohup ./node_exporter &" >> crontab_new
-sudo crontab crontab_new
-sudo rm crontab_new
-```
- ## Next steps > [!div class="nextstepaction"]
sap Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-portal.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Azure **Search**, select **Azure Monitor for SAP solutions**.
+2. In Azure **Search**, select **Azure Monitor for SAP solutions**.
![Diagram that shows Azure Monitor for SAP solutions Quick Start.](./media/quickstart-portal/azure-monitor-quickstart-1-new.png)
+3. On the **Basics** tab, provide the required values.
+   1. **Workload Region** is the region where the monitoring resources are created. Make sure to select the same region as your virtual network.
+   2. **Service Region** is where the proxy resource is created, which manages the monitoring resources deployed in the workload region. The service region is automatically selected based on your Workload Region selection.
+   3. For the **Virtual Network** field, select a virtual network that has connectivity to your SAP systems.
+   4. For the **Subnet** field, select a subnet that has connectivity to your SAP systems. You can use an existing subnet or create a new one. Make sure that you select a subnet that is an **IPv4/25 block or larger**.
+   5. For **Log Analytics Workspace**, you can use an existing Log Analytics workspace or create a new one. If you create a new workspace, it will be created inside the managed resource group along with other monitoring resources.
+   6. When entering the **managed resource group** name, make sure to use a unique name. This name is used to create a resource group that contains all the monitoring resources. The managed resource group name can't be changed once the resource is created.
-1. On the **Basics** tab, provide the required values. If applicable, you can use an existing Log Analytics workspace.
-
+ <br/>
![Diagram that shows Azure Monitor for SAP solutions Quick Start 2.](./media/quickstart-portal/azure-monitor-quickstart-2-new.png)
+4. On the **Providers** tab, you can start creating providers along with the monitoring resource. You can also create providers later by navigating to the **Providers** tab in the Azure Monitor for SAP solutions resource.
+5. On the **Tags** tab, you can add tags to the monitoring resource. Make sure to add all the mandatory tags if you have a tag policy in place.
+6. On the **Review + create** tab, review the details and select **Create**.
+ ## Create Azure Monitor for SAP solutions (classic) monitoring resource
If you don't have an Azure subscription, create a [free](https://azure.microsoft
1. In Azure **Marketplace** or **Search**, select **Azure Monitor for SAP solutions (classic)**.
- ![Diagram shows Azure Monitor for SAP solutions classic quick start page.](./media/quickstart-portal/azure-monitor-quickstart-classic.png)
+ ![Diagram shows Azure Monitor for SAP solutions classic quick start page.](./media/quickstart-portal/azure-monitor-quickstart-classic.png)
1. On the **Basics** tab, provide the required values. If applicable, you can use an existing Log Analytics workspace. :::image type="content" source="./media/quickstart-portal/azure-monitor-quickstart-2.png" alt-text="Screenshot that shows configuration options on the Basics tab." lightbox="./media/quickstart-portal/azure-monitor-quickstart-2.png":::
- When you're selecting a virtual network, ensure that the systems you want to monitor are reachable from within that virtual network.
+ When you're selecting a virtual network, ensure that the systems you want to monitor are reachable from within that virtual network.
> [!IMPORTANT] > Selecting **Share** for **Share data with Microsoft support** enables our support teams to help you with troubleshooting. This feature is available only for Azure Monitor for SAP solutions (classic) - ## Next steps Learn more about Azure Monitor for SAP solutions.
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Choose a syslog daemon to see the appropriate description.
Contents of the `security-config-omsagent.conf` file: ```bash
- filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
+ filter f_oms_filter {match("CEF\|ASA" ) ;};destination oms_destination {tcp("127.0.0.1" port(25226));};
log {source(s_src);filter(f_oms_filter);destination(oms_destination);}; ```
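After editing `security-config-omsagent.conf`, restart the daemon so the updated filter and destination are reloaded (assuming a systemd-managed host):

```bash
# Apply the updated syslog-ng configuration
sudo systemctl restart syslog-ng
```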
sentinel Mitre Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mitre-coverage.md
This article describes how to use the **MITRE** page in Microsoft Sentinel to vi
:::image type="content" source="media/mitre-coverage/mitre-coverage.png" alt-text="Screenshot of the MITRE coverage page with both active and simulated indicators selected.":::
-Microsoft Sentinel is currently aligned to The MITRE ATT&CK framework, version 9.
+Microsoft Sentinel is currently aligned to The MITRE ATT&CK framework, version 11.
## View current MITRE coverage
Having a scheduled rule with MITRE techniques applied running regularly in your
For more information, see: - [MITRE | ATT&CK framework](https://attack.mitre.org/)-- [MITRE ATT&CK for Industrial Control Systems](https://www.mitre.org/news-insights/news-release/mitre-releases-framework-cyber-attacks-industrial-control-systems)
+- [MITRE ATT&CK for Industrial Control Systems](https://www.mitre.org/news-insights/news-release/mitre-releases-framework-cyber-attacks-industrial-control-systems)
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
For the time being, these templates have limited application as outlined below,
## Considerations The following limitations currently govern the use of NRT rules:
-1. No more than 20 rules can be defined per customer at this time.
+1. No more than 50 rules can be defined per customer at this time.
1. By design, NRT rules will only work properly on log sources with an **ingestion delay of less than 12 hours**.
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
The following table lists the available unifying parsers:
| Schema | Unifying parser | | | - |
-| Audit Event | imAuditEvent |
+| Audit Event | _Im_AuditEvent |
| Authentication | imAuthentication | | Dns | _Im_Dns | | File Event | imFileEvent |
sentinel Normalization Parsers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-overview.md
Each method has advantages over the other:
It is recommended to use built-in parsers for schemas for which built-in parsers are available.
-## Parser hierarchy
+## Parser hierarchy and naming
ASIM includes two levels of parsers: **unifying** parser and **source-specific** parsers. The user usually uses the **unifying** parser for the relevant schema, ensuring all data relevant to the schema is queried. The **unifying** parser in turn calls **source-specific** parsers to perform the actual parsing and normalization, which is specific for each source.
-The unifying parser name is `_Im_<schema>` for built-in parsers and `im<schema>` for workspace deployed parsers, where `<schema>` stands for the specific schema it serves. sSource-specific parsers can also be used independently. For example, in an Infoblox-specific workbook, use the `vimDnsInfobloxNIOS` source-specific parser. You can find a list of source-specific parsers in the [ASIM parsers list](normalization-parsers-list.md).
+The unifying parser name is `_Im_<schema>` for built-in parsers and `im<schema>` for workspace deployed parsers, where `<schema>` stands for the specific schema it serves. Source-specific parsers can also be used independently. Use `_Im_<schema>_<source>` for built-in parsers and `vim<schema><source>` for workspace deployed parsers. For example, in an Infoblox-specific workbook, use the `_Im_Dns_InfobloxNIOS` source-specific parser. You can find a list of source-specific parsers in the [ASIM parsers list](normalization-parsers-list.md).
+
+>[!TIP]
+> A corresponding set of parsers that use `_ASim_<schema>` and `ASim<Schema>` is also available. These parsers don't support filtering parameters and are provided to help mitigate the [Time picker set to a custom range](normalization-known-issues.md#time-picker-set-to-a-custom-range) issue. Use these parsers only interactively in the Logs screen, not elsewhere, for example in analytics rules or workbooks. These parsers may not be removed when the issue is resolved.
>[!TIP]
service-bus-messaging Service Bus Async Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-async-messaging.md
Service Bus contains a number of mitigations for these issues. The following sec
### Throttling With Service Bus, throttling enables cooperative message rate management. Each individual Service Bus node houses many entities. Each of those entities makes demands on the system in terms of CPU, memory, storage, and other facets. When any of these facets detects usage that exceeds defined thresholds, Service Bus can deny a given request. The caller receives a server busy exception and retries after 10 seconds.
-As a mitigation, the code must read the error and halt any retries of the message for at least 10 seconds. Since the error can happen across pieces of the customer application, it is expected that each piece independently executes the retry logic. The code can reduce the probability of being throttled by enabling partitioning on a namespace, queue or topic.
+As a mitigation, the code must read the error and halt any retries of the message for at least 10 seconds. Since the error can happen across pieces of the customer application, it is expected that each piece independently executes the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus). The code can reduce the probability of being throttled by enabling partitioning on a namespace, queue or topic.
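As a hedged illustration of that last point, partitioning is chosen when the entity is created; with the Azure CLI this might look like the following (names are placeholders):

```bash
# Create a partitioned queue; partitioning can't be changed after creation
az servicebus queue create \
  --resource-group contoso-rg \
  --namespace-name contoso-sb-namespace \
  --name orders \
  --enable-partitioning true
```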
For more information on how application code should handle throttling concerns, see the [documentation on the Throttling Pattern](/azure/architecture/patterns/throttling).
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
Note the following considerations to keep in mind with this release:
## Availability Zones
-The Service Bus Premium SKU supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
+The Service Bus Premium SKU supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK will automatically reconnect to Service Bus.
When you use availability zones, both metadata and data (messages) are replicated across data centers in the availability zone.
service-bus-messaging Service Bus Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions.md
The messaging APIs generate exceptions that can fall into the following categori
4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception), [System.TimeoutException](/dotnet/api/system.timeoutexception), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section: > [!IMPORTANT]
-> Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope.
+> - Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope.
+> - For retry guidance specific to Azure Service Bus, see [Retry guidance for Service Bus](/azure/architecture/best-practices/retry-service-specific#service-bus).
## Exception types
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
Here is a list of features not supported by Premium and their mitigation -
### Express entities
- Express entities that don't commit any message data to storage are not supported in Premium. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
+Express entities that don't commit any message data to storage are not supported in the **Premium** tier. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
- During migration, any of your express entities in your Standard namespace will be created on the Premium namespace as a non-express entity.
+During migration, any of your express entities in your Standard namespace will be created on the Premium namespace as a non-express entity.
- If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
+If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
### RBAC settings The role-based access control (RBAC) settings on the namespace aren't migrated to the premium namespace. You'll need to add them manually after the migration.
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md
Service Bus **premium** tier supports geo-disaster recovery, at the namespace le
### Availability zones
-The Service Bus **premium** tier supports [availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If applications see transient disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
+The Service Bus **premium** tier supports [availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK will automatically reconnect to Service Bus.
When you use availability zones, **both metadata and data (messages)** are replicated across data centers in the availability zone.
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
Service Bus offers various pricing tiers. It's recommended to pick the appropria
> [!NOTE] > If the right tier is not picked, there is a risk of overwhelming the Service Bus namespace which may lead to [throttling](service-bus-throttling.md). >
-> Throttling does not lead to loss of data. Applications leveraging the Service Bus SDK can utilize the default retry policy to ensure that the data is eventually accepted by Service Bus.
+> Throttling does not lead to loss of data. Applications leveraging the Service Bus SDK can utilize the default [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) to ensure that the data is eventually accepted by Service Bus.
> ### Calculating throughput for Premium
service-bus-messaging Service Bus Resource Manager Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-exceptions.md
Here are the various exceptions/errors that are surfaced through the Azure Resou
| Error code | Error sub code | Error message | Description | Recommendation | | - | - | - | -- | -- |
-| Bad Request | 40000 | Sub code=40000. The property *'property name'* can't be set when creating a Queue because the namespace *'namespace name'* is using the 'Basic' Tier. This operation is only supported in 'Standard' or 'Premium' tier. | On Azure Service Bus Basic Tier, the below properties can't be set or updated - <ul> <li> RequiresDuplicateDetection </li> <li> AutoDeleteOnIdle </li> <li>RequiresSession</li> <li>DefaultMessageTimeToLive </li> <li> DuplicateDetectionHistoryTimeWindow </li> <li> EnableExpress </li> <li> ForwardTo </li> <li> Topics </li> </ul> | Consider upgrading from Basic to Standard or Premium tier to use this functionality. |
+| Bad Request | 40000 | Sub code=40000. The property *'property name'* can't be set when creating a Queue because the namespace *'namespace name'* is using the 'Basic' Tier. This operation is only supported in 'Standard' or 'Premium' tier. | On Azure Service Bus Basic Tier, the below properties can't be set or updated - <ul> <li> RequiresDuplicateDetection </li> <li> AutoDeleteOnIdle </li> <li>RequiresSession</li> <li>DefaultMessageTimeToLive </li> <li> DuplicateDetectionHistoryTimeWindow </li> <li> EnableExpress (not supported in Premium too)</li> <li> ForwardTo </li> <li> Topics </li> </ul> | Consider upgrading from Basic to Standard or Premium tier to use this functionality. |
| Bad Request | 40000 | Sub code=40000. The value for the 'requiresDuplicateDetection' property of an existing Queue(or Topic) can't be changed. | Duplicate detection must be enabled/disabled at the time of entity creation. The duplicate detection configuration parameter can't be changed after creation. | To enable duplicate detection on a previously created queue/topic, you can create a new queue/topic with duplicate detection and then forward from the original queue to the new queue/topic. | | Bad Request | 40000 | Sub code=40000. The specified value 16384 is invalid. The property 'MaxSizeInMegabytes', must be one of the following values: 1024;2048;3072;4096;5120. | The MaxSizeInMegabytes value is invalid. | Ensure that the MaxSizeInMegabytes is one of the following - 1024, 2048, 3072, 4096, 5120. | | Bad Request | 40000 | Sub code=40000. Partitioning can't be changed for Queue/Topic. | Partitioning can't be changed for entity. | Create a new entity (queue or topic) and enable partitions. |
Here are the various exceptions/errors that are surfaced through the Azure Resou
| Bad Request | 40000 | Sub code=40000. 'URI_PATH' contains character(s) that isn't allowed by Service Bus. Entity segments can contain only letters, numbers, periods(.), hyphens(-), and underscores(_). | Entity segments can contain only letters, numbers, periods(.), hyphens(-), and underscores(_). Any other characters cause the request to fail. | Ensure that there are no invalid characters in the URI Path. | | Bad Request | 40000 | Sub code=40000. Bad request. To know more visit `https://aka.ms/sbResourceMgrExceptions`. TrackingId:00000000-0000-0000-0000-00000000000000_000, SystemTracker:contososbusnamesapce.servicebus.windows.net:myqueue, Timestamp:yyyy-mm-ddThh:mm:ss | This error occurs when you try to create a queue in a non-premium tier namespace with a value set to the property `maxMessageSizeInKilobytes`. This property can only be set for queues in the premium namespace. | | Bad Request | 40300 | Sub code=40300. The maximum number of resources of type `EnablePartioning == true` has been reached or exceeded. | There's a limit on number of partitioned entities per namespace. See [Quotas and limits](service-bus-quotas.md). | |
-| Bad Request | 40400 | Sub code=40400. The auto forwarding destination entity doesn't exist. | The destination for the autoforwarding destination entity doesn't exist. | The destination entity (queue or topic), must exist before the source is created. Retry after creating the destination entity. |
+| Bad Request | 40400 | Sub code=40400. The auto forwarding destination entity doesn't exist. | The destination for the autoforwarding destination entity doesn't exist. | The destination entity (queue or topic), must exist before the source is created. [Retry](/azure/architecture/best-practices/retry-service-specific#service-bus) after creating the destination entity. |
## Error code: 429
service-bus-messaging Service Bus Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-throttling.md
The request was terminated because the entity is being throttled. Error code: 50
### How can I avoid being throttled?
-With shared resources, it's important to enforce some sort of fair usage across various Service Bus namespaces that share those resources. Throttling ensures that any spike in a single workload doesn't cause other workloads on the same resources to be throttled. As mentioned later in the article, there's no risk in being throttled because the client SDKs (and other Azure PaaS offerings) have the default retry policy built into them. Any throttled requests will be retried with exponential backoff and will eventually go through when the credits are replenished.
+With shared resources, it's important to enforce some sort of fair usage across various Service Bus namespaces that share those resources. Throttling ensures that any spike in a single workload doesn't cause other workloads on the same resources to be throttled. As mentioned later in the article, there's no risk in being throttled because the client SDKs (and other Azure PaaS offerings) have the default [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) built into them. Any throttled requests will be retried with exponential backoff and will eventually go through when the credits are replenished.
Understandably, some applications may be sensitive to being throttled. In that case, it's recommended to [migrate your current Service Bus standard namespace to premium](service-bus-migrate-standard-premium.md). On migration, you can allocate dedicated resources to your Service Bus namespace and appropriately scale up the resources if there's a spike in your workload and reduce the likelihood of being throttled. Additionally, when your workload reduces to normal levels, you can scale down the resources allocated to your namespace.
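For example, scaling a premium namespace up during a spike and back down afterwards might look like the following Azure CLI sketch; the names are placeholders, and the exact parameters should be confirmed with `az servicebus namespace update --help`.

```bash
# Scale messaging units up during a spike, then back down when load normalizes
az servicebus namespace update --resource-group contoso-rg --name contoso-sb-premium --sku Premium --capacity 4
az servicebus namespace update --resource-group contoso-rg --name contoso-sb-premium --sku Premium --capacity 2
```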
Azure Service Bus is optimized for persistence, we ensure that all the data sent
Once the request is successfully acknowledged by Service Bus, it implies that Service Bus has successfully processed the request. If Service Bus returns a failure, then it implies that Service Bus hasn't been able to process the request and the client application must retry the request.
-However, when a request is throttled, the service is implying that it can't accept and process the request right now because of resource limitations. It **does not** imply any sort of data loss because Service Bus simply hasn't looked at the request. In this case, relying on the default retry policy of the Service Bus SDK ensures that the request is eventually processed.
+However, when a request is throttled, the service is implying that it can't accept and process the request right now because of resource limitations. It **does not** imply any sort of data loss because Service Bus simply hasn't looked at the request. In this case, relying on the default [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) of the Service Bus SDK ensures that the request is eventually processed.
## Next steps
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
The following steps may help you with troubleshooting connectivity/certificate/t
Backend service upgrades and restarts may cause these issues in your applications. ### Resolution
-If the application code uses SDK, the retry policy is already built in and active. The application will reconnect without significant impact to the application/workflow.
+If the application code uses SDK, the [retry policy](/azure/architecture/best-practices/retry-service-specific#service-bus) is already built in and active. The application will reconnect without significant impact to the application/workflow.
## Unauthorized access: Send claims are required
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
az spring app create \
--service <your-Azure-Spring-Apps-instance-name> \ --name "springapp" \ --assign-endpoint true \
+ --runtime-version Java_17 \
--system-assigned export SERVICE_IDENTITY=$(az spring app show \ --resource-group "<your-resource-group-name>" \
az spring app create \
--service <your-Azure-Spring-Apps-instance-name> \ --name "springapp" \ --user-assigned $USER_IDENTITY_RESOURCE_ID \
+ --runtime-version Java_17 \
--assign-endpoint true az spring app show \ --resource-group <your-resource-group-name> \
This app has access to get secrets from Azure Key Vault. Use the Azure Key Vault
1. Use the following command to generate a sample project from `start.spring.io` with Azure Key Vault Spring Starter. ```bash
- curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault -d baseDir=springapp -d bootVersion=2.7.2 -d javaVersion=1.8 | tar -xzvf -
+ curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault -d baseDir=springapp -d bootVersion=2.7.9 -d javaVersion=17 -d type=maven-project | tar -xzvf -
``` 1. Specify your Key Vault in your app.
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
return connectionString; }
- public void run(String... varl) throws Exception {
+ public void run(String... args) throws Exception {
System.out.println(String.format("\nConnection String stored in Azure Key Vault:\n%s\n",connectionString)); } }
storage Storage Files Identity Multiple Forests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-multiple-forests.md
description: Configure on-premises Active Directory Domain Services (AD DS) auth
Previously updated : 02/16/2023 Last updated : 04/17/2023
Once authentication passes, the trust is established, and you should be able to
Once the trust is established, follow these steps to create a storage account and SMB file share for each domain, enable AD DS authentication on the storage accounts, and create hybrid user accounts synced to Azure AD. 1. Log in to the Azure portal and create two storage accounts such as **onprem1sa** and **onprem2sa**. For optimal performance, we recommend that you deploy the storage accounts in the same region as the clients from which you plan to access the shares.
+
+ > [!NOTE]
+ > Creating a second storage account isn't necessary. These instructions are meant to show an example of how to access storage accounts that belong to different forests. If you only have one storage account, you can ignore the second storage account setup instructions.
+
1. [Create an SMB Azure file share](storage-files-identity-ad-ds-assign-permissions.md) on each storage account. 1. [Sync your on-premises AD to Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) using [Azure AD Connect sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application. 1. Domain-join an Azure VM in **Forest 1** to your on-premises AD DS. For information about how to domain-join, refer to [Join a Computer to a Domain](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain).
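A rough Azure CLI equivalent of steps 1 and 2 above, with placeholder resource group, region, and share names:

```bash
# Create one storage account per domain and an SMB Azure file share in each
az storage account create --resource-group contoso-rg --name onprem1sa --location eastus --sku Standard_LRS
az storage account create --resource-group contoso-rg --name onprem2sa --location eastus --sku Standard_LRS

az storage share-rm create --resource-group contoso-rg --storage-account onprem1sa --name share1
az storage share-rm create --resource-group contoso-rg --storage-account onprem2sa --name share2
```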
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-data.md
Azure currently supports two provisioning agents:
## FAQ ### Can I update custom data after the VM has been created?
-For single VMs, you can't update custom data in the VM model. But for Virtual Machine Scale Sets, you can update custom data via the [REST API](/rest/api/compute/virtualmachinescalesets/update), the [Azure CLI](/cli/azure/vmss#az-vmss-update), or [Azure PowerShell](/powershell/module/az.compute/update-azvmss). When you update custom data in the model for a Virtual Machine Scale Set:
+For single VMs, you can't update custom data in the VM model. But for Virtual Machine Scale Sets, you can update custom data. For more information, see [Modify a Scale Set](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-update-global-scale-set-properties). When you update custom data in the model for a Virtual Machine Scale Set:
* Existing instances in the scale set don't get the updated custom data until they're reimaged. * Existing instances in the scale set that is upgraded don't get the updated custom data.
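As an illustrative sketch, the scale set model can be patched with a generic `--set`; the property path and base64 encoding shown here are assumptions to verify against the linked article, and existing instances still need a reimage or upgrade to pick the value up.

```bash
# Patch the scale set model's custom data (the API expects a base64-encoded value)
az vmss update \
  --resource-group contoso-rg \
  --name contoso-vmss \
  --set virtualMachineProfile.osProfile.customData="$(base64 -w0 cloud-init.yaml)"
```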
virtual-machines Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-template.md
Title: 'Quickstart: Use a Resource Manager template to create an Ubuntu Linux VM'
-description: In this quickstart, you learn how to use a Resource Manager template to create a Linux virtual machine
+description: Learn how to use an Azure Resource Manager template to create and deploy an Ubuntu Linux virtual machine with this quickstart.
Previously updated : 06/04/2020 Last updated : 04/13/2023
-# Quickstart: Create an Ubuntu Linux virtual machine using an ARM template
+# Quickstart: Create an Ubuntu Linux virtual machine by using an ARM template
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
This quickstart shows you how to use an Azure Resource Manager template (ARM template) to deploy an Ubuntu Linux virtual machine (VM) in Azure. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-linux%2fazuredeploy.json)
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/vm-simple-linux/).
+For more information on this template, see [Deploy a simple Ubuntu Linux VM 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json"::: - Several resources are defined in the template: -- [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/Microsoft.Network/virtualNetworks/subnets): create a subnet.-- [**Microsoft.Storage/storageAccounts**](/azure/templates/Microsoft.Storage/storageAccounts): create a storage account.-- [**Microsoft.Network/networkInterfaces**](/azure/templates/Microsoft.Network/networkInterfaces): create a NIC.-- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/Microsoft.Network/networkSecurityGroups): create a network security group.-- [**Microsoft.Network/virtualNetworks**](/azure/templates/Microsoft.Network/virtualNetworks): create a virtual network.-- [**Microsoft.Network/publicIPAddresses**](/azure/templates/Microsoft.Network/publicIPAddresses): create a public IP address.-- [**Microsoft.Compute/virtualMachines**](/azure/templates/Microsoft.Compute/virtualMachines): create a virtual machine.
+- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/Microsoft.Network/virtualNetworks/subnets): create a subnet.
+- [Microsoft.Storage/storageAccounts](/azure/templates/Microsoft.Storage/storageAccounts): create a storage account.
+- [Microsoft.Network/networkInterfaces](/azure/templates/Microsoft.Network/networkInterfaces): create a NIC.
+- [Microsoft.Network/networkSecurityGroups](/azure/templates/Microsoft.Network/networkSecurityGroups): create a network security group.
+- [Microsoft.Network/virtualNetworks](/azure/templates/Microsoft.Network/virtualNetworks): create a virtual network.
+- [Microsoft.Network/publicIPAddresses](/azure/templates/Microsoft.Network/publicIPAddresses): create a public IP address.
+- [Microsoft.Compute/virtualMachines](/azure/templates/Microsoft.Compute/virtualMachines): create a virtual machine.
## Deploy the template
Several resources are defined in the template:
1. Select or enter the following values. Use the default values, when available. - **Subscription**: select an Azure subscription.
- - **Resource group**: select an existing resource group from the drop-down, or select **Create new**, enter a unique name for the resource group, and then click **OK**.
- - **Location**: select a location. For example, **Central US**.
+ - **Resource group**: select an existing resource group from the drop-down, or select **Create new**, enter a unique name for the resource group, and select **OK**.
+ - **Region**: select a region. For example, **Central US**.
- **Admin username**: provide a username, such as *azureuser*.
- - **Authentication type**: You can choose between using an SSH key or a password.
+ - **Authentication type**: You can choose between an SSH key or a password.
- **Admin Password Or Key** depending on what you choose for authentication type: - If you choose **password**, the password must be at least 12 characters long and meet the [defined complexity requirements](faq.yml#what-are-the-password-requirements-when-creating-a-vm-). - If you choose **sshPublicKey**, paste in the contents of your public key.
Several resources are defined in the template:
- **Network Security Group Name**: name for the NSG. 1. Select **Review + create**. After validation completes, select **Create** to create and deploy the VM. - The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure CLI, Azure PowerShell, and REST API. To learn other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-cli.md). ## Review deployed resources
-You can use the Azure portal to check on the VM and other resource that were created. After the deployment is finished, select **Go to resource group** to see the VM and other resources.
-
+You can use the Azure portal to check on the VM and other resources that were created. After the deployment is finished, select **Resource groups** to see the VM and other resources.
## Clean up resources When no longer needed, delete the resource group, which deletes the VM and all of the resources in the resource group. 1. Select the **Resource group**.
-1. On the page for the resource group, select **Delete**.
+1. On the page for the resource group, select **Delete resource group**.
1. When prompted, type the name of the resource group and then select **Delete**. - ## Next steps
-In this quickstart, you deployed a simple virtual machine using an ARM template. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
-
+In this quickstart, you deployed a virtual machine by using an ARM template. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
> [!div class="nextstepaction"]
-> [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md)
+> [Create and Manage Linux VMs with the Azure CLI](./tutorial-manage-vm.md)
virtual-machines Tutorial Create Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-create-vmss.md
Use the Azure portal to create a Flexible scale set.
:::image type="content" source="media/tutorial-create-vmss/flex-details.png" alt-text="Name and region."::: 1. Leave **Availability zone** as blank for this example. 1. For **Orchestration mode**, select **Flexible**.
-1. For **Image**, select *Ubuntu 18.04 LTS*.
+1. For **Image**, select `Ubuntu 18.04 LTS`.
1. For **Size**, leave the default value or select a size like *Standard_E2s_V3*. 1. In **Username** type *azureuser*. 1. For **SSH public key source**, leave the default of **Generate new key pair**, and then type *myKey* for the **Key pair name**.
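A command-line equivalent of the portal steps above might look like the following sketch; the image URN, VM size, and resource names are illustrative.

```bash
# Create a Flexible-orchestration scale set with an Ubuntu 18.04 LTS image
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --orchestration-mode Flexible \
  --image Canonical:UbuntuServer:18.04-LTS:latest \
  --vm-sku Standard_E2s_v3 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```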
Open port 80 on your scale set by adding an inbound rule to your network securit
1. On the page for your scale set, select **Networking** from the left menu. The **Networking** page will open. 1. Select **Add inbound port rule**. The **Add inbound security rule** page will open.
-1. Under **Service**, select *HTTP* and then select **Add** at the bottom of the page.
+1. Under **Service**, select **HTTP**, and then select **Add** at the bottom of the page.
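The same inbound HTTP rule can be added from the Azure CLI. This sketch assumes the network security group attached to the scale set is named `myScaleSetNSG`:

```bash
# Minimal sketch (assumed NSG name): allow inbound HTTP on port 80.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myScaleSetNSG \
  --name AllowHTTP \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80
```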
## Test your scale set
virtual-machines Tutorial Elasticsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-elasticsearch.md
This article walks you through how to deploy [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Logstash](https://www.elastic.co/products/logstash), and [Kibana](https://www.elastic.co/products/kibana), on an Ubuntu VM in Azure. To see the Elastic Stack in action, you can optionally connect to Kibana and work with some sample logging data.
+Additionally, you can follow the [Deploy Elastic on Azure Virtual Machines](https://learn.microsoft.com/training/modules/deploy-elastic-azure-virtual-machines/) training module for a more guided walkthrough.
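For orientation, the installation itself typically follows the standard Elastic apt repository pattern on Ubuntu. The repository version (7.x here) and package set are assumptions for illustration, not steps quoted from the article:

```bash
# Sketch only (assumed 7.x repository): add Elastic's apt repository and install the stack.
sudo apt-get update && sudo apt-get install -y apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update
sudo apt-get install -y elasticsearch logstash kibana
sudo systemctl enable --now elasticsearch kibana
```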
+ In this tutorial you learn how to: > [!div class="checklist"]
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
Sharing images to the community is a new capability in [Azure Compute Gallery](.
> [!IMPORTANT] > Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
->To publish a community gallery, you'll need to enable the preview feature using the azure CLI: `az feature register --name CommunityGallery --namespace Microsoft.Compute` or PowerShell: `Register-AzProviderFeature -FeatureName "CommunityGallery" -ProviderNamespace "Microsoft.Compute"`. For more information on enabling preview features and checking the status, see [Set up preview features in your Azure subscription](../azure-resource-manager/management/preview-features.md). Creating VMs from community gallery images is open to all Azure users.
+> To publish a community gallery, you'll need to enable the preview feature using the Azure CLI: `az feature register --name CommunityGalleries --namespace Microsoft.Compute` or PowerShell: `Register-AzProviderFeature -FeatureName "CommunityGalleries" -ProviderNamespace "Microsoft.Compute"`. For more information on enabling preview features and checking the status, see [Set up preview features in your Azure subscription](../azure-resource-manager/management/preview-features.md). Creating VMs from community gallery images is open to all Azure users.
> > During the preview, the gallery must be created as a community gallery (for the CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery. >
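Putting the quoted commands together, a minimal CLI sketch registers the preview feature and then creates a gallery with community permissions. The gallery name, resource group, publisher details, and prefix below are illustrative assumptions:

```bash
# Sketch (assumed names and publisher details): register the preview feature, then create
# a gallery that is shared with the community from the start.
az feature register --name CommunityGalleries --namespace Microsoft.Compute
az feature show --name CommunityGalleries --namespace Microsoft.Compute --query properties.state

az sig create \
  --resource-group myGalleryRG \
  --gallery-name myCommunityGallery \
  --permissions community \
  --publisher-uri "https://contoso.example" \
  --publisher-email "images@contoso.example" \
  --eula "https://contoso.example/eula" \
  --public-name-prefix contoso
```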
virtual-machines Ssh Keys Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ssh-keys-azure-cli.md
Title: Create SSH keys with the Azure CLI
-description: Learn how to generate and store SSH keys with the Azure CLI for connecting to Linux VMs.
+description: Learn how to generate and store SSH keys with the Azure CLI, before creating a VM, for connecting to Linux VMs.
Previously updated : 11/17/2021 Last updated : 04/13/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-You can create SSH keys before creating a VM, and store them in Azure. Each newly created SSH key is also stored locally.
+You can create SSH keys before creating a VM and store them in Azure. Each newly created SSH key is also stored locally.
If you have existing SSH keys, you can upload and store them in Azure for reuse.
-For a more detailed overview of SSH, see [Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure](./linux/create-ssh-keys-detailed.md).
+For more information, see [Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure](./linux/create-ssh-keys-detailed.md).
-For more detailed information about creating and using SSH keys with Linux VMs, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
+For more information on how to create and use SSH keys with Linux VMs, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
## Generate new keys
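As a minimal sketch (the key and resource group names are assumptions that match the examples used later in this article), a new key pair can be generated and stored in Azure in one step:

```bash
# Sketch (assumed names): generate a key pair in Azure; the private key is also saved locally.
az sshkey create --name "mySSHKey" --resource-group "myResourceGroup"
```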
For example, enter: `ssh -i /home/user/.ssh/mySSHKey azureuser@123.45.67.890`
## Upload an SSH key
-You can upload a public SSH key to store in Azure.
+You can upload a public SSH key to store in Azure.
Use the [az sshkey create](/cli/azure/sshkey#az-sshkey-create) command to upload an SSH public key by specifying its file:
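The command that follows in the source article isn't shown in this extract. As a hedged sketch, the key file can be supplied with `--public-key`, where the `@` prefix reads the value from a local file (the path is an assumption):

```bash
# Sketch (assumed key path): upload an existing public key to Azure.
az sshkey create --name "mySSHKey" --resource-group "myResourceGroup" --public-key "@/home/user/.ssh/id_rsa.pub"
```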
az sshkey show --name "mySSHKey" --resource-group "myResourceGroup"
## Next steps
-To learn more about using SSH keys with Azure VMs, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
+To learn more about how to use SSH keys with Azure VMs, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
vpn-gateway Nat Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-howto.md
Each part of this article helps you form a basic building block for configuring
### <a name="diagram"></a>Diagram 1 ### Prerequisites
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
Once a NAT rule is defined for a connection, the effective address space for the
The following diagram shows an example of Azure VPN NAT configurations: an Azure VNet and two on-premises networks, all with an address space of 10.0.1.0/24. To connect these two networks to the Azure VNet and VPN gateway, create the following rules:
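The concrete rule set from the diagram isn't included in this extract. Purely as an illustration of how such a NAT rule might be defined with the Azure CLI (the gateway, rule name, and address mappings below are hypothetical):

```bash
# Sketch only (hypothetical names and mappings): add one static egress NAT rule to a VPN gateway.
az network vnet-gateway nat-rule add \
  --resource-group myRG \
  --gateway-name myVpnGateway \
  --name EgressRule1 \
  --type Static \
  --mode EgressSnat \
  --internal-mappings 10.0.1.0/24 \
  --external-mappings 100.0.1.0/24
```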