Updates from: 11/25/2022 02:08:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Configure Conditional Access through the Azure portal or Microsoft Graph APIs to
### Enable template 1 with Conditional Access APIs (optional)
-Create a sign-in risk-based Conditional Access policy with MS Graph APIs. For more information, see [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).
+Create a sign-in risk-based Conditional Access policy with MS Graph APIs. For more information, see [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#microsoft-graph-apis).
The following template can be used to create a Conditional Access policy with display name "Template 1: Require MFA for medium+ sign-in risk" in report-only mode.
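A minimal sketch of what this template might contain, using the Microsoft Graph `conditionalAccessPolicy` schema (the exact template in the sample repository may differ):

```json
{
    "displayName": "Template 1: Require MFA for medium+ sign-in risk",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "signInRiskLevels": [ "high", "medium" ],
        "applications": { "includeApplications": [ "All" ] },
        "users": { "includeUsers": [ "All" ] }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": [ "mfa" ]
    }
}
```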
To configure your user-based Conditional Access:
### Enable template 2 with Conditional Access APIs (optional)
-To create a user risk-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).
+To create a user risk-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#microsoft-graph-apis).
The following template can be used to create a Conditional Access policy with display name "Template 2: Require secure password change for medium+ user risk" in report-only mode.
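A minimal sketch of such a template (the exact sample may differ); note that the Graph schema requires `passwordChange` to be paired with `mfa` using the `AND` operator:

```json
{
    "displayName": "Template 2: Require secure password change for medium+ user risk",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "userRiskLevels": [ "high", "medium" ],
        "applications": { "includeApplications": [ "All" ] },
        "users": { "includeUsers": [ "All" ] }
    },
    "grantControls": {
        "operator": "AND",
        "builtInControls": [ "mfa", "passwordChange" ]
    }
}
```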
To enable with a Conditional Access policy:
### Enable template 3 with Conditional Access APIs (optional)
-To create a location-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api). To set up Named Locations, refer to the documentations for [Named Locations](/graph/api/resources/namedlocation).
+To create a location-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#microsoft-graph-apis). To set up Named Locations, refer to the documentation for [Named Locations](/graph/api/resources/namedlocation).
The following template can be used to create a Conditional Access policy with display name "Template 3: Block unallowed locations" in report-only mode.
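A minimal sketch of such a template (the exact sample may differ), blocking all locations except the tenant's trusted named locations:

```json
{
    "displayName": "Template 3: Block unallowed locations",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": { "includeApplications": [ "All" ] },
        "users": { "includeUsers": [ "All" ] },
        "locations": {
            "includeLocations": [ "All" ],
            "excludeLocations": [ "AllTrusted" ]
        }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": [ "block" ]
    }
}
```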
active-directory-b2c Partner Akamai Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md
+
+ Title: Configure Azure Active Directory B2C with Akamai for secure hybrid access
+
+description: Learn how to integrate Azure AD B2C authentication with Akamai for secure hybrid access
+ Last updated : 11/23/2022
+zone_pivot_groups: b2c-policy-type
++
+# Configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access
+
+In this sample tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Akamai Enterprise Application Access](https://www.akamai.com/products/enterprise-application-access). Akamai Enterprise Application Access is a Zero Trust Network Access (ZTNA) solution that enables secure remote access to modern and legacy applications that reside in private datacenters. Akamai Enterprise Application Access federates with Identity Provider (IdP) Azure AD B2C to authenticate users and then uses its authorization policies to perform continuous evaluation of the identity, device, application, and request context before allowing access to private applications.
++
+This feature is available only for custom policies. For setup steps, select **Custom policy** in the preceding selector.
+++
+## Prerequisites
+
+To get started, you'll need:
+
+- An Akamai Enterprise Access contract. If you don't have one, get a [free trial](https://www.akamai.com/products/enterprise-application-access).
+
+- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
+
+- A virtual appliance deployed behind the firewall in your datacenter, or in a hybrid cloud environment, on which to deploy the Akamai Enterprise Application Access [connector](https://techdocs.akamai.com/eaa/docs/conn).
+
+- An application that uses headers for authentication. In this sample, we'll use [docker header-demo-app](https://hub.docker.com/r/mistermik/header-demo-app), an application that displays its request headers.
+
+- **OR** an OpenID Connect (OIDC) application. In this sample, we'll use an [ASP.NET MVC web app](https://learn.microsoft.com/azure/active-directory/develop/tutorial-v2-asp-webapp) that signs in users by using the Open Web Interface for .NET (OWIN) middleware and the Microsoft identity platform.
+
+## Scenario description
+
+In this scenario, you'll enable Azure AD B2C authentication for end users while they try to access private applications secured by Akamai Enterprise Application Access.
+
+The components involved in this integration are:
+
+- **Azure AD B2C**: The SAML identity provider that is responsible for authenticating end users.
+
+- **Akamai Enterprise Application Access**: The ZTNA cloud service that is responsible for securing access to private applications with continuous ZTNA policy enforcement.
+
+- **Akamai Enterprise Application Access Connector**: A virtual appliance deployed in the private datacenter. It enables secure connectivity to private apps without opening any datacenter inbound firewall ports.
+
+- **Application**: A service or application deployed in your private datacenter that end users need to access.
+
+The user authenticates to Azure AD B2C (the SAML IdP), which responds to Akamai Enterprise Application Access (the service provider) with a SAML assertion. Akamai Enterprise Application Access maps information from the SAML assertion and constructs OpenID claims or injects HTTP headers containing information about the user. Akamai Enterprise Application Access then passes this information to the application, which is accessible through the Akamai Enterprise Application Access connector. In our sample, the application displays the content of these headers. In the OIDC application use case, it displays the user's claims.
+
+The following diagram shows how Akamai Enterprise Application Access (EAA) integrates with Azure AD B2C.
+
+![Screenshot shows the integration architecture.](./media/partner-akamai-secure-hybrid-access/integration-architecture.png)
+
+1. An end user tries to access an application hosted in the private datacenter using the application's external URL that is registered in Akamai Enterprise Application Access.
+
+1. Akamai Enterprise Application Access redirects the unauthenticated end user to Azure AD B2C for authentication.
+
+1. After successful authentication, Azure AD B2C redirects the user back to Akamai Enterprise Application Access with a SAML assertion.
+
+1. Akamai Enterprise Application Access uses the identity information from the SAML assertion to identify the user and determine if the user is allowed to access the requested application.
+
+1. Akamai Enterprise Application Access constructs OIDC Claims or injects HTTP Headers, which are sent to the application.
+
+1. The application uses this information to identify the authenticated user and creates an application session for the end user.
+
+## Onboard with Akamai Enterprise Application Access
+
+To get started with Akamai Enterprise Application Access, refer to the [Akamai Enterprise Application Access getting started guide](https://techdocs.akamai.com/eaa/docs/welcome-guide).
+
+### Step 1 - Add Azure AD B2C as a SAML IdP in Akamai Enterprise Application Access
+
+Akamai Enterprise Application Access supports SAML federation with cloud IdPs like Azure AD B2C. Add Azure AD B2C as a [Third party SAML IdP](https://techdocs.akamai.com/eaa/docs/add-new-idp#add-a-new-identity-provider) in Akamai Enterprise Application Access.
+
+1. Sign in to [Enterprise Center](https://control.akamai.com/).
+
+2. In the Enterprise Center navigation menu, select **Application Access > Identity & Users > Identity Providers**.
+
+3. Select **Add Identity provider (+)**.
+
+4. Enter a name and description, and select **Third Party SAML** as the provider type.
+
+5. Select **Continue**. The Identity Provider configuration page appears.
+
+6. In **Settings** > **General**, enter a URL for the **Identity Server**. You can select **Use Akamai domain** or **Use your domain**. If you use your own domain, use a self-signed certificate or upload a custom certificate.
+
+7. In **Authentication**, enter the same URL that you defined in **General** in the previous step, and then select **Save**.
+
+ [ ![Screenshot shows the akamai settings.](./media/partner-akamai-secure-hybrid-access/akamai-settings.png)](./media/partner-akamai-secure-hybrid-access/akamai-settings.png#lightbox)
+
+### Step 2 - Register a SAML application in Azure AD B2C
+
+1. Get the custom policy starter packs from GitHub, then update the XML files in the LocalAccounts starter pack with your Azure AD B2C tenant name:
+
+ - [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
+
+ ```
+ git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
+ ```
+ - In all of the files in the **LocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `fabrikam`, all instances of `yourtenant.onmicrosoft.com` become `fabrikam.onmicrosoft.com`.
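+   You can also script the replacement. A hypothetical one-liner for Linux shells, with `fabrikam` standing in for your tenant name:
+
+   ```bash
+   # Replace the placeholder tenant name in every LocalAccounts file (GNU sed; on macOS use: sed -i '')
+   grep -rl 'yourtenant' LocalAccounts/ | xargs sed -i 's/yourtenant/fabrikam/g'
+   ```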
+
+2. Create a signing certificate for Azure AD B2C to sign the SAML response sent to Akamai Enterprise Application Access:
+
+ a. [**Obtain a certificate**](https://learn.microsoft.com/azure/active-directory-b2c/saml-service-provider?tabs=windows&pivots=b2c-custom-policy#obtain-a-certificate). If you don't already have a certificate, you can use a self-signed certificate.
+
+   b. [**Upload the certificate**](https://learn.microsoft.com/azure/active-directory-b2c/saml-service-provider?tabs=windows&pivots=b2c-custom-policy#upload-the-certificate) in your Azure AD B2C tenant. Take note of the key name; you'll need it in the `TechnicalProfile` in the next steps.
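+   If you need a self-signed certificate for testing, the linked article generates one with PowerShell. A sketch of that approach (the subject name is an assumption; match it to your app and tenant):
+
+   ```powershell
+   # Create a self-signed signing certificate in the current user's certificate store
+   New-SelfSignedCertificate `
+       -KeyExportPolicy Exportable `
+       -Subject "CN=yourappname.yourtenant.onmicrosoft.com" `
+       -KeyAlgorithm RSA `
+       -KeyLength 2048 `
+       -KeyUsage DigitalSignature `
+       -NotAfter (Get-Date).AddMonths(12) `
+       -CertStoreLocation "Cert:\CurrentUser\My"
+   ```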
+
+3. Enable your policy to connect with a SAML application.
+
+   a. Open `LocalAccounts\TrustFrameworkExtensions.xml` in the custom policy starter pack. Find the **ClaimsProviders** element (if it doesn't exist, add it under the root `TrustFrameworkPolicy` element), and then add the following XML snippet to implement your SAML response generator:
+
+ ```XML
+ <ClaimsProvider>
+ <DisplayName>Akamai</DisplayName>
+ <TechnicalProfiles>
+ <!-- SAML Token Issuer technical profile -->
+ <TechnicalProfile Id="AkamaiSaml2AssertionIssuer">
+ <DisplayName>Token Issuer</DisplayName>
+ <Protocol Name="SAML2" />
+ <OutputTokenFormat>SAML2</OutputTokenFormat>
+ <Metadata>
+ <Item Key="IssuerUri">https://<REPLACE>.login.go.akamai-access.com/saml/sp/response</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="SamlAssertionSigning" StorageReferenceId="B2C_1A_AkamaiSAMLSigningCert" />
+ <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_AkamaiSAMLSigningCert" />
+ </CryptographicKeys>
+ <InputClaims />
+ <OutputClaims />
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-issuerAkamai" />
+ </TechnicalProfile>
+ <!-- Session management technical profile for SAML-based tokens -->
+ <TechnicalProfile Id="SM-Saml-issuerAkamai">
+ <DisplayName>Session Management Provider</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.SamlSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="IncludeSessionIndex">false</Item>
+ <Item Key="RegisterServiceProviders">false</Item>
+ </Metadata>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+   b. Replace the `IssuerUri` value with the Akamai URL defined in Akamai Enterprise Application Access **Settings** > **General** in [**Step 1**](#step-1add-azure-ad-b2c-as-a-saml-idp-in-akamai-enterprise-application-access). For example:
+    - `<Item Key="IssuerUri">https://fabrikam.login.go.akamai-access.com/saml/sp/response</Item>`
+
+    - Replace **B2C_1A_AkamaiSAMLSigningCert** with the name of the policy key that you uploaded.
+
+### Step 3 - Create a sign-up or sign-in policy configured for SAML
+
+1. Create a copy of the `SignUpOrSignin.xml` file in your starter pack's working directory and save it with a new name. This article uses `SignUpOrSigninSAML.xml` as an example. This file is your policy file for the relying party. It's configured to issue a JWT response by default.
+
+1. Open the `SignUpOrSigninSAML.xml` file in your preferred editor.
+
+2. Update `tenant-name` with the name of your Azure AD B2C tenant, and change the `PolicyId` and `PublicPolicyUri` values of the policy to `B2C_1A_signup_signin_saml` and `http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml`, respectively.
+
+ ```xml
+ <TrustFrameworkPolicy
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xmlns:xsd="http://www.w3.org/2001/XMLSchema"
+ xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
+ PolicySchemaVersion="0.3.0.0"
+ TenantId="tenant-name.onmicrosoft.com"
+ PolicyId="B2C_1A_signup_signin_saml"
+ PublicPolicyUri="http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml">
+ ```
+3. At the end of the user journey, Azure AD B2C contains a `SendClaims` step. This step references the token issuer technical profile. To issue a SAML response rather than the default JWT response, modify the `SendClaims` step to reference the new SAML token issuer technical profile, `AkamaiSaml2AssertionIssuer`.
+
+   Add the following XML snippet just before the `<RelyingParty>` element. This XML overwrites orchestration step 4 in the `SignUpOrSignIn` user journey, assuming you're using the `LocalAccounts` custom policy starter pack.
+
+   If you started from a different folder in the starter pack, or you customized the user journey by adding or removing orchestration steps, make sure the number in the `Order` attribute corresponds to the number specified in the user journey for the token issuer step. For example, in the other starter pack folders, the corresponding step number is 7 for `SocialAndLocalAccounts`, 6 for `SocialAccounts`, and 9 for `SocialAndLocalAccountsWithMfa`.
+
+ ```xml
+ <UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="4" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="AkamaiSaml2AssertionIssuer"/>
+ </OrchestrationSteps>
+ </UserJourney>
+ </UserJourneys>
+ ```
+
+   The relying party element determines which protocol your application uses. The default is `OpenId`. The `Protocol` element must be changed to `SAML`. The output claims define the claims mapping to the SAML assertion.
+
+ Replace the entire `<TechnicalProfile>` element in the `<RelyingParty>` element with the following technical profile XML.
+
+ ```xml
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" DefaultValue="" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId"/>
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="objectId" ExcludeAsClaim="true"/>
+ </TechnicalProfile>
+ ```
+
+ Your final policy file for the relying party should look like the following XML code:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+ <TrustFrameworkPolicy
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xmlns:xsd="http://www.w3.org/2001/XMLSchema"
+ xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
+ PolicySchemaVersion="0.3.0.0"
+ TenantId="fabrikam.onmicrosoft.com"
+ PolicyId="B2C_1A_signup_signin_saml"
+ PublicPolicyUri="http://fabrikam.onmicrosoft.com/B2C_1A_signup_signin_saml">
+ <BasePolicy>
+ <TenantId>fabrikam.onmicrosoft.com</TenantId>
+ <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
+ </BasePolicy>
+
+ <UserJourneys>
+ <UserJourney Id="SignUpOrSignIn">
+ <OrchestrationSteps>
+ <OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="AkamaiSaml2AssertionIssuer"/>
+ </OrchestrationSteps>
+ </UserJourney>
+ </UserJourneys>
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" DefaultValue="" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId"/>
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="objectId" ExcludeAsClaim="true"/>
+ </TechnicalProfile>
+ </RelyingParty>
+ </TrustFrameworkPolicy>
+ ```
+ >[!NOTE]
+ >You can follow this same process to implement other types of flows, for example, sign-in, password reset, or profile editing flows.
+
+### Step 4 - Upload your policy
+
+Save your changes, and upload the `TrustFrameworkBase.xml`, the new `TrustFrameworkExtensions.xml`, and `SignUpOrSigninSAML.xml` policy files to the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+1. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+1. Under Policies, select **Identity Experience Framework**.
Select **Upload Custom Policy**, and then upload the policy files that you changed, in the following order:
+
+ - The base file, for example `TrustFrameworkBase.xml`
+ - The extension policy, for example `TrustFrameworkExtensions.xml`
+   - The relying party policy, for example `SignUpOrSigninSAML.xml`
+
+### Step 5 - Download the Azure AD B2C IdP SAML metadata
+
+After the policy files are uploaded, Azure AD B2C uses the configuration information to generate the identity provider's SAML metadata document that the application will use. The SAML metadata document contains the locations of services, such as sign-in methods, sign-out methods, and certificates.
+
+- The Azure AD B2C policy metadata is available at the following URL:
+`https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/samlp/metadata`
+
+- Replace `<tenant-name>` with the name of your Azure AD B2C tenant. Replace `<policy-name>` with the name (ID) of the policy. Here's an example:
+`https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/B2C_1A_signup_signin_saml/samlp/metadata`
+
+Download the SAML metadata and save it locally on your device. You'll need it in the next step to complete the configuration in Akamai Enterprise Application Access.
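+The downloaded file is a standard SAML 2.0 `EntityDescriptor` document. An abridged sketch of its shape (element values vary by tenant and policy):
+
+```xml
+<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="...">
+  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+    <!-- Signing certificate that Akamai uses to validate the SAML response -->
+    <KeyDescriptor use="signing">...</KeyDescriptor>
+    <!-- Sign-in and sign-out endpoints for the policy -->
+    <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="..."/>
+    <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="..."/>
+  </IDPSSODescriptor>
+</EntityDescriptor>
+```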
+
+### Step 6 - Register Akamai Enterprise Application Access application in Azure AD B2C
+
+For Azure AD B2C to trust Akamai Enterprise Application Access, create an Azure AD B2C application registration. The registration contains configuration information, such as the application's metadata endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+1. On the left menu, select **Azure AD B2C**. Or, select **All services** and then search for and select **Azure AD B2C**.
+
+1. Select **App registrations**, and then select **New registration**.
+
+1. Enter a **Name** for the application. For example, enter **Akamai B2C Enterprise Application Access**.
+
+1. Under **Supported account types**, select **Accounts in this organizational directory only (B2C only - Single tenant)**.
+
+1. Under **Redirect URI**, select **Web**, and then enter the Akamai URL defined in Akamai Enterprise Application Access **Settings** > **General** in [**Step 1**](#step-1add-azure-ad-b2c-as-a-saml-idp-in-akamai-enterprise-application-access). For example, `https://fabrikam.login.go.akamai-access.com/saml/sp/response`.
+
+1. Select **Register**.
+
+### Step 7 - Configure your Akamai Enterprise Application Access application in Azure AD B2C
+
+For SAML, you need to configure several properties in the application registration's manifest.
+
+1. In the [Azure portal](https://portal.azure.com), go to the application registration that you created in [**Step 6**](#step-6register-akamai-enterprise-application-access-application-in-azure-ad-b2c).
+
+2. Under **Manage**, select **Manifest** to open the manifest editor. Then modify the properties described in the following section.
+
+#### Add the identifier
+
+When the Akamai Enterprise Application Access SAML application makes a request to Azure AD B2C, the SAML authentication request includes an `Issuer` attribute. The value of this attribute is typically the same as the application's metadata `entityID` value. Azure AD B2C uses this value to look up the application registration in the directory and read the configuration. For this lookup to succeed, `identifierUris` in the application registration manifest must be populated with a value that matches the `Issuer` attribute.
+
+ [ ![Screenshot shows the b2c saml configuration.](./media/partner-akamai-secure-hybrid-access/akamai-b2c-saml-configuration.png)](./media/partner-akamai-secure-hybrid-access/akamai-b2c-saml-configuration.png#lightbox)
+
+In the registration manifest, find the `identifierUris` parameter and add the `IssuerUri` value defined in the Azure AD B2C `ClaimsProvider` in [**Step 2**](#step-2register-a-saml-application-in-azure-ad-b2c).
+
+Example:
+```json
+"identifierUris": [
+  "https://fabrikam.login.go.akamai-access.com/saml/sp/response"
+],
+```
+
+ This value must match the `EntityId` that's configured in the application's SAML AuthN requests and the `entityID` value in the application's metadata. You'll also need to find the `accessTokenAcceptedVersion` parameter and set its value to `2`.
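+ For example, in the same manifest:
+
+ ```json
+ "accessTokenAcceptedVersion": 2,
+ ```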
+
+>[!IMPORTANT]
+>If you don't update `accessTokenAcceptedVersion` to `2`, you'll receive an error message requiring a verified domain.
+
+### Step 8 - Configure authentication settings for the Azure AD B2C IdP in Akamai Enterprise Application Access
+
+Update your Akamai Enterprise Application Access Azure AD B2C IdP with authentication information like relying party URLs.
+
+1. Sign in to [Enterprise Center](https://control.akamai.com/).
+
+1. In the Enterprise Center navigation menu, select **Application Access > Identity & Users > Identity Providers**.
+
+1. Select the Identity provider name created in [**Step 1**](#step-1add-azure-ad-b2c-as-a-saml-idp-in-akamai-enterprise-application-access).
+
+1. Upload the Azure AD B2C SAML metadata file you downloaded in [**Step 5**](#step-5download-the-azure-ad-b2c-idp-saml-metadata).
+
+1. To upload the metadata.xml file, select **Choose file**.
+
+ [ ![Screenshot shows the metadata file.](./media/partner-akamai-secure-hybrid-access/akamai-metadata.png)](./media/partner-akamai-secure-hybrid-access/akamai-metadata.png#lightbox)
+
+6. Select **Save and Deploy**.
+
+### Step 9 - Deploy Akamai Enterprise Application Access Connectors in your private datacenter
+
+To enable access to a private application, deploy one or more [Akamai Enterprise Application Access connectors](https://techdocs.akamai.com/eaa/docs/conn) in the private datacenter where your application resides. Ensure the connectors can reach your private application and have outbound access to the Akamai Cloud.
+
+### Step 10 - Define an Access Application in Akamai Enterprise Application Access for the private application
+
+1. [Define and Deploy an Access Application](https://techdocs.akamai.com/eaa/docs/add-app-eaa) in Akamai Enterprise Application Access.
+
+1. When you define the Access Application:
+
+   - Associate it with the **Enterprise Application Access Azure AD B2C IdP** definition that you created in the previous steps.
+
+ - Configure Application Facing Authentication to enable SSO into the private application:
+ - **Option 1**: [Configure Custom HTTP Headers for an Access Application](https://techdocs.akamai.com/eaa/docs/custom-http-headers)
+ - **Option 2**: [Configure OpenID Connect for an Access Application](https://techdocs.akamai.com/eaa/docs/config-openid#configure-openid-connect-for-an-access-application)
+
+#### Option 1: HTTP Headers
+
+In this sample, we'll use [docker header-demo-app](https://hub.docker.com/r/mistermik/header-demo-app), an application that displays its request headers.
+After the application is deployed in a private environment and a connector can reach it, create a custom HTTP type application by following the Akamai documentation [Configure custom HTTP headers for an access application](https://techdocs.akamai.com/eaa/docs/custom-http-headers).
+
+1. In **Authentication**, select the Azure AD B2C SAML IdP created in the previous steps.
+
+ [ ![Screenshot shows the akamai authn application.](./media/partner-akamai-secure-hybrid-access/akamai-authn-app.png)](./media/partner-akamai-secure-hybrid-access/akamai-authn-app.png#lightbox)
+
+2. In the **Advanced** section of the application, map the HTTP headers to the SAML attributes issued by Azure AD B2C in the SAML response upon successful authentication.
+
+ Example:
+
+ | Header Name | Attribute |
+ |--|--|
+ | ps-sso-first | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name |
+ | ps-sso-last | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname |
+ | ps-sso-EmailAddress | emailaddress |
+ | ps-sso-uid | objectId |
+
+ ![Screenshot shows the akamai header app mapping.](./media/partner-akamai-secure-hybrid-access/akamai-header-app-mapping.png)
+
+   Test the application by selecting the Akamai URL for the custom HTTP type web application you created.
+
+ ![Screenshot shows the akamai header app results.](./media/partner-akamai-secure-hybrid-access/akamai-header-app-results.png)
+
+#### Option 2: OpenID Connect
+
+In this sample, we'll use an [ASP.NET MVC web app](https://learn.microsoft.com/azure/active-directory/develop/tutorial-v2-asp-webapp) that signs in users by using the Open Web Interface for .NET (OWIN) middleware and the Microsoft identity platform.
+
+1. Configure the OIDC to SAML bridging in the **Azure AD B2C SAML IdP** created in the previous steps.
+
+ [ ![Screenshot shows the akamai oidc app oidc settings.](./media/partner-akamai-secure-hybrid-access/akamai-oidc-idp-settings.png)](./media/partner-akamai-secure-hybrid-access/akamai-oidc-idp-settings.png#lightbox)
+
+2. Create a Custom HTTP type application by following [Configure OpenID Connect for an Access Application](https://techdocs.akamai.com/eaa/docs/config-openid#configure-openid-connect-for-an-access-application).
+
+3. In **Authentication**, select the Azure AD B2C SAML IdP created in the previous steps, as for the HTTP header application.
+
+ [ ![Screenshot shows the akamai authn app settings.](./media/partner-akamai-secure-hybrid-access/akamai-authn-app.png)](./media/partner-akamai-secure-hybrid-access/akamai-authn-app.png#lightbox)
+
+4. In **Advanced**, select **OpenID Connect 1.0** as the authentication mechanism, and then select **Save**.
+
+ [ ![Screenshot shows the akamai oidc app authentication settings.](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-authentication.png)](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-authentication.png#lightbox)
+
+5. A new **OpenID** tab appears. Copy the **Discovery URL**; you'll need it later when you configure the OWIN component for the test application.
+
+ [ ![Screenshot shows the akamai oidc app settings.](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-settings.png)](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-settings.png#lightbox)
+
+6. In the **Claims** section, define the claims that Akamai will issue for the OIDC application, mapping their values to the SAML attributes provided by Azure AD B2C in the SAML response upon successful authentication. These claims must map to what you defined in the previous step when [configuring the OIDC to SAML bridging in the Azure AD B2C SAML IdP](#option-2-openid-connect).
+
+ [ ![Screenshot shows the akamai oidc app claim settings.](./media/partner-akamai-secure-hybrid-access/akamai-oidc-claims-settings.png)](./media/partner-akamai-secure-hybrid-access/akamai-oidc-claims-settings.png#lightbox)
+
+7. Replace the startup class in the [ASP.NET MVC web app](https://learn.microsoft.com/azure/active-directory/develop/tutorial-v2-asp-webapp) with the following code.
+
+   These changes configure the authorization code grant flow: the authorization code is redeemed for tokens at the application's token endpoint, and the `MetadataAddress` property sets the discovery endpoint for obtaining metadata from Akamai.
+
+ ```csharp
+    // Usings required by this sample: OWIN startup, cookie and OpenID Connect middleware
+    using System.Threading.Tasks;
+    using Microsoft.IdentityModel.Protocols.OpenIdConnect;
+    using Microsoft.Owin.Security.Cookies;
+    using Microsoft.Owin.Security.Notifications;
+    using Microsoft.Owin.Security.OpenIdConnect;
+    using Owin;
+
+    public class Startup
+ {
+ // The Client ID is used by the application to uniquely identify itself to Azure AD.
+ string clientId = System.Configuration.ConfigurationManager.AppSettings["ClientId"];
+
+ //App Client Secret to redeem the code for an access token
+ string ClientSecret = System.Configuration.ConfigurationManager.AppSettings["ClientSecret"];
+
+ // RedirectUri is the URL where the user will be redirected to after they sign in.
+ string redirectUri = System.Configuration.ConfigurationManager.AppSettings["RedirectUri"];
+
+ // PostLogoutRedirectUri is the URL where the user will be redirected to after they sign out
+ string PostLogoutRedirectUri = System.Configuration.ConfigurationManager.AppSettings["PostLogoutRedirectUri"];
+
+ //Authority is the URL for authority
+ string authority = System.Configuration.ConfigurationManager.AppSettings["Authority"];
+
+ //discovery endpoint for obtaining metadata
+ string MetadataAddress = System.Configuration.ConfigurationManager.AppSettings["MetadataAddress"];
+
+ /// <summary>
+ /// Configure OWIN to use OpenIdConnect
+ /// </summary>
+ /// <param name="app"></param>
+ public void Configuration(IAppBuilder app)
+ {
+ app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
+
+ app.UseCookieAuthentication(new CookieAuthenticationOptions());
+ app.UseOpenIdConnectAuthentication(
+ new OpenIdConnectAuthenticationOptions
+ {
+ // Sets the ClientId, authority, RedirectUri as obtained from web.config
+ ClientId = clientId,
+ Authority = authority,
+ RedirectUri = redirectUri,
+ MetadataAddress = MetadataAddress,
+ // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it is using the home page
+ PostLogoutRedirectUri = redirectUri,
+ RedeemCode = true,
+ Scope = OpenIdConnectScope.OpenIdProfile,
+ // ResponseType is set to request the code id_token - which contains basic information about the signed-in user
+ ResponseType = OpenIdConnectResponseType.Code,
+ // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to OnAuthenticationFailed method
+ Notifications = new OpenIdConnectAuthenticationNotifications
+ {
+ AuthenticationFailed = OnAuthenticationFailed
+ }
+ }
+ );
+ }
+
+ /// <summary>
+ /// Handle failed authentication requests by redirecting the user to the home page with an error in the query string
+ /// </summary>
+ /// <param name="context"></param>
+ /// <returns></returns>
+ private Task OnAuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> context)
+ {
+ context.HandleResponse();
+ context.Response.Redirect("/?errormessage=" + context.Exception.Message);
+ return Task.FromResult(0);
+ }
+ }
+ ```
+
+8. In the `web.config` file, add the metadata address, and replace `ClientId`, `ClientSecret`, `Authority`, `RedirectUri`, and `PostLogoutRedirectUri` with the values from the Akamai application in `appSettings`.
+
+   You can find these values on the **OpenID** tab for the Akamai HTTP application from step 5 earlier; the **Discovery URL** is the `MetadataAddress`. `RedirectUri` is the local address for the Akamai connector to resolve to the local OIDC application. `Authority` is the `authorization_endpoint` you can find in your `.well-known/openid-configuration` [document](https://learn.microsoft.com/azure/active-directory/develop/v2-protocols-oidc).
+
+ Discovery URL: `https://fabrikam.login.go.akamai-access.com/.well-known/openid-configuration`
+
+ ```xml
+ <appSettings>
+ <add key="ClientId" value="xxxxxxxxxxxxxxxxxx" />
+ <add key="ClientSecret" value="xxxxxxxxxxxxxxxxxx" />
+ <add key="Authority" value="https://fabrikam.login.go.akamai-access.com/oidc/oauth" />
+    <add key="RedirectUri" value="http://oidcapp.identity.mistermik.com/" />
+ <add key="PostLogoutRedirectUri" value="https://oidc-test.go.akamai-access.com/" />
+ <add key="MetadataAddress" value="https://fabrikam.login.go.akamai-access.com/.well-known/openid-configuration" />
+ </appSettings>
+ ```
+   Test the application by selecting the Akamai URL for the custom HTTP type web application you created.
+
+ [ ![Screenshot shows the akamai oidc app results.](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-results.png)](./media/partner-akamai-secure-hybrid-access/akamai-oidc-app-results.png#lightbox)
+
+## Test the solution
+
+1. Navigate to the application by using the external URL specified in Akamai Enterprise Application Access.
+
+2. An unauthenticated user is redirected to the Azure AD B2C sign-in page.
+
+3. Select the IdP from the list on the page.
+
+3. Sign in as an end user, using credentials linked to Azure AD B2C.
+
+4. After successful authentication, the end user is redirected back to the application and signed in as the end user.
++
+## Additional resources
+
+- [Akamai Enterprise Application Access getting started documentation](https://techdocs.akamai.com/eaa/docs/welcome-guide)
+
+- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
+
+- [Register a SAML application in Azure AD B2C](https://learn.microsoft.com/azure/active-directory-b2c/saml-service-provider?tabs=windows&pivots=b2c-custom-policy)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs to provide secure hybrid access to on
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+| ![Screenshot of an Akamai logo.](./medi) is a Zero Trust Network Access (ZTNA) solution that enables secure remote access to modern and legacy applications that reside in private datacenters. |
| ![Screenshot of a Datawiza logo](./medi) enables SSO and granular access control for your applications and extends Azure AD B2C to protect on-premises legacy applications. |
| ![Screenshot of a F5 logo](./medi) enables legacy applications to securely expose to the internet through BIG-IP security combined with Azure AD B2C pre-authentication, Conditional Access (CA) and SSO. |
| ![Screenshot of a Ping logo](./medi) enables secure hybrid access to on-premises legacy applications across multiple clouds. |
active-directory-b2c Roles Resource Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md
When planning your access control strategy, it's best to assign users the least
|Resource |Description |Role |
|---|---|---|
|[Application registrations](tutorial-register-applications.md) | Create and manage all aspects of your web, mobile, and native application registrations within Azure AD B2C.|[Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator)|
+|Tenant Creator| Create new Azure AD or Azure AD B2C tenants.||
|[Identity providers](add-identity-provider.md)| Configure the [local identity provider](identity-provider-local.md) and external social or enterprise identity providers. | [External Identity Provider Administrator](../active-directory/roles/permissions-reference.md#external-identity-provider-administrator)|
|[API connectors](add-api-connector.md)| Integrate your user flows with web APIs to customize the user experience and integrate with external systems.|[External ID User Flow Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
|[Company branding](customize-ui.md#configure-company-branding)| Customize your user flow pages.| [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator)|
When planning your access control strategy, it's best to assign users the least
|Roles and administrators| Manage role assignments in Azure AD B2C directory. Create and manage groups that can be assigned to Azure AD B2C roles. |[Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator), [Privileged Role Administrator](../active-directory/roles/permissions-reference.md#privileged-role-administrator)|
|[User flows](user-flow-overview.md)|For quick configuration and enablement of common identity tasks, like sign-up, sign-in, and profile editing.| [External ID User Flow Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
|[Custom policies](user-flow-overview.md)| Create, read, update, and delete all custom policies in Azure AD B2C.| [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator)|
-|[Policy keys](policy-keys-overview.md)|Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords used in custom policies.|[B2C IEF Keyset Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-keyset-administrator)|
+|[Policy keys](policy-keys-overview.md)|Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords used in custom policies.|[B2C IEF Keyset Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-keyset-administrator)|
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management.md
Previously updated : 04/20/2022 Last updated : 11/24/2022
It's recommended that you protect all administrator accounts with multifactor au
If you're not using [Conditional Access](conditional-access-user-flow.md), you can enable [Azure AD security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md) to force all administrative accounts to use MFA.
+## Check tenant creation permission
+
+Before you create an Azure AD B2C tenant, make sure that you have permission to do so. Use these steps to check whether you have permission to create a tenant:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure Active Directory**.
+1. Under **Manage**, select **User Settings**.
+1. Review your **Tenant Creation** setting. If it's set to **No**, contact your administrator to assign you the Tenant Creator role. The setting is greyed out if you're not an administrator in the tenant.
## Get your tenant name

To get your Azure AD B2C tenant name, follow these steps:
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
You learn how to register an application in the next tutorial.
- An Azure account that's been assigned at least the [Contributor](../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.

## Create an Azure AD B2C tenant
+>[!NOTE]
+>If you're unable to create an Azure AD B2C tenant, review your user settings page to ensure that tenant creation isn't switched off. If it's switched off, ask your _Global Administrator_ to assign you the _Tenant Creator_ role.
1. Sign in to the [Azure portal](https://portal.azure.com/).
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Previously updated : 03/25/2022 Last updated : 11/17/2022
Customers who have configured CAE settings under Security before have to migrate
1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
1. You'll then see the option to **Migrate** your policy. This action is the only one that you'll have access to at this point.
-1. Browse to **Conditional Access** and you'll find a new policy named **CA policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
+1. Browse to **Conditional Access** and you'll find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
The following table describes the migration experience of each customer group based on previously configured CAE settings.
When Conditional Access policy or group membership changes need to be applied to
- Run the [revoke-mgusersign PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersign) to revoke all refresh tokens of a specified user.
- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies will be applied immediately.
-### IP address variation
+### IP address variation and networks with IP address shared or unknown egress IPs
-Your identity provider and resource providers may see different IP addresses. This mismatch may happen because of:
+Modern networks often optimize connectivity and network paths for applications differently. This optimization frequently causes variations of the routing and source IP addresses of connections, as seen by your identity provider and resource providers. You may observe this split path or IP address variation in multiple network topologies, including, but not limited to:
- Network proxy implementations in your organization
- Incorrect IPv4/IPv6 configurations between your identity provider and resource provider
-
-Examples:
+- On-premises and cloud-based proxies.
+- Virtual private network (VPN) implementations, like split tunneling.
+- Software defined wide area network (SD-WAN) deployments.
+- Load-balanced or redundant network egress topologies, like those using [SNAT](https://wikipedia.org/wiki/Network_address_translation#SNAT).
+- Branch office deployments that allow direct internet connectivity for specific applications.
+- Networks that support IPv6 clients.
+- Other topologies, which handle application or resource traffic differently from traffic to the identity provider.
- Your identity provider sees one IP address from the client while your resource provider sees a different IP address from the client after passing through a proxy.
- The IP address your identity provider sees is part of an allowed IP range in policy but the IP address from the resource provider isn't.
+In addition to IP variations, customers also may employ network solutions and services that:
-To avoid infinite loops because of these scenarios, Azure AD issues a one hour CAE token and won't enforce client location change. In this case, security is improved compared to traditional one hour tokens since we're still evaluating the [other events](#critical-event-evaluation) besides client location change events.
+- Use IP addresses that may be shared with other customers. For example, cloud-based proxy services where egress IP addresses are shared between customers.
+- Use easily varied or undefinable IP addresses. For example, topologies where there are large, dynamic sets of egress IP addresses used, like large enterprise scenarios or split VPN and local egress network traffic.
+
+Networks where egress IP addresses may change frequently or are shared may affect Azure AD Conditional Access and Continuous Access Evaluation (CAE). This variability can affect how these features work, and their recommended configurations.
+
+The following table summarizes Conditional Access and CAE feature behaviors and recommendations for different types of network deployments:
+
+| Network Type | Example | IPs seen by Azure AD | IPs seen by RP | Applicable CA Configuration (Trusted Named Location) | CAE enforcement | CAE access token | Recommendations |
+|||||||||
| 1. Egress IPs are dedicated and enumerable for both Azure AD and all RPs traffic | All network traffic to Azure AD and RPs egresses through 1.1.1.1 and/or 2.2.2.2 | 1.1.1.1 | 2.2.2.2 | 1.1.1.1 <br> 2.2.2.2 | Critical Events <br> IP location changes | Long lived - up to 28 hours | If CA Named Locations are defined, ensure that they contain all possible egress IPs (seen by Azure AD and all RPs) |
| 2. Egress IPs are dedicated and enumerable for Azure AD, but not for RPs traffic | Network traffic to Azure AD egresses through 1.1.1.1. RP traffic egresses through x.x.x.x | 1.1.1.1 | x.x.x.x | 1.1.1.1 | Critical Events | Default access token lifetime - 1 hour | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x) into Trusted Named Location CA rules as it can weaken security |
| 3. Egress IPs are non-dedicated/shared or not enumerable for both Azure AD and RPs traffic | Network traffic to Azure AD egresses through y.y.y.y. RP traffic egresses through x.x.x.x | y.y.y.y | x.x.x.x | N/A - no IP CA policies/Trusted Locations configured | Critical Events | Long lived - up to 28 hours | Don't add non-dedicated or non-enumerable egress IPs (x.x.x.x/y.y.y.y) into Trusted Named Location CA rules as it can weaken security |
+
+Networks and network services used by clients connecting to identity and resource providers continue to evolve and change in response to modern trends. These changes may affect Conditional Access and CAE configurations that rely on the underlying IP addresses. When deciding on these configurations, factor in future changes in technology and the upkeep of the defined address list in your plan.
### Supported location policies
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
For many administrators, PowerShell is already an understood scripting tool. The
- [Configure Conditional Access policies with Azure AD PowerShell commands](https://github.com/Azure-Samples/azure-ad-conditional-access-apis/tree/main/01-configure/powershell)
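For illustration, a sketch of creating a report-only policy with the AzureADPreview module's Conditional Access cmdlets (cmdlet and type names per that module; treat this as a starting point rather than a tested script):

```powershell
# Sketch: create a report-only sign-in risk policy with New-AzureADMSConditionalAccessPolicy
$conditions = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessConditionSet
$conditions.Applications = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessApplicationCondition
$conditions.Applications.IncludeApplications = "All"
$conditions.Users = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessUserCondition
$conditions.Users.IncludeUsers = "All"
$conditions.SignInRiskLevels = @("high", "medium")

$controls = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessGrantControls
$controls._Operator = "OR"          # The module exposes the operator as "_Operator"
$controls.BuiltInControls = @("mfa")

New-AzureADMSConditionalAccessPolicy `
    -DisplayName "Require MFA for medium+ sign-in risk" `
    -State "enabledForReportingButNotEnforced" `
    -Conditions $conditions `
    -GrantControls $controls
```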
-### Graph API
+### Microsoft Graph APIs
-This example shows the basic Create, Read, Update, and Delete (CRUD) options available in the Conditional Access Graph APIs. The example also includes some JSON templates you can use to create some sample policies.
+This example shows the basic Create, Read, Update, and Delete (CRUD) options available in the Conditional Access APIs in Microsoft Graph. The example also includes some JSON templates you can use to create some sample policies.
- [Configure Conditional Access policies with Microsoft Graph API calls](https://github.com/Azure-Samples/azure-ad-conditional-access-apis/tree/main/01-configure/graphapi)
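A minimal sketch of the corresponding create call against the v1.0 endpoint (the request body follows the same `conditionalAccessPolicy` schema as the templates above):

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
    "displayName": "Require MFA for medium+ sign-in risk",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "signInRiskLevels": [ "high", "medium" ],
        "applications": { "includeApplications": [ "All" ] },
        "users": { "includeUsers": [ "All" ] }
    },
    "grantControls": { "operator": "OR", "builtInControls": [ "mfa" ] }
}
```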
Things don't always work the way you want, when that happens you need a way to g
## Community contribution
-These samples are available in our [GitHub repository](https://github.com/Azure-Samples/azure-ad-conditional-access-apis). We are happy to support community contributions thorough GitHub Issues and Pull Requests.
+These samples are available in our [GitHub repository](https://github.com/Azure-Samples/azure-ad-conditional-access-apis). We are happy to support community contributions through GitHub Issues and Pull Requests.
## Next steps
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
The first step is to add code to handle a response from the resource API rejecti
For example:

```console
+// Line breaks for legibility only
+ HTTP 401; Unauthorized
-WWW-Authenticate=Bearer
- authorization_uri="https://login.windows.net/common/oauth2/authorize",
+
+Bearer authorization_uri="https://login.windows.net/common/oauth2/authorize",
error="insufficient_claims", claims="eyJhY2Nlc3NfdG9rZW4iOnsibmJmIjp7ImVzc2VudGlhbCI6dHJ1ZSwgInZhbHVlIjoiMTYwNDEwNjY1MSJ9fX0=" ```
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
-+ Last updated 08/19/2022
ms.devlang: csharp, javascript #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.+ # Tutorial: Access Microsoft Graph from a secured app as the app
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
-+ Last updated 04/25/2022
ms.devlang: csharp, javascript #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph from a web app for a signed-in user.+ # Tutorial: Access Microsoft Graph from a secured app as the user
active-directory Multi Service Web App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md
description: In this tutorial, you learn how to access Azure Storage for an app
-+ Last updated 04/25/2021
ms.devlang: csharp, javascript #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.+ # Tutorial: Access Azure Storage from a web app
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
-+ Last updated 04/25/2022
#Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.+ # Tutorial: Add authentication to your web app running on Azure App Service
active-directory Multi Service Web App Clean Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-clean-up-resources.md
-+ Last updated 04/25/2022
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app using managed identities.+ # Tutorial: Clean up resources
active-directory Multi Service Web App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-overview.md
-+ Last updated 04/25/2022
#Customer intent: As an application developer, I want to learn how to secure access to a web app running on Azure App Service.+ # Tutorial: Sign in users in App Service and access storage and Microsoft Graph
active-directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md
Previously updated : 04/06/2018 Last updated : 11/24/2022 -+ # B2B collaboration user claims mapping in Azure Active Directory
-Azure Active Directory (Azure AD) supports customizing the claims that are issued in the SAML token for B2B collaboration users. When a user authenticates to the application, Azure AD issues a SAML token to the app that contains information (or claims) about the user that uniquely identifies them. By default, this includes the user's user name, email address, first name, and last name.
+Azure Active Directory (Azure AD) supports customizing the claims that are issued in the SAML token for [B2B collaboration](what-is-b2b.md) users. When a user authenticates to the application, Azure AD issues a SAML token to the app that contains information (or claims) about the user that uniquely identifies them. By default, these claims include the user's user name, email address, first name, and last name.
In the [Azure portal](https://portal.azure.com), you can view or edit the claims that are sent in the SAML token to the application. To access the settings, select **Azure Active Directory** > **Enterprise applications** > the application that's configured for single sign-on > **Single sign-on**. See the SAML token settings in the **User Attributes** section.
-![Shows the SAML token attributes in the UI](media/claims-mapping/view-claims-in-saml-token.png)
There are two possible reasons why you might need to edit the claims that are issued in the SAML token:

1. The application requires a different set of claim URIs or claim values.
-2. The application requires the NameIdentifier claim to be something other than the user principal name (UPN) that's stored in Azure AD.
+2. The application requires the NameIdentifier claim to be something other than the user principal name [(UPN)](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) that's stored in Azure AD.
For information about how to add and edit claims, see [Customizing claims issued in the SAML token for enterprise applications in Azure Active Directory](../develop/active-directory-saml-claims-customization.md).
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
If you don't have an Azure subscription, create a [free account](https://azure.m
After you sign in to the Azure portal, you can create a new tenant for your organization. Your new tenant represents your organization and helps you to manage a specific instance of Microsoft cloud services for your internal and external users.
+>[!Note]
+>If you're unable to create an Azure AD B2C tenant, review your user settings page to ensure that tenant creation isn't switched off. If it's switched off, ask your _Global Administrator_ to assign you the _Tenant Creator_ role.
+ ### To create a new tenant 1. Sign in to your organization's [Azure portal](https://portal.azure.com/).
active-directory Concept Learn About Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-learn-about-groups.md
# Learn about groups and access rights in Azure Active Directory
-Azure Active Directory (Azure AD) provides several ways to manage access to resources, applications, and tasks. With Azure AD groups, you can grant access and permissions to a group of users instead of for each individual user. Limiting access to Azure AD resources to only those users who need access is one of the core security principals of [Zero Trust](/security/zero-trust/zero-trust-overview). This article provides an overview of how groups and access rights can be used together to make managing your Azure AD users easier while also applying security best practices.
+Azure Active Directory (Azure AD) provides several ways to manage access to resources, applications, and tasks. With Azure AD groups, you can grant access and permissions to a group of users instead of for each individual user. Limiting access to Azure AD resources to only those users who need access is one of the core security principles of [Zero Trust](/security/zero-trust/zero-trust-overview). This article provides an overview of how groups and access rights can be used together to make managing your Azure AD users easier while also applying security best practices.
Azure AD lets you use groups to manage access to applications, data, and resources. Resources can be:
After a user requests to join a group, the request is forwarded to the group own
- [Manage dynamic rules for users in a group](../enterprise-users/groups-create-rule.md) -- [Learn about Privileged Identity Management for Azure AD roles](../../active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)
+- [Learn about Privileged Identity Management for Azure AD roles](../../active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| Permission | Setting explanation | | - | |
-| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can the grant the ability back to specific individuals, by adding them to the application developer role. |
+| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
+| **Create tenants** | By default, all of your users can create new tenants. If you set this option to **No**, you prevent users from creating new Azure AD or Azure AD B2C tenants. You can grant the ability back to specific individuals by adding them to the Tenant Creator role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). |
| **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
| **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
Previously updated : 06/20/2022 Last updated : 11/24/2022
You can further restrict permissions by assigning roles at smaller scopes or by
> | View Temporary Access Pass details for a user (without reading the code itself) | [Global Reader](permissions-reference.md#global-reader) | |
> | Configure or update the Temporary Access Pass authentication method policy | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+## Tenant Creation
+
+> [!div class="mx-tableFixed"]
+> | Task | Least privileged role | Additional roles |
+> | - | | - |
+> | Create Azure AD or Azure AD B2C Tenant | [Tenant Creator](permissions-reference.md#tenant-creator) | [Global Administrator](permissions-reference.md#global-administrator) |
+ ## Users > [!div class="mx-tableFixed"]
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Teams Communications Support Engineer](#teams-communications-support-engineer) | Can troubleshoot communications issues within Teams using advanced tools. | f70938a0-fc10-4177-9e90-2178f8765737 | > | [Teams Communications Support Specialist](#teams-communications-support-specialist) | Can troubleshoot communications issues within Teams using basic tools. | fcf91098-03e3-41a9-b5ba-6f0ec8188a12 | > | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 |
+> | [Tenant Creator](#tenant-creator) | Create new Azure AD or Azure AD B2C tenants. | 112ca1a2-15ad-4102-995e-45b0bc479a6a |
> | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e | > | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 | > | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 |
Users with this role can manage [Teams-certified devices](https://www.microsoft.
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.teams/devices/standard/read | Manage all aspects of Teams-certified devices including configuration policies |
+ ## Tenant Creator
+
+Assign the Tenant Creator role to users who need to do the following tasks:
+- Create both Azure Active Directory and Azure Active Directory B2C tenants even if the tenant creation toggle is turned off in the user settings
+> [!NOTE]
+>The tenant creators will be assigned the Global administrator role on the new tenants they create.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/tenantManagement/tenants/create | Create new tenants in Azure Active Directory |
+
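For illustration only (this request isn't shown in the source article): assigning the Tenant Creator role programmatically would presumably go through the Microsoft Graph role assignments endpoint, `POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments`, using the role's template ID from the table above. The `principalId` placeholder is the target user's object ID.

```json
{
  "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
  "roleDefinitionId": "112ca1a2-15ad-4102-995e-45b0bc479a6a",
  "principalId": "<user-object-id>",
  "directoryScopeId": "/"
}
```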
## Usage Summary Reports Reader

Users with this role can access tenant level aggregated data and associated insights in Microsoft 365 admin center for Usage and Productivity Score but cannot access any user level details or insights. In Microsoft 365 admin center for the two reports, we differentiate between tenant level aggregated data and user level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams.
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
You can also configure more granular details of the cluster autoscaler by changi
| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds | | balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false | | expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
-| skip-nodes-with-local-storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |
+| skip-nodes-with-local-storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath | false |
| skip-nodes-with-system-pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true | | max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes | | new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. | 0 seconds |
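As a quick, hedged example (the resource group and cluster names are placeholders), these settings are typically adjusted on an existing cluster through the `--cluster-autoscaler-profile` parameter of `az aks update`:

```azurecli
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile scan-interval=30s skip-nodes-with-local-storage=false
```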
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Use existing views and reports in Container Insights to monitor cluster level co
:::image type="content" source="media/monitor-aks/container-insights-cluster-view.png" alt-text="Container insights cluster view" lightbox="media/monitor-aks/container-insights-cluster-view.png":::
-Use **Node** workbooks in Container Insights to analyze disk capacity and IO in addition to GPU usage. See [Node workbooks](../azure-monitor/containers/container-insights-reports.md#node-workbooks) for a description of these workbooks.
+Use **Node** workbooks in Container Insights to analyze disk capacity and IO in addition to GPU usage. See [Node Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#node-monitoring-workbooks) for a description of these workbooks.
:::image type="content" source="media/monitor-aks/container-insights-node-workbooks.png" alt-text="Container insights node workbooks" lightbox="media/monitor-aks/container-insights-node-workbooks.png":::
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
-+ Last updated 08/19/2022
ms.devlang: csharp #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.+ # Tutorial: Access Microsoft Graph from a secured .NET app as the app
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
-+ Last updated 03/08/2022
ms.devlang: csharp #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.+ # Tutorial: Access Microsoft Graph from a secured .NET app as the user
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
description: In this tutorial, you learn how to access Azure Storage for a .NET
-+ Last updated 02/16/2022
ms.devlang: csharp, azurecli #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.+ # Tutorial: Access Azure services from a .NET web app
app-service Scenario Secure App Authentication App Service As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service-as-user.md
-+ Last updated 02/25/2022
#Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.+ # Tutorial: Add user authentication to your web app running on Azure App Service
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md
-+ Last updated 02/25/2022
#Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.+ # Tutorial: Add app authentication to your web app running on Azure App Service
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md
-+ Last updated 12/10/2021
#Customer intent: As an application developer, I want to learn how to secure access to a web app running on Azure App Service.+ # Tutorial: Enable authentication in App Service and access storage and Microsoft Graph
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
-+ Last updated 01/21/2022
ms.devlang: javascript #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.+ # Tutorial: Access Microsoft Graph from a secured JavaScript app as the app
app-service Tutorial Connect App Access Microsoft Graph As User Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
-+ Last updated 03/08/2022
ms.devlang: csharp #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.+ # Tutorial: Access Microsoft Graph from a secured JavaScript app as the user
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
description: In this tutorial, you learn how to access Azure Storage for a JavaS
-+ Last updated 02/16/2022
ms.devlang: javascript, azurecli #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.+ # Tutorial: Access Azure services from a JavaScript web app
azure-functions Durable Functions Cloud Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-cloud-backup.md
After awaiting from `Task.WhenAll`, we know that all function calls have complet
The function uses the standard *function.json* for orchestrator functions.
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E2_BackupSiteContent/function.json)]
Here is the code that implements the orchestrator function:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E2_BackupSiteContent/index.js)]
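Because the sample isn't inlined in this digest, here's a condensed sketch of the fan-out/fan-in shape; the byte-count aggregation is illustrative:

```javascript
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    const rootDirectory = context.df.getInput();
    const files = yield context.df.callActivity("E2_GetFileList", rootDirectory);

    // Schedule every copy without yielding so the calls run in parallel.
    const tasks = files.map(file => context.df.callActivity("E2_CopyFileToBlob", file));

    // Yield once on the combined task; it completes only when every copy has finished.
    const results = yield context.df.Task.all(tasks);

    // Sum the bytes reported by each activity (illustrative aggregation).
    return results.reduce((total, bytes) => total + bytes, 0);
});
```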
Notice the `yield context.df.Task.all(tasks);` line. All the individual calls to the `E2_CopyFileToBlob` function were *not* yielded, which allows them to run in parallel. When we pass this array of tasks to `context.df.Task.all`, we get back a task that won't complete *until all the copy operations have completed*. If you're familiar with [`Promise.all`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) in JavaScript, then this is not new to you. The difference is that these tasks could be running on multiple virtual machines concurrently, and the Durable Functions extension ensures that the end-to-end execution is resilient to process recycling.
The helper activity functions, as with other samples, are just regular functions
The *function.json* file for `E2_GetFileList` looks like the following:
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E2_GetFileList/function.json)]
And here is the implementation:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E2_GetFileList/index.js)]
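A minimal sketch of such an implementation, assuming the readdirp 2.x streaming API (`root`, `entryType`, and `fullPath`); this isn't the verbatim sample:

```javascript
const readdirp = require("readdirp");

module.exports = function (context, rootDirectory) {
    return new Promise((resolve, reject) => {
        const allFilePaths = [];
        // readdirp 2.x emits one entry per file found under the root directory.
        readdirp({ root: rootDirectory, entryType: "files" })
            .on("data", entry => allFilePaths.push(entry.fullPath))
            .on("error", reject)
            .on("end", () => resolve(allFilePaths));
    });
};
```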
The function uses the `readdirp` module (version 2.x) to recursively read the directory structure.
The function uses some advanced features of Azure Functions bindings (that is, t
The *function.json* file for `E2_CopyFileToBlob` is similarly simple:
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E2_CopyFileToBlob/function.json)]
The JavaScript implementation uses the [Azure Storage SDK for Node](https://github.com/Azure/azure-storage-node) to upload the files to Azure Blob Storage.
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E2_CopyFileToBlob/index.js)]
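A hedged sketch of the upload step; the `backups` container name is hypothetical, and a connection string is assumed in the `AZURE_STORAGE_CONNECTION_STRING` environment variable:

```javascript
const storage = require("azure-storage");

module.exports = function (context, filePath) {
    // createBlobService reads AZURE_STORAGE_CONNECTION_STRING by default.
    const blobService = storage.createBlobService();
    const containerName = "backups"; // hypothetical container; assumed to exist

    return new Promise((resolve, reject) => {
        // Stream the local file into a block blob named after its path.
        blobService.createBlockBlobFromLocalFile(containerName, filePath, filePath,
            (error, result) => (error ? reject(error) : resolve(result)));
    });
};
```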
# [Python](#tab/python)
azure-functions Durable Functions Http Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-features.md
The [orchestration client binding](durable-functions-bindings.md#orchestration-c
**index.js**
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/HttpStart/index.js)]
**function.json**
-[!code-json[Main](~/samples-durable-functions/samples/javascript/HttpStart/function.json)]
# [Python](#tab/python)
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
Here is an example HTTP-trigger function that demonstrates how to use this API:
# [JavaScript](#tab/javascript)
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/HttpSyncStart/index.js)]
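A trimmed sketch of the synchronous-start pattern; the 30-second timeout, 1-second polling interval, and route parameter are illustrative values, not the sample's exact code:

```javascript
const df = require("durable-functions");

module.exports = async function (context, req) {
    const client = df.getClient(context);
    const instanceId = await client.startNew(req.params.functionName, undefined, req.body);

    // Wait up to 30 seconds (polling every second) for the orchestration to
    // finish; otherwise fall back to the usual asynchronous 202 status response.
    return client.waitForCompletionOrCreateCheckStatusResponse(
        context.bindingData.req, instanceId, 30000, 1000);
};
```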
See [Start instances](#javascript-function-json) for the function.json configuration.
azure-functions Durable Functions Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-monitor.md
The orchestrator requires a location to monitor and a phone number to send a mes
The **E3_Monitor** function uses the standard *function.json* for orchestrator functions.
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E3_Monitor/function.json)]
Here is the code that implements the function:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_Monitor/index.js)]
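A hedged sketch of the monitor pattern this orchestrator implements; the six-hour expiration and 30-minute polling interval are illustrative:

```javascript
const df = require("durable-functions");
const moment = require("moment");

module.exports = df.orchestrator(function* (context) {
    const { location, phone } = context.df.getInput();
    const endTime = moment.utc(context.df.currentUtcDateTime).add(6, "h");

    while (moment.utc(context.df.currentUtcDateTime).isBefore(endTime)) {
        const isClear = yield context.df.callActivity("E3_GetIsClear", location);
        if (isClear) {
            yield context.df.callActivity("E3_SendGoodWeatherAlert", phone);
            break;
        }
        // Sleep until the next poll; durable timers are replay-safe.
        const nextCheck = moment.utc(context.df.currentUtcDateTime).add(30, "m");
        yield context.df.createTimer(nextCheck.toDate());
    }
});
```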
As with other samples, the helper activity functions are regular functions that
The *function.json* is defined as follows:
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E3_GetIsClear/function.json)]
And here is the implementation.
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_GetIsClear/index.js)]
The **E3_SendGoodWeatherAlert** function uses the Twilio binding to send an SMS
Its *function.json* is simple:
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E3_SendGoodWeatherAlert/function.json)]
And here is the code that sends the SMS message:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E3_SendGoodWeatherAlert/index.js)]
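A minimal sketch, assuming a Twilio output binding named `message` and the phone number as the activity input (both names are assumptions, not confirmed by this digest):

```javascript
module.exports = async function (context, phoneNumber) {
    // Assigning to the Twilio output binding queues the SMS for delivery.
    context.bindings.message = {
        body: "The weather's clear outside! Head out and enjoy.",
        to: phoneNumber // assumed activityTrigger input
    };
};
```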
azure-functions Durable Functions Phone Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md
This article walks through the following functions in the sample app:
The **E4_SmsPhoneVerification** function uses the standard *function.json* for orchestrator functions.
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E4_SmsPhoneVerification/function.json)]
Here is the code that implements the function:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E4_SmsPhoneVerification/index.js)]
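A trimmed sketch of the racing pattern the orchestrator uses; the 90-second window, event name, and single-attempt flow are illustrative simplifications:

```javascript
const df = require("durable-functions");
const moment = require("moment");

module.exports = df.orchestrator(function* (context) {
    const phoneNumber = context.df.getInput();
    const challengeCode = yield context.df.callActivity("E4_SendSmsChallenge", phoneNumber);

    // currentUtcDateTime returns the same value on every replay,
    // so the computed deadline is deterministic.
    const expiration = moment.utc(context.df.currentUtcDateTime).add(90, "s");
    const timeoutTask = context.df.createTimer(expiration.toDate());
    const responseTask = context.df.waitForExternalEvent("SmsChallengeResponse");

    const winner = yield context.df.Task.any([responseTask, timeoutTask]);
    if (!timeoutTask.isCompleted) {
        timeoutTask.cancel(); // Always cancel an outstanding durable timer.
    }
    return winner === responseTask && responseTask.result === challengeCode;
});
```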
> [!NOTE] > It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `currentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `context.df.Task.any`.
The **E4_SendSmsChallenge** function uses the Twilio binding to send the SMS mes
The *function.json* is defined as follows:
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E4_SendSmsChallenge/function.json)]
And here is the code that generates the four-digit challenge code and sends the SMS message:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E4_SendSmsChallenge/index.js)]
# [Python](#tab/python)
azure-functions Durable Functions Sequence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md
The code calls `E1_SayHello` three times in sequence with different parameter va
If you use Visual Studio Code or the Azure portal for development, here's the content of the *function.json* file for the orchestrator function. Most orchestrator *function.json* files look almost exactly like this.
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E1_HelloSequence/function.json)]
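Since the include isn't rendered in this digest, the following is a representative sketch of an orchestrator's *function.json*; the binding name `context` is a common convention, not confirmed here:

```json
{
  "bindings": [
    {
      "name": "context",
      "type": "orchestrationTrigger",
      "direction": "in"
    }
  ]
}
```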
The important thing is the `orchestrationTrigger` binding type. All orchestrator functions must use this trigger type.
The important thing is the `orchestrationTrigger` binding type. All orchestrator
Here is the orchestrator function:
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E1_HelloSequence/index.js)]
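A hedged sketch of what this orchestrator looks like; the activity inputs are illustrative:

```javascript
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    const outputs = [];
    // Each yield checkpoints the orchestration before the next activity call.
    outputs.push(yield context.df.callActivity("E1_SayHello", "Tokyo"));
    outputs.push(yield context.df.callActivity("E1_SayHello", "Seattle"));
    outputs.push(yield context.df.callActivity("E1_SayHello", "London"));
    return outputs; // e.g. ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
});
```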
All JavaScript orchestration functions must include the [`durable-functions` module](https://www.npmjs.com/package/durable-functions). It's a library that enables you to write Durable Functions in JavaScript. There are three significant differences between an orchestrator function and other JavaScript functions:
Instead of binding to an `IDurableActivityContext`, you can bind directly to the
The *function.json* file for the activity function `E1_SayHello` is similar to that of `E1_HelloSequence` except that it uses an `activityTrigger` binding type instead of an `orchestrationTrigger` binding type.
-[!code-json[Main](~/samples-durable-functions/samples/javascript/E1_SayHello/function.json)]
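A representative sketch of that *function.json*; the binding name `name` matches the input referenced below:

```json
{
  "bindings": [
    {
      "name": "name",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}
```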
> [!NOTE] > All activity functions called by an orchestration function must use the `activityTrigger` binding.
The implementation of `E1_SayHello` is a relatively trivial string formatting op
#### E1_SayHello/index.js
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/E1_SayHello/index.js)]
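A minimal sketch of the activity implementation, assuming the `name` binding shown earlier:

```javascript
module.exports = async function (context) {
    // The activity input arrives under the activityTrigger binding's name.
    return `Hello ${context.bindings.name}!`;
};
```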
Unlike the orchestration function, an activity function needs no special setup. The input passed to it by the orchestrator function is located on the `context.bindings` object under the name of the `activityTrigger` binding - in this case, `context.bindings.name`. The binding name can be set as a parameter of the exported function and accessed directly, which is what the sample code does.
To interact with orchestrators, the function must include a `DurableClient` inpu
#### HttpStart/function.json
-[!code-json[Main](~/samples-durable-functions/samples/javascript/HttpStart/function.json?highlight=16-20)]
To interact with orchestrators, the function must include a `durableClient` input binding.

#### HttpStart/index.js
-[!code-javascript[Main](~/samples-durable-functions/samples/javascript/HttpStart/index.js)]
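A hedged sketch of that client pattern; the route parameter and payload handling are illustrative:

```javascript
const df = require("durable-functions");

module.exports = async function (context, req) {
    const client = df.getClient(context);

    // Start the orchestrator named in the route, passing the request body as input.
    const instanceId = await client.startNew(req.params.functionName, undefined, req.body);
    context.log(`Started orchestration with ID = '${instanceId}'.`);

    // Returns a 202 response whose headers and body contain status-query URLs.
    return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};
```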
Use `df.getClient` to obtain a `DurableOrchestrationClient` object. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR. > [!IMPORTANT]
- > Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace.
+ > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace.
```json {
azure-monitor Container Insights Deployment Hpa Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-deployment-hpa-metrics.md
Title: Deployment & HPA metrics with Container insights | Microsoft Docs
-description: This article describes what deployment & HPA (Horizontal pod autoscaler) metrics are collected with Container insights.
+ Title: Deployment and HPA metrics with Container insights | Microsoft Docs
+description: This article describes what deployment and HPA metrics are collected with Container insights.
Last updated 08/29/2022
-# Deployment & HPA metrics with Container insights
+# Deployment and HPA metrics with Container insights
-Starting with agent version *ciprod08072020*, Container insights-integrated agent now collects metrics for Deployments & HPAs.
+The Container insights integrated agent now collects metrics for deployments and horizontal pod autoscalers (HPAs) starting with agent version *ciprod08072020*.
## Deployment metrics
-Container insights automatically starts monitoring Deployments, by collecting the following metrics at 60 sec intervals and storing them in the **InsightMetrics** table:
+Container insights automatically starts monitoring deployments by collecting the following metrics at 60-second intervals and storing them in the **InsightMetrics** table.
|Metric name |Metric dimension (tags) |Description | ||||
-|kube_deployment_status_replicas_ready |container.azm.ms/clusterId, container.azm.ms/clusterName, creationTime, deployment, deploymentStrategy, k8sNamespace, spec_replicas, status_replicas_available, status_replicas_updated (status.updatedReplicas) | Total number of ready pods targeted by this deployment (status.readyReplicas). Below are dimensions of this metric. <ul> <li> deployment - name of the deployment </li> <li> k8sNamespace - Kubernetes namespace for the deployment </li> <li> deploymentStrategy - Deployment strategy to use to replace pods with new ones (spec.strategy.type)</li><li> creationTime - deployment creation timestamp </li> <li> spec_replicas - Number of desired pods (spec.replicas) </li> <li>status_replicas_available - Total number of available pods (ready for at least minReadySeconds) targeted by this deployment (status.availableReplicas)</li><li>status_replicas_updated - Total number of non-terminated pods targeted by this deployment that have the desired template spec (status.updatedReplicas) </li></ul>|
+|kube_deployment_status_replicas_ready |container.azm.ms/clusterId, container.azm.ms/clusterName, creationTime, deployment, deploymentStrategy, k8sNamespace, spec_replicas, status_replicas_available, status_replicas_updated (status.updatedReplicas) | Total number of ready pods targeted by this deployment (status.readyReplicas). The dimensions of this metric are: <ul> <li> deployment - name of the deployment </li> <li> k8sNamespace - Kubernetes namespace for the deployment </li> <li> deploymentStrategy - Deployment strategy to use to replace pods with new ones (spec.strategy.type)</li><li> creationTime - deployment creation timestamp </li> <li> spec_replicas - Number of desired pods (spec.replicas) </li> <li>status_replicas_available - Total number of available pods (ready for at least minReadySeconds) targeted by this deployment (status.availableReplicas)</li><li>status_replicas_updated - Total number of non-terminated pods targeted by this deployment that have the desired template spec (status.updatedReplicas) </li></ul>|
## HPA metrics
-Container insights automatically starts monitoring HPAs, by collecting the following metrics at 60 sec intervals and storing them in the **InsightMetrics** table:
+Container insights automatically starts monitoring HPAs by collecting the following metrics at 60-second intervals and storing them in the **InsightMetrics** table.
|Metric name |Metric dimension (tags) |Description | ||||
-|kube_hpa_status_current_replicas |container.azm.ms/clusterId, container.azm.ms/clusterName, creationTime, hpa, k8sNamespace, lastScaleTime, spec_max_replicas, spec_min_replicas, status_desired_replicas, targetKind, targetName | Current number of replicas of pods managed by this autoscaler (status.currentReplicas). Below are dimensions of this metric. <ul> <li> hpa - name of the HPA </li> <li> k8sNamespace - Kubernetes namespace for the HPA </li> <li> lastScaleTime - Last time the HPA scaled the number of pods (status.lastScaleTime)</li><li> creationTime - HPA creation timestamp </li> <li> spec_max_replicas - Upper limit for the number of pods that can be set by the autoscaler (spec.maxReplicas) </li> <li> spec_min_replicas - Lower limit for the number of replicas to which the autoscaler can scale down (spec.minReplicas) </li><li>status_desired_replicas - Desired number of replicas of pods managed by this autoscaler (status.desiredReplicas)</li><li>targetKind - Kind of the HPA's target(spec.scaleTargetRef.kind) </li><li>targetName - Name of the HPA's target (spec.scaleTargetRef.name) </li></ul>|
+|kube_hpa_status_current_replicas |container.azm.ms/clusterId, container.azm.ms/clusterName, creationTime, hpa, k8sNamespace, lastScaleTime, spec_max_replicas, spec_min_replicas, status_desired_replicas, targetKind, targetName | Current number of replicas of pods managed by this autoscaler (status.currentReplicas). The dimensions of this metric are: <ul> <li> hpa - name of the HPA </li> <li> k8sNamespace - Kubernetes namespace for the HPA </li> <li> lastScaleTime - Last time the HPA scaled the number of pods (status.lastScaleTime)</li><li> creationTime - HPA creation timestamp </li> <li> spec_max_replicas - Upper limit for the number of pods that can be set by the autoscaler (spec.maxReplicas) </li> <li> spec_min_replicas - Lower limit for the number of replicas to which the autoscaler can scale down (spec.minReplicas) </li><li>status_desired_replicas - Desired number of replicas of pods managed by this autoscaler (status.desiredReplicas)</li><li>targetKind - Kind of the HPA's target (spec.scaleTargetRef.kind) </li><li>targetName - Name of the HPA's target (spec.scaleTargetRef.name) </li></ul>|
-## Deployment & HPA charts
+## Deployment and HPA charts
-Container insights includes pre-configured charts for the metrics listed earlier in the table as a workbook for every cluster. You can find the deployments & HPA workbook **Deployments & HPA** directly from an AKS cluster by selecting **Workbooks** from the left-hand pane, and from the **View Workbooks** drop-down list in the Insight.
+Container insights includes preconfigured charts for the metrics listed earlier in the table as a workbook for every cluster. You can find the deployments and HPA workbook **Deployments & HPA** directly from an Azure Kubernetes Service cluster. On the left pane, select **Workbooks** and select **View Workbooks** from the dropdown list in the insight.
## Next steps -- Review [Kube-state metrics in Kubernetes](https://github.com/kubernetes/kube-state-metrics/tree/master/docs) to learn more about Kube-state metrics.
+Review [Kube-state metrics in Kubernetes](https://github.com/kubernetes/kube-state-metrics/tree/master/docs) to learn more about Kube-state metrics.
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Title: Configure Hybrid Kubernetes clusters with Container insights | Microsoft Docs
-description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environment.
+ Title: Configure hybrid Kubernetes clusters with Container insights | Microsoft Docs
+description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environments.
Last updated 06/30/2020
# Configure hybrid Kubernetes clusters with Container insights
-Container insights provides rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
+Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
## Supported configurations
-The following configurations are officially supported with Container insights. If you have a different version of Kubernetes and operating system versions, please send a mail to askcoin@microsoft.com.
+The following configurations are officially supported with Container insights. If you have different Kubernetes or operating system versions, send an email to askcoin@microsoft.com.
- Environments:-
- - Kubernetes on-premises
- - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview)
- - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4 and higher, on-premises or other cloud environments.
-
+ - Kubernetes on-premises.
+ - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
+ - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4 and higher, on-premises or in other cloud environments.
- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md).- - The following container runtimes are supported: Docker, Moby, and CRI compatible runtimes such as CRI-O and ContainerD.--- Linux OS release for master and worker nodes supported are: Ubuntu (18.04 LTS and 16.04 LTS), and Red Hat Enterprise Linux CoreOS 43.81.--- Access control supported: Kubernetes RBAC and non-RBAC
+- The supported Linux OS releases for main and worker nodes are Ubuntu (18.04 LTS and 16.04 LTS) and Red Hat Enterprise Linux CoreOS 43.81.
+- Access control supported: Kubernetes role-based access control (RBAC) and non-RBAC.
## Prerequisites
-Before you start, make sure that you have the following:
+Before you start, make sure that you meet the following prerequisites:
-- [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
+- You have a [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).
>[!NOTE]
- >Enable monitoring of multiple clusters with the same cluster name to same Log Analytics workspace is not supported. Cluster names must be unique.
+ >Enabling the monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace isn't supported. Cluster names must be unique.
> -- You are a member of the **Log Analytics contributor role** to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md).--- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.--- [HELM client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.-
+- You're a member of the Log Analytics contributor role to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md).
+- To view the monitoring data, you must have the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
+- You have a [Helm client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor:
- |Agent Resource|Ports |
+ |Agent resource|Ports |
||| |*.ods.opinsights.azure.com |Port 443 | |*.oms.opinsights.azure.com |Port 443 | |*.dc.services.visualstudio.com |Port 443 | -- The containerized agent requires Kubelet's `cAdvisor secure port: 10250` or `unsecure port :10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend you configure `secure port: 10250` on the Kubelet's cAdvisor if it's not configured already.--- The containerized agent requires the following environmental variables to be specified on the container in order to communicate with the Kubernetes API service within the cluster to collect inventory data - `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.
+- The containerized agent requires the Kubelet `cAdvisor secure port: 10250` or `unsecure port :10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend that you configure `secure port: 10250` on the Kubelet cAdvisor if it isn't configured already.
+- The containerized agent requires the following environmental variables to be specified on the container to communicate with the Kubernetes API service within the cluster to collect inventory data: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.
>[!IMPORTANT]
->The minimum agent version supported for monitoring hybrid Kubernetes clusters is ciprod10182019 or later.
+>The minimum agent version supported for monitoring hybrid Kubernetes clusters is *ciprod10182019* or later.
## Enable monitoring
-Enabling Container insights for the hybrid Kubernetes cluster consists of performing the following steps in order.
+To enable Container insights for the hybrid Kubernetes cluster:
-1. Configure your Log Analytics workspace with Container Insights solution.
+1. Configure your Log Analytics workspace with the Container insights solution.
-2. Enable the Container insights HELM chart with Log Analytics workspace.
+1. Enable the Container insights Helm chart with a Log Analytics workspace.
-For additional information on Monitoring solutions in Azure Monitor refer [here](../../azure-monitor/insights/solutions.md).
+For more information on monitoring solutions in Azure Monitor, see [Monitoring solutions in Azure Monitor](../../azure-monitor/insights/solutions.md).
-### How to add the Azure Monitor Containers solution
+### Add the Azure Monitor Containers solution
-You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with Azure CLI.
+You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with the Azure CLI.
-If you are unfamiliar with the concept of deploying resources by using a template, see:
+If you're unfamiliar with the concept of deploying resources by using a template, see:
- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)- - [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md) If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-This method includes two JSON templates. One template specifies the configuration to enable monitoring, and the other contains parameter values that you configure to specify the following:
+This method includes two JSON templates. One template specifies the configuration to enable monitoring. The other template contains parameter values that you configure to specify:
-- **workspaceResourceId** - the full resource ID of your Log Analytics workspace.-- **workspaceRegion** - the region the workspace is created in, which is also referred to as **Location** in the workspace properties when viewing from the Azure portal.
+- `workspaceResourceId`: The full resource ID of your Log Analytics workspace.
+- `workspaceRegion`: The region the workspace is created in, which is also referred to as **Location** in the workspace properties when you view them from the Azure portal.
-To first identify the full resource ID of your Log Analytics workspace required for the `workspaceResourceId` parameter value in the **containerSolutionParams.json** file, perform the following steps and then run the PowerShell cmdlet or Azure CLI command to add the solution.
+To first identify the full resource ID of your Log Analytics workspace that's required for the `workspaceResourceId` parameter value in the *containerSolutionParams.json* file, perform the following steps. Then run the PowerShell cmdlet or Azure CLI command to add the solution.
-1. List all the subscriptions that you have access to using the following command:
+1. List all the subscriptions to which you have access by using the following command:
```azurecli az account list --all -o table ```
- The output will resemble the following:
+ The output will resemble the following example:
```azurecli Name CloudName SubscriptionId State IsDefault
To first identify the full resource ID of your Log Analytics workspace required
Copy the value for **SubscriptionId**.
-2. Switch to the subscription hosting the Log Analytics workspace using the following command:
+1. Switch to the subscription hosting the Log Analytics workspace by using the following command:
```azurecli az account set -s <subscriptionId of the workspace> ```
-3. The following example displays the list of workspaces in your subscriptions in the default JSON format.
+1. The following example displays the list of workspaces in your subscriptions in the default JSON format:
```azurecli az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json ```
- In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
+ In the output, find the workspace name. Then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-4. Copy and paste the following JSON syntax into your file:
+1. Copy and paste the following JSON syntax into your file:
```json {
To first identify the full resource ID of your Log Analytics workspace required
} ```
-5. Save this file as containerSolution.json to a local folder.
+1. Save this file as **containerSolution.json** to a local folder.
-6. Paste the following JSON syntax into your file:
+1. Paste the following JSON syntax into your file:
```json {
To first identify the full resource ID of your Log Analytics workspace required
} ```
-7. Edit the values for **workspaceResourceId** using the value you copied in step 3, and for **workspaceRegion** copy the **Region** value after running the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-list&preserve-view=true).
+1. Edit the values for **workspaceResourceId** by using the value you copied in step 3. For **workspaceRegion**, copy the **Region** value after running the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-list&preserve-view=true).
-8. Save this file as containerSolutionParams.json to a local folder.
+1. Save this file as **containerSolutionParams.json** to a local folder.
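   For orientation, a minimal *containerSolutionParams.json* might look like the following sketch; every value is a placeholder to be replaced with the IDs gathered above:

   ```json
   {
     "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
     "contentVersion": "1.0.0.0",
     "parameters": {
       "workspaceResourceId": {
         "value": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"
       },
       "workspaceRegion": {
         "value": "<workspaceRegion>"
       }
     }
   }
   ```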
-9. You are ready to deploy this template.
+1. You're ready to deploy this template.
- To deploy with Azure PowerShell, use the following commands in the folder that contains the template:
To first identify the full resource ID of your Log Analytics workspace required
New-AzureRmResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <resource group of Log Analytics workspace> -TemplateFile .\containerSolution.json -TemplateParameterFile .\containerSolutionParams.json ```
- The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+ The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
```powershell provisioningState : Succeeded ```
- - To deploy with Azure CLI, run the following commands:
+ - To deploy with the Azure CLI, run the following commands:
```azurecli az login
To first identify the full resource ID of your Log Analytics workspace required
az deployment group create --resource-group <resource group of log analytics workspace> --name <deployment name> --template-file ./containerSolution.json --parameters @./containerSolutionParams.json ```
- The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+ The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
```azurecli provisioningState : Succeeded
To first identify the full resource ID of your Log Analytics workspace required
After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-## Install the HELM chart
+## Install the Helm chart
-In this section you install the containerized agent for Container insights. Before proceeding, you need to identify the workspace ID required for the `amalogsagent.secret.wsid` parameter, and primary key required for the `amalogsagent.secret.key` parameter. You can identify this information by performing the following steps, and then run the commands to install the agent using the HELM chart.
+In this section, you install the containerized agent for Container insights. Before you proceed, identify the workspace ID required for the `amalogsagent.secret.wsid` parameter and the primary key required for the `amalogsagent.secret.key` parameter. To identify this information, follow these steps and then run the commands to install the agent by using the Helm chart.
1. Run the following command to identify the workspace ID: `az monitor log-analytics workspace list --resource-group <resourceGroupName>`
- In the output, find the workspace name under the field **name**, and then copy the workspace ID of that Log Analytics workspace under the field **customerID**.
+ In the output, find the workspace name under the field **name**. Then copy the workspace ID of that Log Analytics workspace under the field **customerID**.
-2. Run the following command to identify the primary key for the workspace:
+1. Run the following command to identify the primary key for the workspace:
`az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName>`
- In the output, find the primary key under the field **primarySharedKey**, and then copy the value.
+ In the output, find the primary key under the field **primarySharedKey** and then copy the value.
->[!NOTE]
->The following commands are applicable only for Helm version 2. Use of the `--name` parameter is not applicable with Helm version 3.
-
->[!NOTE]
->If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster does not communicate through a proxy server, then you don't need to specify this parameter. For more information, see [Configure proxy endpoint](#configure-proxy-endpoint) later in this article.
+ >[!NOTE]
+ >The following commands are applicable only for Helm version 2. Use of the `--name` parameter isn't applicable with Helm version 3.
+
+ If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster doesn't communicate through a proxy server, you don't need to specify this parameter. For more information, see the section [Configure the proxy endpoint](#configure-the-proxy-endpoint) later in this article.
-3. Add the Azure charts repository to your local list by running the following command:
+1. Add the Azure charts repository to your local list by running the following command:
    ```
    helm repo add microsoft https://microsoft.github.io/charts/repo
    ```
-4. Install the chart by running the following command:
+1. Install the chart by running the following command:
``` $ helm install --name myrelease-1 \
In this section you install the containerized agent for Container insights. Befo
--set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers ```
-### Enable the Helm chart using the API Model
+### Enable the Helm chart by using the API model
-You can specify an addon in the AKS Engine cluster specification json file, also referred to as the API Model. In this addon, provide the base64 encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find the `WorkspaceGUID` and `WorkspaceKey` using steps 1 and 2 in the previous section.
+You can specify an add-on in the AKS Engine cluster specification JSON file, which is also referred to as the API model. In this add-on, provide the base64-encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find `WorkspaceGUID` and `WorkspaceKey` by using steps 1 and 2 in the previous section.
-Supported API definitions for the Azure Stack Hub cluster can be found in this example - [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json). Specifically, find the **addons** property in **kubernetesConfig**:
+Supported API definitions for the Azure Stack Hub cluster can be found in the example [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json). Specifically, find the **addons** property in **kubernetesConfig**:
```json "orchestratorType": "Kubernetes",
Supported API definitions for the Azure Stack Hub cluster can be found in this e
## Configure agent data collection
-Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. Refer to documentation about agent data collection settings [here](container-insights-agent-config.md).
+Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
-After you have successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
+After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
>[!NOTE]
->Ingestion latency is around five to ten minutes from agent to commit in the Azure Log Analytics workspace. Status of the cluster show the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor.
+>Ingestion latency is around 5 to 10 minutes from the agent to commit in the Log Analytics workspace. Status of the cluster shows the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor.
-## Configure proxy endpoint
+## Configure the proxy endpoint
-Starting with chart version 2.7.1, chart will support specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter. This allows it to communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can be an HTTP or HTTPS proxy server, and both anonymous and basic authentication (username/password) are supported.
+Starting with chart version 2.7.1, the chart will support specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter. In this way, it can communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can be an HTTP or HTTPS proxy server. Both anonymous and basic authentication with a username and password are supported.
-The proxy configuration value has the following syntax: `[protocol://][user:password@]proxyhost[:port]`
+The proxy configuration value has the syntax `[protocol://][user:password@]proxyhost[:port]`.
> [!NOTE]
->If your proxy server does not require authentication, you still need to specify a psuedo username/password. This can be any username or password.
+>If your proxy server doesn't require authentication, you still need to specify a pseudo username and password. It can be any username or password.
|Property| Description | |--|-|
-|Protocol | http or https |
+|protocol | HTTP or HTTPS |
|user | Optional username for proxy authentication | |password | Optional password for proxy authentication | |proxyhost | Address or FQDN of the proxy server | |port | Optional port number for the proxy server |
-For example: `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`
+An example is `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`.
-If you specify the protocol as **http**, the HTTP requests are created using SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
+Even if you specify the protocol as **http**, the requests to Azure Monitor are created by using an SSL/TLS secure connection, so your proxy server must support SSL/TLS protocols.
## Troubleshooting
-If you encounter an error while attempting to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. This script is provided to help detect and fix the issues encountered. The issues it is designed to detect and attempt correction of are the following:
+If you encounter an error while you attempt to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. This script is provided to help you detect and fix the issues you encounter. It's designed to detect and attempt correction of the following issues:
-- The specified Log Analytics workspace is valid
+- The specified Log Analytics workspace is valid.
- The Log Analytics workspace is configured with the Container insights solution. If not, configure the workspace.-- Azure Monitor Agent replicaset pods are running-- Azure Monitor Agent daemonset pods are running-- Azure Monitor Agent Health service is running-- The Log Analytics workspace ID and key configured on the containerized agent match with the workspace the Insight is configured with.-- Validate all the Linux worker nodes have `kubernetes.io/role=agent` label to schedule rs pod. If it doesn't exist, add it.-- Validate `cAdvisor secure port:10250` or `unsecure port: 10255` is opened on all nodes in the cluster.
+- The Azure Monitor Agent replicaset pods are running.
+- The Azure Monitor Agent daemonset pods are running.
+- The Azure Monitor Agent Health service is running.
+- The Log Analytics workspace ID and key configured on the containerized agent match with the workspace that the insight is configured with.
+- Validate that all the Linux worker nodes have the `kubernetes.io/role=agent` label to schedule the replica set pod. If it doesn't exist, add it (see the kubectl sketch after this list).
+- Validate that `cAdvisor secure port:10250` or `unsecure port: 10255` is opened on all nodes in the cluster.
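For the label check, a hedged kubectl sketch (the node name is a placeholder, not from the article):

```bash
# Show current node labels and confirm kubernetes.io/role is present.
kubectl get nodes --show-labels

# Add the label to a node that's missing it (node name is a placeholder).
kubectl label node <node-name> kubernetes.io/role=agent
```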
To execute with Azure PowerShell, use the following commands in the folder that contains the script:
## Next steps
-With monitoring enabled to collect health and resource utilization of your hybrid Kubernetes cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+Now that monitoring is enabled to collect health and resource utilization of your hybrid Kubernetes cluster and the workloads running on it, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
- Title: View metrics in real-time with Container insights
+ Title: View metrics in real time with Container insights
description: This article describes the real-time view of metrics without using kubectl with Container insights. Last updated 05/24/2022
-# How to view metrics in real-time
+# View metrics in real time
-Container insights Live Data (preview) feature allows you to visualize metrics about node and pod state in a cluster in real-time. It emulates direct access to the `kubectl top nodes`, `kubectl get pods --all-namespaces`, and `kubectl get nodes` commands to call, parse, and visualize the data in performance charts that are included with this Insight.
+With Container insights Live Data (preview), you can visualize metrics about node and pod state in a cluster in real time. The feature emulates direct access to the `kubectl top nodes`, `kubectl get pods --all-namespaces`, and `kubectl get nodes` commands to call, parse, and visualize the data in performance charts that are included with this insight.
This article provides a detailed overview and helps you understand how to use this feature.
>[!NOTE]
->AKS clusters enabled as [private clusters](https://azure.microsoft.com/updates/aks-private-cluster/) are not supported with this feature. This feature relies on directly accessing the Kubernetes API through a proxy server from your browser. Enabling networking security to block the Kubernetes API from this proxy will block this traffic.
+>Azure Kubernetes Service (AKS) clusters enabled as [private clusters](https://azure.microsoft.com/updates/aks-private-cluster/) aren't supported with this feature. This feature relies on directly accessing the Kubernetes API through a proxy server from your browser. Enabling networking security to block the Kubernetes API from this proxy will block this traffic.
-For help with setting up or troubleshooting the Live Data (preview) feature, review our [setup guide](container-insights-livedata-setup.md).
+For help with setting up or troubleshooting the Live Data (preview) feature, review the [setup guide](container-insights-livedata-setup.md).
-## How it Works
+## How it works
-The Live Data (preview) feature directly access the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+The Live Data (preview) feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
-This feature performs a polling operation against the metrics endpoints (including `/api/v1/nodes`, `/apis/metrics.k8s.io/v1beta1/nodes`, and `/api/v1/pods`), which is every five seconds by default. This data is cached in your browser and charted in the four performance charts included in Container insights on the **Cluster** tab by selecting **Go Live (preview)**. Each subsequent poll is charted into a rolling five-minute visualization window.
+This feature performs a polling operation against the metrics endpoints including `/api/v1/nodes`, `/apis/metrics.k8s.io/v1beta1/nodes`, and `/api/v1/pods`. The interval is every five seconds by default. This data is cached in your browser and charted in four performance charts included in Container insights. Each subsequent poll is charted into a rolling five-minute visualization window. To see the charts, select **Go Live (preview)** and then select the **Cluster** tab.
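To see the raw data behind those charts, you can query the same endpoints yourself; a sketch, assuming cluster access and a running Metrics Server:

```bash
# Illustrative only: the endpoints the Live Data feature polls.
kubectl get --raw /api/v1/nodes | head -c 500
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | head -c 500
kubectl get --raw /api/v1/pods | head -c 500
```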
-![Go Live option in the Cluster view](./media/container-insights-livedata-metrics/cluster-view-go-live-example-01.png)
+![Screenshot that shows the Go Live option in the Cluster view.](./media/container-insights-livedata-metrics/cluster-view-go-live-example-01.png)
-The polling interval is configured from the **Set interval** drop-down allowing you to set polling for new data every 1, 5, 15 and 30 seconds.
+The polling interval is configured from the **Set interval** dropdown list. Use this dropdown list to set polling for new data every 1, 5, 15, or 30 seconds.
-![Go Live drop-down polling interval](./media/container-insights-livedata-metrics/cluster-view-polling-interval-dropdown.png)
+![Screenshot that shows the Go Live dropdown polling interval.](./media/container-insights-livedata-metrics/cluster-view-polling-interval-dropdown.png)
>[!IMPORTANT]
->We recommend setting the polling interval to one second while troubleshooting an issue for a short period of time. These requests may impact the availability and throttling of the Kubernetes API on your cluster. Afterwards, reconfigure to a longer polling interval.
+>We recommend that you set the polling interval to one second while you troubleshoot an issue for a short period of time. These requests might affect the availability and throttling of the Kubernetes API on your cluster. Afterward, reconfigure to a longer polling interval.
->[!IMPORTANT]
->No data is stored permanently during operation of this feature. All information captured during this session is immediately deleted when you close your browser or navigate away from the feature. Data only remains present for visualization inside the five minute window; any metrics older than five minutes are also permanently deleted.
+These charts can't be pinned to the last Azure dashboard you viewed in live mode.
-These charts cannot be pinned to the last Azure dashboard you viewed in live mode.
+>[!IMPORTANT]
+>No data is stored permanently during operation of this feature. All information captured during this session is immediately deleted when you close your browser or navigate away from the feature. Data only remains present for visualization inside the five-minute window. Any metrics older than five minutes are also permanently deleted.
## Metrics captured
-### Node CPU utilization % / Node Memory utilization %
+The following metrics are captured and displayed in four performance charts.
+
+### Node CPU utilization % and Node memory utilization %
These two performance charts map to an equivalent of invoking `kubectl top nodes` and capturing the results of the **CPU%** and **MEMORY%** columns to the respective chart.
-![Kubectl top nodes example results](./media/container-insights-livedata-metrics/kubectl-top-nodes-example.png)
+![Screenshot that shows the kubectl top nodes example results.](./media/container-insights-livedata-metrics/kubectl-top-nodes-example.png)
-![Nodes CPU utilization percent chart](./media/container-insights-livedata-metrics/cluster-view-node-cpu-util.png)
+![Screenshot that shows the Node CPU utilization percent chart.](./media/container-insights-livedata-metrics/cluster-view-node-cpu-util.png)
-![Node Memory utilization percent chart](./media/container-insights-livedata-metrics/cluster-view-node-memory-util.png)
+![Screenshot that shows the Node memory utilization percent chart.](./media/container-insights-livedata-metrics/cluster-view-node-memory-util.png)
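You can reproduce the underlying data from a terminal; an illustrative sketch (requires the Metrics Server):

```bash
# Same columns the charts visualize.
kubectl top nodes

# Sort by CPU to spot outlier nodes (supported in recent kubectl versions).
kubectl top nodes --sort-by=cpu
```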
-The percentile calculations will function in larger clusters to help identify outlier nodes in your cluster. For example, to understand if nodes are under-utilized for scale down purposes. Utilizing the **Min** aggregation you can see which nodes have low utilization in the cluster. To further investigate, you select the **Nodes** tab and sort the grid by CPU or memory utilization.
+The percentile calculations will function in larger clusters to help identify outlier nodes in your cluster. For example, you can understand if nodes are underutilized for scale-down purposes. By using the **Min** aggregation, you can see which nodes have low utilization in the cluster. To further investigate, select the **Nodes** tab and sort the grid by CPU or memory utilization.
-This also helps you understand which nodes are being pushed to their limits and if scale-out may be required. Utilizing both the **Max** and **P95** aggregations can help you see if there are nodes in the cluster with high resource utilization. For further investigation, you would again switch to the **Nodes** tab.
+This information also helps you understand which nodes are being pushed to their limits and if scale-out might be required. By using both the **Max** and **P95** aggregations, you can see if there are nodes in the cluster with high resource utilization. For further investigation, you would again switch to the **Nodes** tab.
### Node count
This performance chart maps to an equivalent of invoking `kubectl get nodes` and mapping the **STATUS** column to a chart grouped by status types.
-![Kubectl get nodes example results](./media/container-insights-livedata-metrics/kubectl-get-nodes-example.png)
+![Screenshot that shows the kubectl get nodes example results.](./media/container-insights-livedata-metrics/kubectl-get-nodes-example.png)
-![Nodes count chart](./media/container-insights-livedata-metrics/cluster-view-node-count-01.png)
+![Screenshot that shows the Node count chart.](./media/container-insights-livedata-metrics/cluster-view-node-count-01.png)
-Nodes are reported either in a **Ready** or **Not Ready** state. They are counted (and a total count is created), and the results of these two aggregations are charted.
-For example, to understand if your nodes are falling into failed states. Utilizing the **Not Ready** aggregation you can quickly see the number of nodes in your cluster currently in the **Not Ready** state.
+Nodes are reported either in a **Ready** or **Not Ready** state and they're counted to create a total count. The results of these two aggregations are charted so that, for example, you can understand if your nodes are falling into failed states. By using the **Not Ready** aggregation, you can quickly see the number of nodes in your cluster currently in the **Not Ready** state.
### Active pod count
This performance chart maps to an equivalent of invoking `kubectl get pods --all-namespaces` and maps the **STATUS** column to the chart grouped by status types.
-![Kubectl get pods example results](./media/container-insights-livedata-metrics/kubectl-get-pods-example.png)
+![Screenshot that shows the kubectl get pods example results.](./media/container-insights-livedata-metrics/kubectl-get-pods-example.png)
-![Nodes pod count chart](./media/container-insights-livedata-metrics/cluster-view-node-pod-count.png)
+![Screenshot that shows the Active pod count chart.](./media/container-insights-livedata-metrics/cluster-view-node-pod-count.png)
>[!NOTE]
->Names of status as interpreted by `kubectl` may not exactly match in the chart.
+>Names of status as interpreted by `kubectl` might not exactly match in the chart.
## Next steps
-View [log query examples](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+View [log query examples](container-insights-log-query.md) to see predefined queries and examples to create alerts and visualizations or perform further analysis of your clusters.
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
# Configure PV monitoring with Container insights
-Starting with agent version *ciprod10052020*, Container insights integrated agent now supports monitoring PV (persistent volume) usage. With agent version *ciprod01112021*, the agent supports monitoring PV inventory, including information about the status, storage class, type, access modes, and other details.
+Starting with agent version *ciprod10052020*, the Container insights integrated agent now supports monitoring persistent volume (PV) usage. With agent version *ciprod01112021*, the agent supports monitoring PV inventory, including information about the status, storage class, type, access modes, and other details.
+
## PV metrics
Container insights automatically starts monitoring PV usage by collecting the following metrics at 60-second intervals and storing them in the **InsightsMetrics** table.
-| Metric name | Metric Dimension (tags) | Metric Description |
+| Metric name | Metric dimension (tags) | Metric description |
|--|--|-|
-| `pvUsedBytes`| `podUID`, `podName`, `pvcName`, `pvcNamespace`, `capacityBytes`, `clusterId`, `clusterName`| Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
+| `pvUsedBytes`| `podUID`, `podName`, `pvcName`, `pvcNamespace`, `capacityBytes`, `clusterId`, `clusterName`| Used space in bytes for a specific persistent volume with a claim used by a specific pod. The `capacityBytes` tag is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
-Learn more about configuring collected PV metrics [here](./container-insights-agent-config.md).
+To learn more about how to configure collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-agent-config.md).
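As an aside (not part of the article), you could inspect this metric with a workspace query from the CLI; the KQL is an assumption inferred from the table and metric names above, and the command may require the `log-analytics` CLI extension:

```bash
# Hedged sketch: query PV usage from the Log Analytics workspace.
# <workspace-guid> is a placeholder; the KQL is inferred, not quoted from the article.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "InsightsMetrics | where Name == 'pvUsedBytes' | take 10"
```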
## PV inventory
Container insights automatically starts monitoring PVs by collecting the following information at 60-second intervals and storing them in the **KubePVInventory** table.
-|Data |Data Source| Data Type| Fields|
+|Data |Data source| Data type| Fields|
|--|--|-|-|
|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | `PVName`, `PVCapacityBytes`, `PVCName`, `PVCNamespace`, `PVStatus`, `PVAccessModes`, `PVType`, `PVTypeInfo`, `PVStorageClassName`, `PVCreationTimestamp`, `TimeGenerated`, `ClusterId`, `ClusterName`, `_ResourceId` |
-## Monitor Persistent Volumes
-
-Container insights includes pre-configured charts for this usage metric and inventory information in workbook templates for every cluster. You can also enable a recommended alert for PV usage, and query these metrics in Log Analytics.
+## Monitor persistent volumes
-### Workload Details Workbook
+Container insights includes preconfigured charts for this usage metric and inventory information in workbook templates for every cluster. You can also enable a recommended alert for PV usage and query these metrics in Log Analytics.
-You can find usage charts for specific workloads in the Persistent Volume tab of the **Workload Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane, from the **View Workbooks** drop-down list in the Insights pane, or from the **Reports (preview) tab** in the Insights pane.
+### Workload Details workbook
+You can find usage charts for specific workloads on the **Persistent Volumes** tab of the **Workload Details** workbook directly from an Azure Kubernetes Service (AKS) cluster. Open the workbook by selecting **Workbooks** on the left pane, from the **View Workbooks** dropdown list in the Insights pane, or from the **Reports (preview)** tab in the Insights pane.
-### Persistent Volume Details Workbook
+### Persistent Volume Details workbook
-You can find an overview of persistent volume inventory in the **Persistent Volume Details** workbook directly from an AKS cluster by selecting Workbooks from the left-hand pane. You can also open this workbook from the **View Workbooks** drop-down list in the Insights pane or from the **Reports** tab in the Insights pane.
+You can find an overview of persistent volume inventory in the **Persistent Volume Details** workbook directly from an AKS cluster by selecting **Workbooks** from the left pane. You can also open this workbook from the **View Workbooks** dropdown list in the Insights pane or from the **Reports** tab in the Insights pane.
+### Persistent Volume Usage recommended alert
+You can enable a recommended alert to alert you when average PV usage for a pod is above 80%. To learn more about alerting, see [Metric alert rules in Container insights (preview)](./container-insights-metric-alerts.md). To learn how to override the default threshold, see the [Configure alertable metrics in ConfigMaps](./container-insights-metric-alerts.md#configure-alertable-metrics-in-configmaps) section.
-### Persistent Volume Usage Recommended Alert
-You can enable a recommended alert to alert you when average PV usage for a pod is above 80%. Learn more about alerting [here](./container-insights-metric-alerts.md) and how to override the default threshold [here](./container-insights-metric-alerts.md#configure-alertable-metrics-in-configmaps).
## Next steps
-- Learn more about collected PV metrics [here](./container-insights-agent-config.md).
+To learn more about collected PV metrics, see [Configure agent data collection for Container insights](./container-insights-agent-config.md).
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Title: Reports in Container insights
-description: Describes reports available to analyze data collected by Container insights.
+description: This article describes reports that are available to analyze data collected by Container insights.
Last updated 05/24/2022
# Reports in Container insights
-Reports in Container insights are recommended out-of-the-box [Azure workbooks](../visualize/workbooks-overview.md). This article describes the different reports that are available and how to access them.
+Reports in Container insights are recommended out-of-the-box [Azure workbooks](../visualize/workbooks-overview.md). This article describes the different reports that are available and how to access them.
-## Viewing reports
-From the **Azure Monitor** menu in the Azure portal, select **Containers**. Select **Insights** in the **Monitoring** section, choose a particular cluster, and then select the **Reports** page.
+## View reports
+On the **Azure Monitor** menu in the Azure portal, select **Containers**. In the **Monitoring** section, select **Insights**, choose a particular cluster, and then select the **Reports** tab.
-[![Reports page](media/container-insights-reports/reports-page.png)](media/container-insights-reports/reports-page.png#lightbox)
+[![Screenshot that shows the Reports page.](media/container-insights-reports/reports-page.png)](media/container-insights-reports/reports-page.png#lightbox)
## Create a custom workbook
-To create a custom workbook based on any of these workbooks, select the **View Workbooks** dropdown and then **Go to AKS Gallery** at the bottom of the dropdown. See [Azure Monitor Workbooks](../visualize/workbooks-overview.md) for more information about workbooks and using workbook templates.
+To create a custom workbook based on any of these workbooks, select the **View Workbooks** dropdown list and then select **Go to AKS Gallery** at the bottom of the list. For more information about workbooks and using workbook templates, see [Azure Monitor workbooks](../visualize/workbooks-overview.md).
-[![AKS gallery](media/container-insights-reports/aks-gallery.png)](media/container-insights-reports/aks-gallery.png#lightbox)
+[![Screenshot that shows the AKS gallery.](media/container-insights-reports/aks-gallery.png)](media/container-insights-reports/aks-gallery.png#lightbox)
-## Node workbooks
+## Node Monitoring workbooks
-- **Disk capacity**: Interactive disk usage charts for each disk presented to the node within a container by the following perspectives:
+- **Disk Capacity**: Interactive disk usage charts for each disk presented to the node within a container by the following perspectives:
  - Disk percent usage for all disks.
  - Free disk space for all disks.
To create a custom workbook based on any of these workbooks, select the **View W
- **Disk IO**: Interactive disk utilization charts for each disk presented to the node within a container by the following perspectives:
- - Disk I/O summarized across all disks by read bytes/sec, writes bytes/sec, and read and write bytes/sec trends.
+ - Disk I/O is summarized across all disks by read bytes/sec, writes bytes/sec, and read and write bytes/sec trends.
  - Eight performance charts show key performance indicators to help measure and identify disk I/O bottlenecks.
- **GPU**: Interactive GPU usage charts for each GPU-aware Kubernetes cluster node.
>[!NOTE]
-> As per the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-g)
+> In accordance with the Kubernetes [upstream announcement](https://kubernetes.io/blog/2020/12/16/third-party-device-metrics-reaches-g).
## Resource Monitoring workbooks
-- **Deployments**: Status of your deployments & Horizontal Pod Autoscaler(HPA) including custom HPA.
-
-- **Workload Details**: Interactive charts showing performance statistics of workloads for a namespace. Includes multiple tabs:
+- **Deployments**: Status of your deployments and horizontal pod autoscaler (HPA) including custom HPAs.
+- **Workload Details**: Interactive charts that show performance statistics of workloads for a namespace. Includes the following tabs:
- - Overview of CPU and Memory usage by POD.
- - POD/Container Status showing POD restart trend, container restart trend, and container status for PODs.
- - Kubernetes Events showing summary of events for the controller.
+ - **Overview** of CPU and memory usage by pod.
+ - **POD/Container Status** showing pod restart trend, container restart trend, and container status for pods.
+ - **Kubernetes Events** showing a summary of events for the controller.
- **Kubelet**: Includes two grids that show key node operating statistics:
  - Overview by node grid summarizes total operation, total errors, and successful operations by percent and trend for each node.
  - Overview by operation type summarizes for each operation the total operation, total errors, and successful operations by percent and trend.
-## Billing workbooks
-- **Data Usage**: Helps you to visualize the source of your data without having to build your own library of queries from what we share in our documentation. In this workbook, there are charts with which you can view billable data from such perspectives as:
+## Billing workbook
- - Total billable data ingested in GB by solution
- - Billable data ingested by Container logs(application logs)
- - Billable container logs data ingested per by Kubernetes namespace
- - Billable container logs data ingested segregated by Cluster name
- - Billable container log data ingested by log source entry
- - Billable diagnostic data ingested by diagnostic master node logs
+- **Data Usage**: Helps you to visualize the source of your data without having to build your own library of queries from what we share in our documentation. In this workbook, you can view charts that present billable data such as:
+
+ - Total billable data ingested in GB by solution.
+ - Billable data ingested by Container logs (application logs).
+ - Billable container logs data ingested per Kubernetes namespace.
+ - Billable container logs data ingested segregated by Cluster name.
+ - Billable container log data ingested by log source entry.
+ - Billable diagnostic data ingested by diagnostic main node logs.
## Networking workbooks
-- **NPM Configuration**: Monitoring of your Network configurations which are configured through Network policy manager (NPM).
+- **NPM Configuration**: Monitoring of your network configurations, which are configured through the network policy manager (NPM), including:
  - Summary information about overall configuration complexity.
  - Policy, rule, and set counts over time, allowing insight into the relationship between the three and adding a dimension of time to debugging a configuration.
  - Number of entries in all IPSets and each IPSet.
  - Worst and average case performance per node for adding components to your Network Configuration.
-- **Network**: Interactive network utilization charts for each node's network adapter, and a grid presents the key performance indicators to help measure the performance of your network adapters.
-
-
+- **Network**: Interactive network utilization charts for each node's network adapter. A grid presents the key performance indicators to help measure the performance of your network adapters.
## Next steps
-- See [Azure Monitor Workbooks](../visualize/workbooks-overview.md) for details about workbooks in Azure Monitor.
+For more information about workbooks in Azure Monitor, see [Azure Monitor workbooks](../visualize/workbooks-overview.md).
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
description: Describes the data available to monitor the health and performance
Previously updated : 07/21/2022 Last updated : 11/17/2022
Azure Monitor is based on a [common monitoring data platform](data-platform.md) that includes
- [Metrics](essentials/data-platform-metrics.md)
- [Logs](logs/data-platform-logs.md)
-- Traces
-- Changes. This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.
+- [Traces](app/asp-net-trace-logs.md)
+- [Changes](change/change-analysis.md)
+
+This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.
This article describes common sources of monitoring data collected by Azure Monitor in addition to the monitoring data created by Azure resources. Links are provided to detailed information on configuration required to collect this data to different locations.
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Use any of the following methods to install the Azure Monitor agent on your AKS
#### Prerequisites
- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
-- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/azure/azure-cli-extensions-overview).
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
- Azure CLI version 2.41.0 or higher is required for this feature.
#### Install metrics addon
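Taken together, the prerequisites above boil down to a few CLI calls; a sketch, not from the article (feature registration can take several minutes to propagate):

```bash
# Install the preview extension and register the feature flag (from the prerequisites above).
az extension add --name aks-preview
az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview

# Check the registration state before continuing (standard feature workflow; an assumption here).
az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state
```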
When you allow a default Azure Monitor workspace to be created when you install
## Next steps
- [See the default configuration for Prometheus metrics](prometheus-metrics-scrape-default.md).
-- [Customize Prometheus metric scraping for the cluster](prometheus-metrics-scrape-configuration.md).
+- [Customize Prometheus metric scraping for the cluster](prometheus-metrics-scrape-configuration.md).
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i
_messages.Add(new Message(name, message, isMine));
// Inform blazor the UI needs updating
- StateHasChanged();
+ InvokeAsync(StateHasChanged);
}
private async Task DisconnectAsync()
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The following table provides a detailed list of roles and responsibilities betwe
| -- | - |
| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and HCX |
| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, RiverMeadow, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, VRops, AVI |
+| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, RiverMeadow, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
## Next steps
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 09/22/2022 Last updated : 11/25/2022
In summary, the **Availability Zone** will only appear when
![Choose availability zone](./media/backup-azure-arm-restore-vms/cross-zonal-restore.png)
+>[!Note]
+>Cross-region restore jobs can't be canceled.
+
### Monitoring secondary region restore jobs
1. From the portal, go to **Recovery Services vault** > **Backup Jobs**
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Azure Backup also provides alerts via Azure Monitor that enables you to have a c
Currently, Azure Backup provides two main types of built-in alerts:
* **Security Alerts**: For scenarios, such as deletion of backup data, or disabling of soft-delete functionality for vault, security alerts (of severity Sev 0) are fired, and displayed in the Azure portal or consumed via other clients (PowerShell, CLI, and REST API). Security alerts are generated by default and can't be turned off. However, you can control the scenarios for which the notifications (for example, emails) should be fired. For more information on how to configure notifications, see [Action rules](../azure-monitor/alerts/alerts-action-rules.md).
-* **Job Failure Alerts**: For scenarios, such as backup failure and restore failure, Azure Backup provides built-in alerts via Azure Monitor (of Severity Sev 1). Unlike security alerts, you can choose to turn off Azure Monitor alerts for job failure scenarios. For example, you've already configured custom alert rules for job failures via Log Analytics, and don't need built-in alerts to be fired for every job failure. By default, alerts for job failures are turned off. For more information, see the [section on turning on alerts for these scenarios](#turning-on-azure-monitor-alerts-for-job-failure-scenarios).
+* **Job Failure Alerts**: For scenarios, such as backup failure and restore failure, Azure Backup provides built-in alerts via Azure Monitor (of Severity Sev 1). Unlike security alerts, you can choose to turn off Azure Monitor alerts for job failure scenarios. For example, you've already configured custom alert rules for job failures via Log Analytics, and don't need built-in alerts to be fired for every job failure. By default, alerts for job failures are turned on. For more information, see the [section on turning on alerts for these scenarios](#turning-on-azure-monitor-alerts-for-job-failure-scenarios).
The following table summarizes the different backup alerts currently available via Azure Monitor and the supported workload/vault types:
To inactivate/resolve an active alert, you can select the list item correspondin
## Next steps
-[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
+[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 12/02/2021 Last updated : 11/24/2022
Azure Backup allows you to encrypt your backup data using customer-managed keys
The encryption key used for encrypting backups may be different from the one used for the source. The data is protected using an AES 256 based data encryption key (DEK), which in turn, is protected using your key encryption keys (KEK). This provides you with full control over the data and the keys. To allow encryption, you must grant Recovery Services vault the permissions to access the encryption key in the Azure Key Vault. You can change the key when required.
-This article discusses about how to:
+In this article, you'll learn how to:
-- Create a Recovery Services vault-- Configure your Recovery Services vault to encrypt the backup data using customer-managed keys (CMK)-- Back up to vaults encrypted using customer-managed keys-- Restore data from backups
+> [!div class="checklist"]
+>
+> - Create a Recovery Services vault
+> - Configure the Recovery Services vault to encrypt the backup data using customer-managed keys (CMK)
+> - Back up to vaults encrypted using customer-managed keys
+> - Restore data from backups
## Before you start
- This feature allows you to encrypt **new Recovery Services vaults only**. Any vaults containing existing items registered or attempted to be registered to it aren't supported.
-- Once enabled for a Recovery Services vault, encryption using customer-managed keys can't be reverted to use platform-managed keys (default). You can change the encryption keys as per the requirements.
+- After you enable it for a Recovery Services vault, encryption using customer-managed keys can't be reverted to use platform-managed keys (default). You can change the encryption keys as per the requirements.
- This feature currently **doesn't support backup using MARS agent**, and you may not be able to use a CMK-encrypted vault for the same. The MARS agent uses a user passphrase-based encryption. This feature also doesn't support backup of classic VMs.
To assign the key and follow the steps, choose a client:
2. Select **Update** under **Encryption Settings**.
-3. In the Encryption Settings pane, select **Use your own key** and continue to specify the key using one of the following ways. **Ensure that the key you want to use is an RSA 2048 key, which is in an enabled state.**
+3. In the Encryption Settings pane, select **Use your own key** and continue to specify the key using one of the following ways.
+
+ *Ensure that you use an RSA key, which is in an enabled state.*
1. Enter the **Key URI** with which you want to encrypt the data in this Recovery Services vault. You also need to specify the subscription in which the Azure Key Vault (that contains this key) is present. This key URI can be obtained from the corresponding key in your Azure Key Vault. Ensure the key URI is copied correctly. It's recommended that you use the **Copy to clipboard** button provided with the key identifier.
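For reference, a hedged CLI equivalent of assigning the key (the command shape is an assumption; verify it against your CLI version with `--help` before relying on it):

```bash
# Hedged sketch: point the vault at a customer-managed key in Azure Key Vault.
# Resource names and the key URI are placeholders.
az backup vault encryption update \
  --resource-group MyResourceGroup \
  --name MyVault \
  --encryption-key-id "https://mykeyvault.vault.azure.net/keys/mykey/<version>"
```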
Data stored in the Recovery Services vault can be restored according to the step
#### Restore VM/disk
-1. When recovering disk / VM from a "Snapshot" recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks.
+1. When you recover disk / VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks.
1. When restoring disk / VM from a recovery point with Recovery Type as "Vault", you can choose to have the restored data encrypted using a DES, specified at the time of restore. Alternatively, you can choose to continue to restore the data without specifying a DES, in which case it will be encrypted using Microsoft-managed keys.
az backup restore restore-disks --container-name MyContainer --disk-encryption-s
#### Restore files
-When performing a file restore, the restored data will be encrypted with the key used for encrypting the target location.
+When you perform a file restore, the restored data will be encrypted with the key used for encrypting the target location.
### Restore SAP HANA/SQL databases in Azure VMs
-When restoring from a backed-up SAP HANA/SQL database running in an Azure VM, the restored data will be encrypted using the encryption key used at the target storage location. It may be a customer-managed key or a platform-managed key used for encrypting the disks of the VM.
+When you restore from a backed-up SAP HANA/SQL database running in an Azure VM, the restored data will be encrypted using the encryption key used at the target storage location. It may be a customer-managed key or a platform-managed key used for encrypting the disks of the VM.
## Additional topics
When your subscription is allow-listed, the **Backup Encryption** tab will displ
1. Specify the user assigned managed identity to manage encryption with customer-managed keys. Click **Select** to browse and select the required identity.
-1. Once done, proceed to add Tags (optional) and continue creating the vault.
+1. Proceed to add Tags (optional) and continue creating the vault.
### Enable auto-rotation of encryption keys
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om
**Cost calculations**
- One participant on the VoIP leg (Alice) from Omnichannel for Customer Service client application x 10 minutes x $0.004 per participant leg per minute = $0.04
-- One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.
-- Omnichannel for Customer Service bot does not introduce additional ACS charges.
+- One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04
+- Omnichannel for Customer Service bot doesn't introduce extra ACS charges.
**Total cost for the call**: $0.04 + $0.04 = $0.08
Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's
- Two participants on the VoIP leg (Alice and Bob) from App to Communication Services servers x 20 minutes x $0.004 per participant leg per minute = $0.16
- One participant on the PSTN outbound leg (Charlie) from Communication Services servers to US Telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13
-Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+Note: USA mixed rate to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $0.29
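A quick sanity check of the per-leg arithmetic above (illustrative only):

```bash
# 2 VoIP participants x 20 min x $0.004/min, 1 PSTN participant x 10 min x $0.013/min.
echo "2*20*0.004" | bc   # -> .160
echo "1*10*0.013" | bc   # -> .130
echo "0.16+0.13" | bc    # -> .29
```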
Note that the service application that uses Call Automation SDK isn't charged to
**Total cost for the call**: $0.22 + $0.02 = $0.24
+### Pricing example: Inbound PSTN call redirected to another external telephone number using Call Automation SDK
+
+Vlad dials your toll-free number (that you acquired from Communication Services) from his mobile phone. Your service application (built with Call Automation SDK) receives the call, and invokes the logic to redirect the call to a mobile phone number of Abraham using ACS direct routing. Abraham picks up the call and they talk with Vlad for 5 minutes.
+
+- Vlad was on the call as a PSTN endpoint for a total of 5 minutes.
+- Your service application was on the call for the entire 5 minutes of the call.
+- Abraham was on the call as a direct routing endpoint for a total of 5 minutes.
+
+**Cost calculations**
+
+- Inbound PSTN leg by Vlad to toll-free number acquired from Communication Services x 5 minutes x $0.0220 per minute for receiving the call = $0.11
+- One participant on the ACS direct routing outbound leg (Abraham) from the service application to an SBC x 5 minutes x $0.004 per participant leg per minute = $0.02
+
+The service application that uses Call Automation SDK isn't charged to be part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation.
## Call Recording
Azure Communication Services allows developers to record PSTN, WebRTC, Conference, or SIP calls. Call Recording supports mixed video MP4, mixed audio MP3/WAV, and unmixed audio WAV output formats. Call Recording SDKs are available for Java and C#. To learn more, view Call Recording [concepts](./voice-video-calling/call-recording.md) and [quickstart](../quickstarts/voice-video-calling/get-started-call-recording.md).
databox-online Azure Stack Edge Gpu Enable Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md
For more information, see the detailed steps in [Create a Log Analytics workspac
Take the following steps to enable Container Insights on your workspace.
-1. Follow the detailed steps in the [How to add the Azure Monitor Containers solution](../azure-monitor/containers/container-insights-hybrid-setup.md#how-to-add-the-azure-monitor-containers-solution). Use the following template file `containerSolution.json`:
+1. Follow the detailed steps in [Add the Azure Monitor Containers solution](../azure-monitor/containers/container-insights-hybrid-setup.md#add-the-azure-monitor-containers-solution). Use the following template file `containerSolution.json`:
```yml
{
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Defender for Cloud includes vulnerability scanners for your machines, containers
Learn more about using these scanners:
-- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
+- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md)
- [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md) - [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md) - [Scan your ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
This glossary provides a brief description of important terms and concepts for t
## T
| Term | Description | Learn more |
|--|--|--|
-|**TVM**|Threat and Vulnerability Management, a built-in module in Microsoft Defender for Endpoint that can discover vulnerabilities and misconfigurations in near real time and prioritize vulnerabilities based on the threat landscape and detections in your organization.|[Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
+|**TVM**|Threat and Vulnerability Management, a built-in module in Microsoft Defender for Endpoint that can discover vulnerabilities and misconfigurations in near real time and prioritize vulnerabilities based on the threat landscape and detections in your organization.|[Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md)
## Z
| Term | Description | Learn more |
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quic
Defender for Cloud includes vulnerability assessment solutions for your virtual machines, container registries, and SQL servers as part of the enhanced security features. Some of the scanners are powered by Qualys. But you don't need a Qualys license, or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.
-Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. Learn more, [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you'll have access to the vulnerability findings from **Microsoft threat and vulnerability management**. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. Learn more, [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you'll have access to the vulnerability findings from **Microsoft threat and vulnerability management**. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
Review the findings from these vulnerability scanners and respond to them all from within Defender for Cloud. This broad approach brings Defender for Cloud closer to being the single pane of glass for all of your cloud security efforts.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table summarizes what's included in each plan.
|:|:|::|::|
| **Unified view** | The Defender for Cloud portal displays Defender for Endpoint alerts. You can then drill down into Defender for Endpoint portal, with additional information such as the alert process tree, the incident graph, and a detailed machine timeline showing historical data up to six months.| :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Automatic MDE provisioning** | Automatic provisioning of Defender for Endpoint on Azure, AWS, and GCP resources. | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| **Microsoft threat and vulnerability management** | Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without needing other agents or periodic scans. [Learn more](deploy-vulnerability-assessment-tvm.md). | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Microsoft Defender Vulnerability Management** | Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without other agents or periodic scans. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Threat detection for OS-level (Agent-based)** | Defender for Servers and Microsoft Defender for Endpoint (MDE) detect threats at the OS level, including VM behavioral detections and **Fileless attack detection**, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Threat detection for network-level (Agentless)** | Defender for Servers detects threats directed at the control plane on the network, including network-based detections for Azure virtual machines. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Microsoft Defender Vulnerability Management Add-on** | See a deeper analysis of the security posture of your protected servers, including risks related to browser extensions, network shares, and digital certificates. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription and also compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Integrated vulnerability assessment powered by Qualys** | Use the Qualys scanner for real-time identification of vulnerabilities in Azure and hybrid VMs. Everything's handled by Defender for Cloud. You don't need a Qualys license or even a Qualys account. [Learn more](deploy-vulnerability-assessment-vm.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Log Analytics 500 MB free data ingestion** | Defender for Cloud leverages Azure Monitor to collect data from Azure VMs and servers, using the Log Analytics agent. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
The following table summarizes what's included in each plan.
| **Just-in-time VM access for management ports** | Defender for Cloud provides [JIT access](just-in-time-access-overview.md), locking down machine ports to reduce the machine's attack surface.| | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Adaptive network hardening** | Filtering traffic to and from resources with network security groups (NSG) improves your network security posture. You can further improve security by [hardening the NSG rules](adaptive-network-hardening.md) based on actual traffic patterns. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Docker host hardening** | Defender for Cloud assesses containers hosted on Linux machines running Docker containers, and compares them with the Center for Internet Security (CIS) Docker Benchmark. [Learn more](harden-docker-hosts.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-<!--
- [Learn more](fileless-attack-detection.md).
-| Future – TVM P2 | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Future – disk scanning insights | | :::image type="icon" source="./media/icons/yes-icon.png"::: | -->
> [!NOTE]
> If you only enable Defender for Cloud at the workspace level, Defender for Cloud won't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources.
defender-for-cloud Deploy Vulnerability Assessment Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md
+
+ Title: Use Microsoft Defender for Endpoint's Defender Vulnerability Management with Microsoft Defender for Cloud
+description: Enable, deploy, and use Microsoft Defender for Endpoint's Defender Vulnerability Management with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines
++ Last updated : 11/24/2022++
+# Investigate weaknesses with Microsoft Defender for Endpoint's Defender Vulnerability Management
+
+[Microsoft's Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) is a built-in module in Microsoft Defender for Endpoint that can:
+
+- Discover vulnerabilities and misconfigurations in near real time
+- Prioritize vulnerabilities based on the threat landscape and detections in your organization
+
+If you've enabled the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), you'll automatically get the Defender Vulnerability Management findings without the need for more agents.
+
+As it's a built-in module for Microsoft Defender for Endpoint, **Defender Vulnerability Management** doesn't require periodic scans.
+
+For a quick overview of Defender Vulnerability Management, watch this video:
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4Y1FX]
+
+> [!TIP]
+> As well as alerting you to vulnerabilities, Defender Vulnerability Management also provides functionality for Defender for Cloud's asset inventory tool. Learn more in [Software inventory](asset-inventory.md#access-a-software-inventory).
+
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Microsoft Defender for Servers](episode-five.md)
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|General availability (GA)|
+|Machine types:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines <br> [Supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans)|
+|Prerequisites:|Enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
+|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
+
+## Onboarding your machines to Defender Vulnerability Management
+
+The integration between Microsoft Defender for Endpoint and Microsoft Defender for Cloud takes place in the background, so it doesn't involve any changes at the endpoint level.
+
+- **To manually onboard one or more machines** to Defender Vulnerability Management, use the security recommendation "[Machines should have a vulnerability assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)":
+
+ :::image type="content" source="media/deploy-vulnerability-assessment-defender-vulnerability-management/deploy-vulnerability-assessment-solutions.png" alt-text="Selecting a vulnerability assessment solution from the recommendation.":::
+
+- **To automatically find and view the vulnerabilities** on existing and new machines without the need to manually remediate the preceding recommendation, see [Automatically configure vulnerability assessment for your machines](auto-deploy-vulnerability-assessment.md).
+
+- **To onboard via the REST API**, run PUT/DELETE using this URL: `https://management.azure.com/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/.../providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm?api-version=2015-06-01-preview`. A hedged C# sketch of the PUT call follows this list.
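For illustration only, here's a minimal C# sketch of that PUT call. It assumes the `Azure.Identity` and `System.Net.Http` packages; the `<subscriptionId>`, `<resourceGroup>`, and `<vmName>` segments are placeholders you'd replace with your own values. This is one way to call the endpoint, not the article's required method:

```csharp
// Hedged sketch (not part of the article's steps): onboard one VM to Defender
// Vulnerability Management by PUTting the serverVulnerabilityAssessments resource.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class OnboardMdeVulnerabilityManagement
{
    static async Task Main()
    {
        // Acquire an ARM token; DefaultAzureCredential picks up CLI/environment credentials.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }), default);

        // Placeholder resource ID - replace <subscriptionId>, <resourceGroup>, <vmName>.
        string vmId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>"
                    + "/providers/Microsoft.Compute/virtualMachines/<vmName>";
        string url = $"https://management.azure.com{vmId}"
                   + "/providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm"
                   + "?api-version=2015-06-01-preview";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        // PUT creates the assessment; DELETE on the same URL removes it.
        HttpResponseMessage response =
            await http.PutAsync(url, new StringContent("{}", Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```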
++
+The findings for **all** vulnerability assessment tools are in the Defender for Cloud recommendation **Vulnerabilities in your virtual machines should be remediated**. Learn how to [view and remediate findings from vulnerability assessment solutions on your VMs](remediate-vulnerability-findings-vm.md).
+
+## Learn more
+
+You can check out the following blogs:
+
+- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
+- [Microsoft Defender for Cloud Server Monitoring Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-server-monitoring-dashboard/ba-p/2869658)
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Remediate the findings from your vulnerability assessment solution](remediate-vulnerability-findings-vm.md)
+
+Defender for Cloud also offers vulnerability analysis for your:
+
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Defender for Cloud includes vulnerability scanning for your machines at no extra
>
> Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
-If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.
+If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.
## Availability
defender-for-cloud Enable Vulnerability Assessment Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md
Previously updated : 09/21/2022 Last updated : 11/14/2022 # Find vulnerabilities and collect software inventory with agentless scanning (Preview)
Agentless vulnerability assessment uses the Defender Vulnerability Management en
## Compatibility with agent-based vulnerability assessment solutions
-Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender for Endpoint (MDE)](deploy-vulnerability-assessment-tvm.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
+Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender for Endpoint (MDE)](deploy-vulnerability-assessment-defender-vulnerability-management.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
When you enable agentless vulnerability assessment:
To enable agentless vulnerability assessment on Azure:
1. Select the relevant subscription.
1. For either the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or Defender for Servers P2 plan, select **Settings**.
- :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings.png" alt-text="Screenshot of link for the settings of the Defender plans for Azure subscriptions." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings.png":::
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png" alt-text="Screenshot of link for the settings of the Defender plans for Azure accounts." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png":::
The agentless scanning setting is shared by both Defender Cloud Security Posture Management (CSPM) and Defender for Servers P2. When you enable agentless scanning on either plan, the setting is enabled for both plans.

1. In the settings pane, turn on **Agentless scanning for machines**.
- :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-off.png" alt-text="Screenshot of the agentless scanning status." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scanning-off.png":::
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png" alt-text="Screenshot of settings and monitoring screen to turn on agentless scanning." lightbox="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png":::
1. Select **Save**.
To enable agentless vulnerability assessment on Azure:
1. In the settings pane, turn on **Agentless scanning for machines**.
- :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-off-aws.png" alt-text="Screenshot of the agentless scanning status for AWS accounts.":::
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scan-on-aws.png" alt-text="Screenshot of the agentless scanning status for AWS accounts." lightbox="media/enable-vulnerability-assessment-agentless/agentless-scan-on-aws.png":::
1. Select **Save and Next: Configure Access**.
In this article, you learned about how to scan your machines for software vulner
Learn more about:
-- [Vulnerability assessment with Microsoft Defender for Endpoint](deploy-vulnerability-assessment-tvm.md)
+- [Vulnerability assessment with Microsoft Defender for Endpoint](deploy-vulnerability-assessment-defender-vulnerability-management.md)
- [Vulnerability assessment with Qualys](deploy-vulnerability-assessment-vm.md)
- [Vulnerability assessment with BYOL solutions](deploy-vulnerability-assessment-byol-vm.md)
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Last updated 06/28/2022
# Microsoft Defender for Servers
-**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with TVM. Aviv explains how this new integration with TVM works, the advantages of this integration, which includes software inventory and easy experience to onboard. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multicloud connector for AWS.
+**Episode description**: In this episode of Defender for Cloud in the Field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with Microsoft Defender Vulnerability Management (formerly TVM). Aviv explains how this integration works and its advantages, which include software inventory and an easy onboarding experience. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multicloud connector for AWS.
<br> <br>
Last updated 06/28/2022
- [1:22](/shows/mdc-in-the-field/defender-for-containers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers
-- [5:50](/shows/mdc-in-the-field/defender-for-containers#time=05m50s) - Migration path from Qualys VA to TVM
+- [5:50](/shows/mdc-in-the-field/defender-for-containers#time=05m50s) - Migration path from Qualys VA to Defender Vulnerability Management
-- [7:12](/shows/mdc-in-the-field/defender-for-containers#time=07m12s) - TVM capabilities in Defender for Servers
+- [7:12](/shows/mdc-in-the-field/defender-for-containers#time=07m12s) - Defender Vulnerability Management capabilities in Defender for Servers
- [8:38](/shows/mdc-in-the-field/defender-for-containers#time=08m38s) - Threat detections for Defender for Servers
- [9:52](/shows/mdc-in-the-field/defender-for-containers#time=09m52s) - Defender for Servers in AWS
-- [12:23](/shows/mdc-in-the-field/defender-for-containers#time=12m23s) - Onboard process for TVM in an on-premises scenario
+- [12:23](/shows/mdc-in-the-field/defender-for-containers#time=12m23s) - Onboard process for Defender Vulnerability Management in an on-premises scenario
- [13:20](/shows/mdc-in-the-field/defender-for-containers#time=13m20s) - Demonstration

## Recommended resources
-Learn how to [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+Learn how to [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
+
+ Title: Latest updates in the regulatory compliance dashboard | Defender for Cloud in the Field
+
+description: Learn about the latest updates in the regulatory compliance dashboard
+ Last updated : 11/24/2022++
+# Latest updates in the regulatory compliance dashboard | Defender for Cloud in the Field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Ronit Reger joins Yuri Diogenes to talk about the latest updates in the regulatory compliance dashboard that were released at Ignite. Ronit talks about the new attestation capability and the new Microsoft cloud security benchmark. Ronit also demonstrates how to create manual attestations in the regulatory compliance dashboard.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=b4aff57d-737e-4bf7-8748-4220131b730c" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [00:00](/shows/mdc-in-the-field/update-regulatory#time=00m00s) - Intro
+
+- [01:08](/shows/mdc-in-the-field/update-regulatory#time=01m08s) - What's new in the regulatory compliance dashboard
+
+- [04:45](/shows/mdc-in-the-field/update-regulatory#time=04m45s) - The new Microsoft cloud security benchmark
+
+- [08:48](/shows/mdc-in-the-field/update-regulatory#time=08m48s) - Demonstration
+
+- [13:49](/shows/mdc-in-the-field/update-regulatory#time=13m49s) - Manual attestation
++
+## Recommended resources
+ - [Learn more](/azure/defender-for-cloud/regulatory-compliance-dashboard) about improving your regulatory compliance.
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Title: Cloud security explorer and attack path analysis | Defender for Cloud in
description: Learn about cloud security explorer and attack path analysis. Previously updated : 11/08/2022 Last updated : 11/24/2022 # Cloud security explorer and attack path analysis | Defender for Cloud in the Field
Last updated 11/08/2022
## Next steps

> [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
You can learn more by watching this video from the Defender for Cloud in the Fie
|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
|Pricing:|You'll need a Microsoft Purview account to create the data sensitivity classifications and run the scans. The integration between Purview and Microsoft Defender for Cloud doesn't incur extra costs, but the data is shown in Microsoft Defender for Cloud only for enabled plans.|
|Required roles and permissions:|**Security admin** and **Security contributor**|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
--
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Regions: East US, East US 2, West US 2, West Central US, South Central US, Canada Central, Brazil South, North Europe, West Europe, UK South, Southeast Asia, Central India, Australia East) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
## The triage problem and Defender for Cloud's solution

Security teams regularly face the challenge of how to triage incoming issues.
Microsoft Purview's data sensitivity classifications and data sensitivity labels
## Discover resources with sensitive data

To provide information about discovered sensitive data and help ensure you have that information when you need it, Defender for Cloud displays information from Microsoft Purview in multiple locations.
-> [!TIP]
-> If a resource is scanned by multiple Microsoft Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
+Purview scans produce insights into the nature of the sensitive information so you can take action to protect that information:
+- If a resource is scanned by multiple Microsoft Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
+- Classifications and labels are shown for resources that were scanned within the last 3 months.
### Alerts and recommendations pages

When you're reviewing a recommendation or investigating an alert, the information about any potentially sensitive data involved is included on the page. You can also filter the list of alerts by **Data sensitivity classifications** and **Data sensitivity labels** to help you focus on the alerts that relate to sensitive data.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
For more information about migrating servers from Defender for Endpoint to Defen
- **Advanced post-breach detection sensors**. Defender for Endpoint's sensors collect a vast array of behavioral signals from your machines.
-- **Vulnerability assessment from the Microsoft threat and vulnerability management solution**. With Microsoft Defender for Endpoint installed, Defender for Cloud can show vulnerabilities discovered by the threat and vulnerability management module and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+- **Vulnerability assessment from the Microsoft threat and vulnerability management solution**. With Microsoft Defender for Endpoint installed, Defender for Cloud can show vulnerabilities discovered by the threat and vulnerability management module and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
This module also brings the software inventory features described in [Access a software inventory](asset-inventory.md#access-a-software-inventory) and can be automatically enabled for supported machines with [the auto deploy settings](auto-deploy-vulnerability-assessment.md).
defender-for-cloud Powershell Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-onboarding.md
Title: Onboard to Microsoft Defender for Cloud with PowerShell description: This document walks you through the process of enabling Microsoft Defender for Cloud with PowerShell cmdlets. Previously updated : 11/09/2021 Last updated : 11/24/2022 # Quickstart: Automate onboarding of Microsoft Defender for Cloud using PowerShell
These steps should be performed before you run the Defender for Cloud cmdlets:
1. Configure a Log Analytics workspace to which the agents will report. You must have already created a Log Analytics workspace for the subscription's VMs to report to. You can define multiple subscriptions to report to the same workspace. If not defined, the default workspace will be used.

   ```powershell
- Set-AzSecurityWorkspaceSetting -Name "default" -Scope "/subscriptions/d07c0080-170c-4c24-861d-9c817742786c" -WorkspaceId"/subscriptions/d07c0080-170c-4c24-861d-9c817742786c/resourceGroups/myRg/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
+ Set-AzSecurityWorkspaceSetting -Name "default" -Scope "/subscriptions/d07c0080-170c-4c24-861d-9c817742786c" -WorkspaceId "/subscriptions/d07c0080-170c-4c24-861d-9c817742786c/resourceGroups/myRg/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
   ```

1. Auto-provision installation of the Log Analytics agent on your Azure VMs:
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
As part of this update, vulnerabilities that have medium and low severities are
:::image type="content" source="media/release-notes/disable-rule.png" alt-text="Screenshot of the disable rule screen.":::
-Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.md)
+Learn more about [vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md)
### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
Use the security recommendation "[A vulnerability assessment solution should be
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
-Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
### Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)
Use the security recommendation "[A vulnerability assessment solution should be
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
-Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
### Vulnerability assessment solutions can now be auto enabled (in preview)
If the [integration with Microsoft Defender for Endpoint](integration-defender-f
- (**NEW**) The Microsoft threat and vulnerability management module of Microsoft Defender for Endpoint (see [the release note](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solution-in-preview))
- The integrated Qualys agent

Your chosen solution will be automatically enabled on supported machines.
To use these new features, you'll need to enable the [integration with Microsoft
For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).

### Changed prefix of some alert types from "ARM_" to "VM_"
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 11/13/2022 Last updated : 11/24/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [The ability to create custom assessments in AWS and GCP (Preview) is set to be deprecated](#the-ability-to-create-custom-assessments-in-aws-and-gcp-preview-is-set-to-be-deprecated) | November 2022 |
| [Recommendation to configure dead-letter queues for Lambda functions to be deprecated](#recommendation-to-configure-dead-letter-queues-for-lambda-functions-to-be-deprecated) | November 2022 |
| [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-to-be-deprecated) | December 2022 |
+| [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-is-set-to-be-deprecated) | December 2022 |
### The ability to create custom assessments in AWS and GCP (Preview) is set to be deprecated
The recommendation [`Lambda functions should have a dead-letter queue configured
| Recommendation | Description | Severity |
|--|--|--|
-| Lambda functions should have a dead-letter queue configured | This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function is not configured with a dead-letter queue. As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. A dead-letter queue acts the same as an on-failure destination. It is used when an event fails all processing attempts or expires without being processed. A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. From a security perspective, it is important to understand why your function failed and to ensure that your function does not drop data or compromise data security as a result. For example, if your function cannot communicate to an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. | Medium |
+| Lambda functions should have a dead-letter queue configured | This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function isn't configured with a dead-letter queue. As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. A dead-letter queue acts the same as an on-failure destination. It's used when an event fails all processing attempts or expires without being processed. A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. From a security perspective, it's important to understand why your function failed and to ensure that your function doesn't drop data or compromise data security as a result. For example, if your function can't communicate to an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. A hedged configuration sketch follows this table. | Medium |
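To make the remediation concrete, here's a hedged sketch of attaching a dead-letter queue to a Lambda function with the AWS SDK for .NET. The function name and SQS queue ARN are placeholders, and this isn't the only way to configure one; the AWS console or infrastructure-as-code templates work equally well:

```csharp
// Illustrative sketch: point a Lambda function's dead-letter config at an SQS queue.
using System;
using System.Threading.Tasks;
using Amazon.Lambda;
using Amazon.Lambda.Model;

class ConfigureDeadLetterQueue
{
    static async Task Main()
    {
        // Uses default credentials and region from the environment.
        var lambda = new AmazonLambdaClient();

        var response = await lambda.UpdateFunctionConfigurationAsync(
            new UpdateFunctionConfigurationRequest
            {
                FunctionName = "my-function", // placeholder
                DeadLetterConfig = new DeadLetterConfig
                {
                    // Placeholder ARN; an SNS topic ARN also works here.
                    TargetArn = "arn:aws:sqs:us-east-1:111122223333:my-function-dlq"
                }
            });

        Console.WriteLine($"Updated {response.FunctionName} at {response.LastModified}");
    }
}
```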
### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated
The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P
|--|--|--|
| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
+### The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated
+
+**Estimated date for change: December 2022**
+
+The policy [`Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) is set to be deprecated.
+
+The Defender for SQL vulnerability assessment email report will still be available, and existing email configurations won't change after the policy is deprecated.
+
## Next steps

For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
In this article, you create two .NET console apps:
[!INCLUDE [iot-hub-include-find-custom-connection-string](../../includes/iot-hub-include-find-custom-connection-string.md)]
-## Create a device app with a direct method
+## Create a device app that updates reported properties
In this section, you create a .NET console app that connects to your hub as **myDeviceId**, and then updates its reported properties to confirm that it's connected using a cellular network.
In this section, you create a .NET console app that connects to your hub as **my
1. Add the following `using` statements at the top of the **Program.cs** file:
- ```csharp
- using Microsoft.Azure.Devices.Client;
- using Microsoft.Azure.Devices.Shared;
- using Newtonsoft.Json;
- ```
+ ```csharp
+ using Microsoft.Azure.Devices.Client;
+ using Microsoft.Azure.Devices.Shared;
+ using Newtonsoft.Json;
+ ```
1. Add the following fields to the **Program** class. Replace `{device connection string}` with the device connection string you saw when you registered a device in the IoT Hub:
- ```csharp
- static string DeviceConnectionString = "HostName=<yourIotHubName>.azure-devices.net;DeviceId=<yourIotDeviceName>;SharedAccessKey=<yourIotDeviceAccessKey>";
- static DeviceClient Client = null;
- ```
+ ```csharp
+ static string DeviceConnectionString = "HostName=<yourIotHubName>.azure-devices.net;DeviceId=<yourIotDeviceName>;SharedAccessKey=<yourIotDeviceAccessKey>";
+ static DeviceClient Client = null;
+ ```
1. Add the following method to the **Program** class:
- ```csharp
- public static async void InitClient()
- {
- try
- {
- Console.WriteLine("Connecting to hub");
- Client = DeviceClient.CreateFromConnectionString(DeviceConnectionString,
- TransportType.Mqtt);
- Console.WriteLine("Retrieving twin");
- await Client.GetTwinAsync();
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- }
- ```
-
- The **Client** object exposes all the methods you require to interact with device twins from the device. The code shown above initializes the **Client** object, and then retrieves the device twin for **myDeviceId**.
+ ```csharp
+ public static async void InitClient()
+ {
+ try
+ {
+ Console.WriteLine("Connecting to hub");
+ Client = DeviceClient.CreateFromConnectionString(DeviceConnectionString,
+ TransportType.Mqtt);
+ Console.WriteLine("Retrieving twin");
+ await Client.GetTwinAsync();
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error in sample: {0}", ex.Message);
+ }
+ }
+ ```
+
+ The **Client** object exposes all the methods you require to interact with device twins from the device. The code shown above initializes the **Client** object, and then retrieves the device twin for **myDeviceId**.
1. Add the following method to the **Program** class:
- ```csharp
- public static async void ReportConnectivity()
- {
- try
- {
- Console.WriteLine("Sending connectivity data as reported property");
-
- TwinCollection reportedProperties, connectivity;
- reportedProperties = new TwinCollection();
- connectivity = new TwinCollection();
- connectivity["type"] = "cellular";
- reportedProperties["connectivity"] = connectivity;
- await Client.UpdateReportedPropertiesAsync(reportedProperties);
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- }
- ```
+ ```csharp
+ public static async void ReportConnectivity()
+ {
+ try
+ {
+ Console.WriteLine("Sending connectivity data as reported property");
+
+ TwinCollection reportedProperties, connectivity;
+ reportedProperties = new TwinCollection();
+ connectivity = new TwinCollection();
+ connectivity["type"] = "cellular";
+ reportedProperties["connectivity"] = connectivity;
+ await Client.UpdateReportedPropertiesAsync(reportedProperties);
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error in sample: {0}", ex.Message);
+ }
+ }
+ ```
   The code above updates the reported property of **myDeviceId** with the connectivity information.

1. Finally, add the following lines to the **Main** method:
- ```csharp
- try
- {
- InitClient();
- ReportConnectivity();
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- Console.WriteLine("Press Enter to exit.");
- Console.ReadLine();
- ```
+ ```csharp
+ try
+ {
+ InitClient();
+ ReportConnectivity();
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error in sample: {0}", ex.Message);
+ }
+ Console.WriteLine("Press Enter to exit.");
+ Console.ReadLine();
+ ```
1. In Solution Explorer, right-click on your solution, and select **Set StartUp Projects**.
In this section, you create a .NET console app that connects to your hub as **my
1. Run this app by right-clicking the **ReportConnectivity** project and selecting **Debug**, then **Start new instance**. You should see the app getting the twin information, and then sending connectivity as a ***reported property***.
- ![Run device app to report connectivity](./media/iot-hub-csharp-csharp-twin-getstarted/rundeviceapp.png)
+ ![Run device app to report connectivity](./media/iot-hub-csharp-csharp-twin-getstarted/rundeviceapp.png)
After the device reports its connectivity information, it should appear in both queries.

1. Right-click the **AddTagsAndQuery** project and select **Debug** > **Start new instance** to run the queries again. This time, **myDeviceId** should appear in both query results.
- ![Device connectivity reported successfully](./media/iot-hub-csharp-csharp-twin-getstarted/tagappsuccess.png)
+ ![Device connectivity reported successfully](./media/iot-hub-csharp-csharp-twin-getstarted/tagappsuccess.png)
-## Create a service app to trigger a reboot
+## Create a service app that updates desired properties and queries twins
In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
In this section, you create a .NET console app, using C#, that adds location met
1. Select **Browse** and search for and select **Microsoft.Azure.Devices**. Select **Install**.
- ![NuGet Package Manager window](./media/iot-hub-csharp-csharp-twin-getstarted/nuget-package-addtagsandquery-app.png)
+ ![NuGet Package Manager window](./media/iot-hub-csharp-csharp-twin-getstarted/nuget-package-addtagsandquery-app.png)
   This step downloads, installs, and adds a reference to the [Azure IoT service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices/) NuGet package and its dependencies.

1. Add the following `using` statements at the top of the **Program.cs** file:
- ```csharp
- using Microsoft.Azure.Devices;
- ```
+ ```csharp
+ using Microsoft.Azure.Devices;
+ ```
1. Add the following fields to the **Program** class. Replace `{iot hub connection string}` with the IoT Hub connection string that you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
- ```csharp
- static RegistryManager registryManager;
- static string connectionString = "{iot hub connection string}";
- ```
+ ```csharp
+ static RegistryManager registryManager;
+ static string connectionString = "{iot hub connection string}";
+ ```
1. Add the following method to the **Program** class:
- ```csharp
- public static async Task AddTagsAndQuery()
- {
- var twin = await registryManager.GetTwinAsync("myDeviceId");
- var patch =
- @"{
- tags: {
- location: {
- region: 'US',
- plant: 'Redmond43'
- }
- }
- }";
- await registryManager.UpdateTwinAsync(twin.DeviceId, patch, twin.ETag);
-
- var query = registryManager.CreateQuery(
- "SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'", 100);
- var twinsInRedmond43 = await query.GetNextAsTwinAsync();
- Console.WriteLine("Devices in Redmond43: {0}",
- string.Join(", ", twinsInRedmond43.Select(t => t.DeviceId)));
-
- query = registryManager.CreateQuery("SELECT * FROM devices WHERE tags.location.plant = 'Redmond43' AND properties.reported.connectivity.type = 'cellular'", 100);
- var twinsInRedmond43UsingCellular = await query.GetNextAsTwinAsync();
- Console.WriteLine("Devices in Redmond43 using cellular network: {0}",
- string.Join(", ", twinsInRedmond43UsingCellular.Select(t => t.DeviceId)));
- }
- ```
-
- The **RegistryManager** class exposes all the methods required to interact with device twins from the service. The previous code first initializes the **registryManager** object, then retrieves the device twin for **myDeviceId**, and finally updates its tags with the desired location information.
-
- After updating, it executes two queries: the first selects only the device twins of devices located in the **Redmond43** plant, and the second refines the query to select only the devices that are also connected through cellular network.
-
- The previous code, when it creates the **query** object, specifies a maximum number of returned documents. The **query** object contains a **HasMoreResults** boolean property that you can use to invoke the **GetNextAsTwinAsync** methods multiple times to retrieve all results. A method called **GetNextAsJson** is available for results that are not device twins, for example, results of aggregation queries.
+ ```csharp
+ public static async Task AddTagsAndQuery()
+ {
+ var twin = await registryManager.GetTwinAsync("myDeviceId");
+ var patch =
+ @"{
+ tags: {
+ location: {
+ region: 'US',
+ plant: 'Redmond43'
+ }
+ }
+ }";
+ await registryManager.UpdateTwinAsync(twin.DeviceId, patch, twin.ETag);
+
+ var query = registryManager.CreateQuery(
+ "SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'", 100);
+ var twinsInRedmond43 = await query.GetNextAsTwinAsync();
+ Console.WriteLine("Devices in Redmond43: {0}",
+ string.Join(", ", twinsInRedmond43.Select(t => t.DeviceId)));
+
+ query = registryManager.CreateQuery("SELECT * FROM devices WHERE tags.location.plant = 'Redmond43' AND properties.reported.connectivity.type = 'cellular'", 100);
+ var twinsInRedmond43UsingCellular = await query.GetNextAsTwinAsync();
+ Console.WriteLine("Devices in Redmond43 using cellular network: {0}",
+ string.Join(", ", twinsInRedmond43UsingCellular.Select(t => t.DeviceId)));
+ }
+ ```
+
+ The **RegistryManager** class exposes all the methods required to interact with device twins from the service. The previous code first initializes the **registryManager** object, then retrieves the device twin for **myDeviceId**, and finally updates its tags with the desired location information.
+
+ After updating, it executes two queries: the first selects only the device twins of devices located in the **Redmond43** plant, and the second refines the query to select only the devices that are also connected through cellular network.
+
+    The previous code, when it creates the **query** object, specifies a maximum number of returned documents. The **query** object contains a **HasMoreResults** boolean property that you can use to invoke the **GetNextAsTwinAsync** method multiple times to retrieve all results. A method called **GetNextAsJsonAsync** is available for results that aren't device twins, for example, results of aggregation queries.
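    To illustrate the paging the previous paragraph describes, here's a minimal sketch, assuming the same `registryManager` initialized earlier in this section and an async context; the loop itself isn't one of this tutorial's numbered steps:

    ```csharp
    // Minimal paging sketch: drain every page of a twin query with HasMoreResults.
    var pagedQuery = registryManager.CreateQuery(
        "SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'", 100);

    while (pagedQuery.HasMoreResults)
    {
        var page = await pagedQuery.GetNextAsTwinAsync();
        foreach (var twin in page)
        {
            Console.WriteLine(twin.DeviceId);
        }
    }
    ```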
1. Finally, add the following lines to the **Main** method:
- ```csharp
- registryManager = RegistryManager.CreateFromConnectionString(connectionString);
- AddTagsAndQuery().Wait();
- Console.WriteLine("Press Enter to exit.");
- Console.ReadLine();
- ```
+ ```csharp
+ registryManager = RegistryManager.CreateFromConnectionString(connectionString);
+ AddTagsAndQuery().Wait();
+ Console.WriteLine("Press Enter to exit.");
+ Console.ReadLine();
+ ```
1. Run this application by right-clicking on the **AddTagsAndQuery** project and selecting **Debug**, followed by **Start new instance**. You should see one device in the results for the query asking for all devices located in **Redmond43** and none for the query that restricts the results to devices that use a cellular network.
- ![Query results in window](./media/iot-hub-csharp-csharp-twin-getstarted/addtagapp.png)
+ ![Query results in window](./media/iot-hub-csharp-csharp-twin-getstarted/addtagapp.png)
In this article, you:
-* Configured a new IoT hub in the Azure portal
-* Created a device identity in the IoT hub's identity registry
* Added device metadata as tags from a back-end app
* Reported device connectivity information in the device twin
* Queried the device twin information, using SQL-like IoT Hub query language
To learn how to:
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-csharp).
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-csharp).
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
In this article, you create two Java console apps:
[!INCLUDE [iot-hub-include-find-custom-connection-string](../../includes/iot-hub-include-find-custom-connection-string.md)]
-## Create a device app with a direct method
+## Create a device app that updates reported properties
In this section, you create a Java console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.

1. In the **iot-java-twin-getstarted** folder, create a Maven project named **simulated-device** using the following command at your command prompt:
- ```cmd/sh
- mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=simulated-device -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
- ```
+ ```cmd/sh
+ mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=simulated-device -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
+ ```
-2. At your command prompt, navigate to the **simulated-device** folder.
+1. At your command prompt, navigate to the **simulated-device** folder.
-3. Using a text editor, open the **pom.xml** file in the **simulated-device** folder and add the following dependencies to the **dependencies** node. This dependency enables you to use the **iot-device-client** package in your app to communicate with your IoT hub.
+1. Using a text editor, open the **pom.xml** file in the **simulated-device** folder and add the following dependencies to the **dependencies** node. This dependency enables you to use the **iot-device-client** package in your app to communicate with your IoT hub.
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure.sdk.iot</groupId>
+ <artifactId>iot-device-client</artifactId>
+ <version>1.17.5</version>
+ </dependency>
+ ```
+
+ > [!NOTE]
+ > You can check for the latest version of **iot-device-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-device-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
+
+1. Add the following dependency to the **dependencies** node. This dependency configures a NOP for the Apache [SLF4J](https://www.slf4j.org/) logging facade, which is used by the device client SDK to implement logging. This configuration is optional, but, if you omit it, you may see a warning in the console when you run the app. For more information about logging in the device client SDK, see [Logging](https://github.com/Azure/azure-iot-sdk-java#logging) in the *Samples for the Azure IoT device SDK for Java* readme file.
+
+ ```xml
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-nop</artifactId>
+ <version>1.7.28</version>
+ </dependency>
+ ```
+
+1. Add the following **build** node after the **dependencies** node. This configuration instructs Maven to use Java 1.8 to build the app:
+
+ ```xml
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.3</version>
+ <configuration>
+ <source>1.8</source>
+ <target>1.8</target>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ ```
+
+1. Save and close the **pom.xml** file.
- ```xml
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-device-client</artifactId>
- <version>1.17.5</version>
- </dependency>
- ```
+1. Using a text editor, open the **simulated-device\src\main\java\com\mycompany\app\App.java** file.
- > [!NOTE]
- > You can check for the latest version of **iot-device-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-device-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
+1. Add the following **import** statements to the file:
-4. Add the following dependency to the **dependencies** node. This dependency configures a NOP for the Apache [SLF4J](https://www.slf4j.org/) logging facade, which is used by the device client SDK to implement logging. This configuration is optional, but, if you omit it, you may see a warning in the console when you run the app. For more information about logging in the device client SDK, see [Logging](https://github.com/Azure/azure-iot-sdk-jav#logging) in the *Samples for the Azure IoT device SDK for Java* readme file.
+ ```java
+ import com.microsoft.azure.sdk.iot.device.*;
+ import com.microsoft.azure.sdk.iot.device.DeviceTwin.*;
- ```xml
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.28</version>
- </dependency>
- ```
+ import java.io.IOException;
+ import java.net.URISyntaxException;
+ import java.util.Scanner;
+ ```
-5. Add the following **build** node after the **dependencies** node. This configuration instructs Maven to use Java 1.8 to build the app:
-
- ```xml
- <build>
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.3</version>
- <configuration>
- <source>1.8</source>
- <target>1.8</target>
- </configuration>
- </plugin>
- </plugins>
- </build>
- ```
+1. Add the following class-level variables to the **App** class. Replace `{yourdeviceconnectionstring}` with the device connection string you saw when you registered a device in the IoT Hub:
-6. Save and close the **pom.xml** file.
+ ```java
+ private static String connString = "{yourdeviceconnectionstring}";
+ private static IotHubClientProtocol protocol = IotHubClientProtocol.MQTT;
+ private static String deviceId = "myDeviceId";
+ ```
-7. Using a text editor, open the **simulated-device\src\main\java\com\mycompany\app\App.java** file.
+ This sample app uses the **protocol** variable when it instantiates a **DeviceClient** object.
-8. Add the following **import** statements to the file:
+1. Add the following method to the **App** class to print information about twin updates:
- ```java
- import com.microsoft.azure.sdk.iot.device.*;
- import com.microsoft.azure.sdk.iot.device.DeviceTwin.*;
+ ```java
+ protected static class DeviceTwinStatusCallBack implements IotHubEventCallback {
+ @Override
+ public void execute(IotHubStatusCode status, Object context) {
+ System.out.println("IoT Hub responded to device twin operation with status " + status.name());
+ }
+ }
+ ```
- import java.io.IOException;
- import java.net.URISyntaxException;
- import java.util.Scanner;
- ```
+1. Replace the code in the **main** method with the following code to:
-9. Add the following class-level variables to the **App** class. Replace `{yourdeviceconnectionstring}` with the device connection string you saw when you registered a device in the IoT Hub:
+ * Create a device client to communicate with IoT Hub.
- ```java
- private static String connString = "{yourdeviceconnectionstring}";
- private static IotHubClientProtocol protocol = IotHubClientProtocol.MQTT;
- private static String deviceId = "myDeviceId";
- ```
+ * Create a **Device** object to store the device twin properties.
- This sample app uses the **protocol** variable when it instantiates a **DeviceClient** object.
+ ```java
+ DeviceClient client = new DeviceClient(connString, protocol);
-10. Add the following method to the **App** class to print information about twin updates:
+ // Create a Device object to store the device twin properties
+ Device dataCollector = new Device() {
+ // Print details when a property value changes
+ @Override
+ public void PropertyCall(String propertyKey, Object propertyValue, Object context) {
+ System.out.println(propertyKey + " changed to " + propertyValue);
+ }
+ };
+ ```
- ```java
- protected static class DeviceTwinStatusCallBack implements IotHubEventCallback {
- @Override
- public void execute(IotHubStatusCode status, Object context) {
- System.out.println("IoT Hub responded to device twin operation with status " + status.name());
- }
- }
- ```
+1. Add the following code to the **main** method to create a **connectivityType** reported property and send it to IoT Hub:
-11. Replace the code in the **main** method with the following code to:
+ ```java
+ try {
+ // Open the DeviceClient and start the device twin services.
+ client.open();
+ client.startDeviceTwin(new DeviceTwinStatusCallBack(), null, dataCollector, null);
+
+ // Create a reported property and send it to your IoT hub.
+ dataCollector.setReportedProp(new Property("connectivityType", "cellular"));
+ client.sendReportedProperties(dataCollector.getReportedProp());
+ }
+ catch (Exception e) {
+ System.out.println("On exception, shutting down \n" + " Cause: " + e.getCause() + " \n" + e.getMessage());
+ dataCollector.clean();
+ client.closeNow();
+ System.out.println("Shutting down...");
+ }
+ ```
+
+1. Add the following code to the end of the **main** method. Waiting for the **Enter** key allows time for IoT Hub to report the status of the device twin operations.
- * Create a device client to communicate with IoT Hub.
+ ```java
+ System.out.println("Press any key to exit...");
- * Create a **Device** object to store the device twin properties.
+ Scanner scanner = new Scanner(System.in);
+ scanner.nextLine();
- ```java
- DeviceClient client = new DeviceClient(connString, protocol);
+ dataCollector.clean();
+ client.close();
+ ```
- // Create a Device object to store the device twin properties
- Device dataCollector = new Device() {
- // Print details when a property value changes
- @Override
- public void PropertyCall(String propertyKey, Object propertyValue, Object context) {
- System.out.println(propertyKey + " changed to " + propertyValue);
- }
- };
- ```
-
-12. Add the following code to the **main** method to create a **connectivityType** reported property and send it to IoT Hub:
-
- ```java
- try {
- // Open the DeviceClient and start the device twin services.
- client.open();
- client.startDeviceTwin(new DeviceTwinStatusCallBack(), null, dataCollector, null);
-
- // Create a reported property and send it to your IoT hub.
- dataCollector.setReportedProp(new Property("connectivityType", "cellular"));
- client.sendReportedProperties(dataCollector.getReportedProp());
- }
- catch (Exception e) {
- System.out.println("On exception, shutting down \n" + " Cause: " + e.getCause() + " \n" + e.getMessage());
- dataCollector.clean();
- client.closeNow();
- System.out.println("Shutting down...");
- }
- ```
-
-13. Add the following code to the end of the **main** method. Waiting for the **Enter** key allows time for IoT Hub to report the status of the device twin operations.
+1. Modify the signature of the **main** method to include the exceptions as follows:
- ```java
- System.out.println("Press any key to exit...");
-
- Scanner scanner = new Scanner(System.in);
- scanner.nextLine();
-
- dataCollector.clean();
- client.close();
- ```
+ ```java
+ public static void main(String[] args) throws URISyntaxException, IOException
+ ```
-14. Modify the signature of the **main** method to include the exceptions as follows:
+1. Save and close the **simulated-device\src\main\java\com\mycompany\app\App.java** file.
- ```java
- public static void main(String[] args) throws URISyntaxException, IOException
- ```
+1. Build the **simulated-device** app and correct any errors. At your command prompt, navigate to the **simulated-device** folder and run the following command:
-15. Save and close the **simulated-device\src\main\java\com\mycompany\app\App.java** file.
+ ```cmd/sh
+ mvn clean package -DskipTests
+ ```
-16. Build the **simulated-device** app and correct any errors. At your command prompt, navigate to the **simulated-device** folder and run the following command:
-
- ```cmd/sh
- mvn clean package -DskipTests
- ```
-
-## Create a service app to trigger a reboot
+## Create a service app that updates desired properties and queries twins
In this section, you create a Java app that adds location metadata as a tag to the device twin in IoT Hub associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.

1. On your development machine, create an empty folder named **iot-java-twin-getstarted**.
-2. In the **iot-java-twin-getstarted** folder, create a Maven project named **add-tags-query** using the following command at your command prompt:
-
- ```cmd/sh
- mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=add-tags-query -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
- ```
-
-3. At your command prompt, navigate to the **add-tags-query** folder.
-
-4. Using a text editor, open the **pom.xml** file in the **add-tags-query** folder and add the following dependency to the **dependencies** node. This dependency enables you to use the **iot-service-client** package in your app to communicate with your IoT hub:
-
- ```xml
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-service-client</artifactId>
- <version>1.17.1</version>
- <type>jar</type>
- </dependency>
- ```
-
- > [!NOTE]
- > You can check for the latest version of **iot-service-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-service-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
-
-5. Add the following **build** node after the **dependencies** node. This configuration instructs Maven to use Java 1.8 to build the app.
-
- ```xml
- <build>
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.3</version>
- <configuration>
- <source>1.8</source>
- <target>1.8</target>
- </configuration>
- </plugin>
- </plugins>
- </build>
- ```
-
-6. Save and close the **pom.xml** file.
+1. In the **iot-java-twin-getstarted** folder, create a Maven project named **add-tags-query** using the following command at your command prompt:
-7. Using a text editor, open the **add-tags-query\src\main\java\com\mycompany\app\App.java** file.
+ ```cmd/sh
+ mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=add-tags-query -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
+ ```
+
+1. At your command prompt, navigate to the **add-tags-query** folder.
+
+1. Using a text editor, open the **pom.xml** file in the **add-tags-query** folder and add the following dependency to the **dependencies** node. This dependency enables you to use the **iot-service-client** package in your app to communicate with your IoT hub:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure.sdk.iot</groupId>
+ <artifactId>iot-service-client</artifactId>
+ <version>1.17.1</version>
+ <type>jar</type>
+ </dependency>
+ ```
+
+ > [!NOTE]
+ > You can check for the latest version of **iot-service-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-service-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
+
+1. Add the following **build** node after the **dependencies** node. This configuration instructs Maven to use Java 1.8 to build the app.
+
+ ```xml
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.3</version>
+ <configuration>
+ <source>1.8</source>
+ <target>1.8</target>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ ```
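+
+ If your Maven version supports it, an equivalent way to pin the compiler level is through properties rather than plugin configuration; a minimal sketch of that alternative:
+
+ ```xml
+ <properties>
+   <maven.compiler.source>1.8</maven.compiler.source>
+   <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+ ```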
+
+1. Save and close the **pom.xml** file.
+
+1. Using a text editor, open the **add-tags-query\src\main\java\com\mycompany\app\App.java** file.
+
+1. Add the following **import** statements to the file:
+
+ ```java
+ import com.microsoft.azure.sdk.iot.service.devicetwin.*;
+ import com.microsoft.azure.sdk.iot.service.exceptions.IotHubException;
+
+ import java.io.IOException;
+ import java.util.HashSet;
+ import java.util.Set;
+ ```
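+
+ If you prefer explicit imports over the wildcard, the device twin classes this app uses should resolve as follows (a sketch; the wildcard import above already covers them):
+
+ ```java
+ import com.microsoft.azure.sdk.iot.service.devicetwin.DeviceTwin;
+ import com.microsoft.azure.sdk.iot.service.devicetwin.DeviceTwinDevice;
+ import com.microsoft.azure.sdk.iot.service.devicetwin.Pair;
+ import com.microsoft.azure.sdk.iot.service.devicetwin.Query;
+ import com.microsoft.azure.sdk.iot.service.devicetwin.SqlQuery;
+ ```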
+
+1. Add the following class-level variables to the **App** class. Replace `{youriothubconnectionstring}` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
+
+ ```java
+ public static final String iotHubConnectionString = "{youriothubconnectionstring}";
+ public static final String deviceId = "myDeviceId";
+
+ public static final String region = "US";
+ public static final String plant = "Redmond43";
+ ```
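+
+ For illustration only, a service connection string follows this general shape (placeholder values, not a working key):
+
+ ```java
+ // Placeholder shape only; paste the value you copied from the portal
+ public static final String iotHubConnectionString =
+     "HostName=myhub.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey=<base64-key>";
+ ```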
+
+1. Update the **main** method signature to include the following `throws` clause:
+
+ ```java
+ public static void main( String[] args ) throws IOException
+ ```
+
+1. Replace the code in the **main** method with the following code to create the **DeviceTwin** and **DeviceTwinDevice** objects. The **DeviceTwin** object handles the communication with your IoT hub. The **DeviceTwinDevice** object represents the device twin with its properties and tags:
+
+ ```java
+ // Get the DeviceTwin and DeviceTwinDevice objects
+ DeviceTwin twinClient = DeviceTwin.createFromConnectionString(iotHubConnectionString);
+ DeviceTwinDevice device = new DeviceTwinDevice(deviceId);
+ ```
+
+1. Add the following `try/catch` block to the **main** method:
+
+ ```java
+ try {
+ // Code goes here
+ } catch (IotHubException e) {
+ System.out.println(e.getMessage());
+ } catch (IOException e) {
+ System.out.println(e.getMessage());
+ }
+ ```
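+
+ Because both handlers do the same thing, a Java multi-catch clause is an equivalent, slightly tighter form if you prefer it:
+
+ ```java
+ try {
+     // Code goes here
+ } catch (IotHubException | IOException e) {
+     System.out.println(e.getMessage());
+ }
+ ```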
+
+1. To update the **region** and **plant** device twin tags in your device twin, add the following code in the `try` block:
+
+ ```java
+ // Get the device twin from IoT Hub
+ System.out.println("Device twin before update:");
+ twinClient.getTwin(device);
+ System.out.println(device);
+
+ // Update device twin tags if they are different
+ // from the existing values
+ String currentTags = device.tagsToString();
+ if ((!currentTags.contains("region=" + region) && !currentTags.contains("plant=" + plant))) {
+ // Create the tags and attach them to the DeviceTwinDevice object
+ Set<Pair> tags = new HashSet<Pair>();
+ tags.add(new Pair("region", region));
+ tags.add(new Pair("plant", plant));
+ device.setTags(tags);
+
+ // Update the device twin in IoT Hub
+ System.out.println("Updating device twin");
+ twinClient.updateTwin(device);
+ }
+
+ // Retrieve the device twin with the tag values from IoT Hub
+ System.out.println("Device twin after update:");
+ twinClient.getTwin(device);
+ System.out.println(device);
+ ```
+
+1. To query the device twins in IoT hub, add the following code to the `try` block after the code you added in the previous step. The code runs two queries. Each query returns a maximum of 100 devices.
+
+ ```java
+ // Query the device twins in IoT Hub
+ System.out.println("Devices in Redmond:");
+
+ // Construct the query
+ SqlQuery sqlQuery = SqlQuery.createSqlQuery("*", SqlQuery.FromType.DEVICES, "tags.plant='Redmond43'", null);
+
+ // Run the query, returning a maximum of 100 devices
+ Query twinQuery = twinClient.queryTwin(sqlQuery.getQuery(), 100);
+ while (twinClient.hasNextDeviceTwin(twinQuery)) {
+ DeviceTwinDevice d = twinClient.getNextDeviceTwin(twinQuery);
+ System.out.println(d.getDeviceId());
+ }
-8. Add the following **import** statements to the file:
+ System.out.println("Devices in Redmond using a cellular network:");
- ```java
- import com.microsoft.azure.sdk.iot.service.devicetwin.*;
- import com.microsoft.azure.sdk.iot.service.exceptions.IotHubException;
+ // Construct the query
+ sqlQuery = SqlQuery.createSqlQuery("*", SqlQuery.FromType.DEVICES, "tags.plant='Redmond43' AND properties.reported.connectivityType = 'cellular'", null);
- import java.io.IOException;
- import java.util.HashSet;
- import java.util.Set;
- ```
-
-9. Add the following class-level variables to the **App** class. Replace `{youriothubconnectionstring}` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
-
- ```java
- public static final String iotHubConnectionString = "{youriothubconnectionstring}";
- public static final String deviceId = "myDeviceId";
-
- public static final String region = "US";
- public static final String plant = "Redmond43";
- ```
-
-10. Update the **main** method signature to include the following `throws` clause:
-
- ```java
- public static void main( String[] args ) throws IOException
- ```
-
-11. Replace the code in the **main** method with the following code to create the **DeviceTwin** and **DeviceTwinDevice** objects. The **DeviceTwin** object handles the communication with your IoT hub. The **DeviceTwinDevice** object represents the device twin with its properties and tags:
-
- ```java
- // Get the DeviceTwin and DeviceTwinDevice objects
- DeviceTwin twinClient = DeviceTwin.createFromConnectionString(iotHubConnectionString);
- DeviceTwinDevice device = new DeviceTwinDevice(deviceId);
- ```
-
-12. Add the following `try/catch` block to the **main** method:
-
- ```java
- try {
- // Code goes here
- } catch (IotHubException e) {
- System.out.println(e.getMessage());
- } catch (IOException e) {
- System.out.println(e.getMessage());
- }
- ```
-
-13. To update the **region** and **plant** device twin tags in your device twin, add the following code in the `try` block:
-
- ```java
- // Get the device twin from IoT Hub
- System.out.println("Device twin before update:");
- twinClient.getTwin(device);
- System.out.println(device);
-
- // Update device twin tags if they are different
- // from the existing values
- String currentTags = device.tagsToString();
- if ((!currentTags.contains("region=" + region) && !currentTags.contains("plant=" + plant))) {
- // Create the tags and attach them to the DeviceTwinDevice object
- Set<Pair> tags = new HashSet<Pair>();
- tags.add(new Pair("region", region));
- tags.add(new Pair("plant", plant));
- device.setTags(tags);
-
- // Update the device twin in IoT Hub
- System.out.println("Updating device twin");
- twinClient.updateTwin(device);
- }
-
- // Retrieve the device twin with the tag values from IoT Hub
- System.out.println("Device twin after update:");
- twinClient.getTwin(device);
- System.out.println(device);
- ```
-
-14. To query the device twins in IoT hub, add the following code to the `try` block after the code you added in the previous step. The code runs two queries. Each query returns a maximum of 100 devices.
-
- ```java
- // Query the device twins in IoT Hub
- System.out.println("Devices in Redmond:");
-
- // Construct the query
- SqlQuery sqlQuery = SqlQuery.createSqlQuery("*", SqlQuery.FromType.DEVICES, "tags.plant='Redmond43'", null);
-
- // Run the query, returning a maximum of 100 devices
- Query twinQuery = twinClient.queryTwin(sqlQuery.getQuery(), 100);
- while (twinClient.hasNextDeviceTwin(twinQuery)) {
- DeviceTwinDevice d = twinClient.getNextDeviceTwin(twinQuery);
- System.out.println(d.getDeviceId());
- }
-
- System.out.println("Devices in Redmond using a cellular network:");
-
- // Construct the query
- sqlQuery = SqlQuery.createSqlQuery("*", SqlQuery.FromType.DEVICES, "tags.plant='Redmond43' AND properties.reported.connectivityType = 'cellular'", null);
-
- // Run the query, returning a maximum of 100 devices
- twinQuery = twinClient.queryTwin(sqlQuery.getQuery(), 3);
- while (twinClient.hasNextDeviceTwin(twinQuery)) {
- DeviceTwinDevice d = twinClient.getNextDeviceTwin(twinQuery);
- System.out.println(d.getDeviceId());
- }
- ```
-
-15. Save and close the **add-tags-query\src\main\java\com\mycompany\app\App.java** file
-
-16. Build the **add-tags-query** app and correct any errors. At your command prompt, navigate to the **add-tags-query** folder and run the following command:
-
- ```cmd/sh
- mvn clean package -DskipTests
- ```
+ // Run the query, returning a maximum of 100 devices
+ twinQuery = twinClient.queryTwin(sqlQuery.getQuery(), 100);
+ while (twinClient.hasNextDeviceTwin(twinQuery)) {
+ DeviceTwinDevice d = twinClient.getNextDeviceTwin(twinQuery);
+ System.out.println(d.getDeviceId());
+ }
+ ```
+
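+ The **SqlQuery** builder simply produces an IoT Hub query language string, so you can also pass an equivalent string directly; a sketch:
+
+ ```java
+ // Equivalent raw query string, returning up to 100 devices per page
+ Query rawQuery = twinClient.queryTwin(
+     "SELECT * FROM devices WHERE tags.plant = 'Redmond43'", 100);
+ while (twinClient.hasNextDeviceTwin(rawQuery)) {
+     System.out.println(twinClient.getNextDeviceTwin(rawQuery).getDeviceId());
+ }
+ ```
+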
+1. Save and close the **add-tags-query\src\main\java\com\mycompany\app\App.java** file.
+
+1. Build the **add-tags-query** app and correct any errors. At your command prompt, navigate to the **add-tags-query** folder and run the following command:
+
+ ```cmd/sh
+ mvn clean package -DskipTests
+ ```
## Run the apps
You are now ready to run the console apps.
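A typical way to launch each Maven app is from its own project folder (a sketch; this assumes the default `com.mycompany.app.App` main class generated by the quickstart archetype):

```cmd/sh
mvn exec:java -Dexec.mainClass="com.mycompany.app.App"
```

Run **add-tags-query** first, then **simulated-device**, and then **add-tags-query** again to see the device reported in the cellular-network query.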
In this article, you:
-* Configured a new IoT hub in the Azure portal
-* Created a device identity in the IoT hub's identity registry
* Added device metadata as tags from a back-end app
* Reported device connectivity information in the device twin
* Queried the device twin information, using SQL-like IoT Hub query language
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
To complete this article, you need:
[!INCLUDE [iot-hub-include-find-custom-connection-string](../../includes/iot-hub-include-find-custom-connection-string.md)]
-## Create a device app with a direct method
+## Create a device app that updates reported properties
In this section, you create a Node.js console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
In this section, you create a Node.js console app that connects to your hub as *
![Show myDeviceId in both query results](media/iot-hub-node-node-twin-getstarted/service2.png)
-## Create a service app to trigger a reboot
+## Create a service app that updates desired properties and queries twins
In this section, you create a Node.js console app that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
In this section, you create a Node.js console app that adds location metadata to
In this article, you:
-* Configured a new IoT hub in the Azure portal
-* Created a device identity in the IoT hub's identity registry
* Added device metadata as tags from a back-end app
* Reported device connectivity information in the device twin
* Queried the device twin information, using SQL-like IoT Hub query language
iot-hub Iot Hub Python Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-twin-getstarted.md
Last updated 03/11/2020
+
# Get started with device twins (Python)

[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
In this article, you create two Python console apps:
[!INCLUDE [iot-hub-include-find-custom-connection-string](../../includes/iot-hub-include-find-custom-connection-string.md)]
-## Create a device app with a direct method
+## Create a device app that updates reported properties
In this section, you create a Python console app that connects to your hub as your **{Device ID}** and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
In this section, you create a Python console app that connects to your hub as yo
![receive desired properties on device app](./media/iot-hub-python-twin-getstarted/device-2.png)
-## Create a service app to trigger a reboot
+## Create a service app that updates desired properties and queries twins
In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
In this section, you create a Python console app that adds location metadata to
![first query showing all devices in Redmond](./media/iot-hub-python-twin-getstarted/service-1.png)

In this article, you:
-* Configured a new IoT hub in the Azure portal
-* Created a device identity in the IoT hub's identity registry
* Added device metadata as tags from a back-end app
* Reported device connectivity information in the device twin
-* Queried the device twin information, using SQL-like IoT Hub query language
+* Queried the device twin information using the IoT Hub query language
## Next steps
To learn how to:
* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-python).
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-python).
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Besides being the tool to put MLOps into practice, the machine learning pipeline
Depending on what a machine learning project already has, the starting point of building a machine learning pipeline may vary. There are a few typical approaches to building a pipeline.
-The first approach usually applies to the team that hasnΓÇÖt used pipeline before and wants to take some advantage of pipeline like MLOps. In this situation, data scientists typically have developed some machine learning models on their local environment using their favorite tools. Machine learning engineers need to take data scientistsΓÇÖ output into production. The work involves cleaning up some unnecessary code from original notebook or python code, changes the training input from local data to parameterized values, split the training code into multiple steps as needed, perform unit test of each step, and finally wraps all steps into a pipeline.
+The first approach usually applies to a team that hasn't used pipelines before and wants to take advantage of pipeline benefits like MLOps. In this situation, data scientists typically have developed some machine learning models in their local environment using their favorite tools. Machine learning engineers then need to take the data scientists' output into production. The work involves cleaning up unnecessary code from the original notebook or Python code, changing the training input from local data to parameterized values, splitting the training code into multiple steps as needed, unit testing each step, and finally wrapping all steps into a pipeline.
Once teams are familiar with pipelines and want to run more machine learning projects with them, they find that the first approach is hard to scale. The second approach is to set up a few pipeline templates, each aiming to solve one specific machine learning problem. The template predefines the pipeline structure: how many steps there are, each step's inputs and outputs, and how the steps connect. To start a new machine learning project, the team first forks a template repo. The team leader then assigns each member the step they need to work on. The data scientists and data engineers do their regular work. When they're happy with their results, they structure their code to fit the predefined steps. Once the structured code is checked in, the pipeline can be executed or automated. If there's a change, each member only needs to work on their own piece of code without touching the rest of the pipeline code.
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
ms.devlang: azurecli
#Customer intent: As a machine learning engineer, I want to test and debug online endpoints locally using Visual Studio Code before deploying them to Azure.
-# Debug online endpoints locally in Visual Studio Code (preview)
+# Debug online endpoints locally in Visual Studio Code
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
machine-learning How To Safely Rollout Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md
Also note the following behavior:
* You can send traffic directly to the mirror deployment by specifying the deployment set for mirror traffic.
* You can send traffic directly to a live deployment by specifying the deployment set for live traffic, but in this case the traffic won't be mirrored to the mirror deployment. Mirror traffic is routed from traffic sent to the endpoint without specifying the deployment.
+> [!TIP]
+> You can use `--deployment-name` option [for CLI v2](/cli/azure/ml/online-endpoint#az-ml-online-endpoint-invoke-optional-parameters), or `deployment_name` option [for SDK v2](/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations#azure-ai-ml-operations-onlineendpointoperations-invoke) to specify the deployment to be routed to.
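+
+For example, to send a test request straight to one deployment (a sketch; `my-endpoint`, `blue`, and `sample-request.json` are placeholder names):
+
+```azurecli
+az ml online-endpoint invoke --name my-endpoint --deployment-name blue --request-file sample-request.json
+```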
+ :::image type="content" source="./media/how-to-safely-rollout-managed-endpoints/endpoint-concept-mirror.png" alt-text="Diagram showing 10% traffic mirrored to one deployment.":::

# [Azure CLI](#tab/azure-cli)
If you aren't going to use the deployment, you should delete it with:
- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
- [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
-- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
description: Learn how to use Azure PowerShell to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher--++ Last updated 10/12/2022
# Quickstart: Diagnose a virtual machine network traffic filter problem - Azure PowerShell
-In this quickstart, you deploy a virtual machine (VM) and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
+In this quickstart, you deploy a virtual machine (VM) and then check communications to and from an IP address, and to a URL. You determine the cause of a communication failure and how you can resolve it.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
na Previously updated : 11/02/2022 Last updated : 11/10/2022
The connection troubleshoot feature of Network Watcher provides the capability t
> [!IMPORTANT]
> Connection troubleshoot requires that the VM you troubleshoot from has the `AzureNetworkWatcherExtension` VM extension installed. To install the extension on a Windows VM, see [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json); for a Linux VM, see [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). The extension is not required on the destination endpoint.
+## Supported source types
+
+The following sources are supported by Network Watcher:
+
+- Virtual Machines
+- Bastion
+- Application Gateways (except v1)
+
## Response

The following table shows the properties returned when connection troubleshoot has finished running.
sentinel Get Visibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/get-visibility.md
Title: Visualize collected data
description: Learn how to quickly view and monitor what's happening across your environment by using Microsoft Sentinel. - Previously updated : 11/09/2021 Last updated : 11/24/2022

# Visualize collected data

In this article, you learn how to quickly view and monitor what's happening across your environment using Microsoft Sentinel. After you connect your data sources to Microsoft Sentinel, you get instant visualization and analysis of data so that you know what's happening across all your connected data sources. Microsoft Sentinel gives you workbooks that provide the full power of tools already available in Azure, as well as built-in tables and charts that provide analytics for your logs and queries. You can use built-in workbooks or easily create a new workbook, either from scratch or based on an existing one.

## Get visualization
-To visualize and get analysis of what's happening on your environment, first, take a look at the overview dashboard to get an idea of the security posture of your organization. You can click on each element of these tiles to drill down to the raw data from which they are created. To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses a fusion technique to correlate alerts into incidents. **Incidents** are groups of related alerts that together create an actionable incident that you can investigate and resolve.
+To visualize and analyze what's happening in your environment, start by looking at the overview dashboard to get a sense of your organization's security posture. To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses a fusion technique to correlate alerts into incidents. **Incidents** are groups of related alerts that together create an actionable incident that you can investigate and resolve.
+
+In the Azure portal, select Microsoft Sentinel and then select the workspace you want to monitor.
++
+If you want to refresh the data for all sections of the dashboard, select **Refresh** at the top of the dashboard. To improve performance, the data for each section of the dashboard is pre-calculated, and you can see the refresh time at the top of each section.
+
+### View incident data
+
+You see different types of incident data under **Incidents**.
+
+
+- On the top left, you see the number of new, active, and closed incidents over the last 24 hours.
+- On the top right, you see incidents organized by severity, and closed incidents by closing classification.
+- On the bottom left, a graph breaks down incident status by creation time, in four-hour intervals.
+- On the bottom right, you can see the mean time to acknowledge an incident and mean time to close, with a link to the SOC efficiency workbook.
+
+### View automation data
-- In the Azure portal, select Microsoft Sentinel and then select the workspace you want to monitor.
+You see different types of automation data under **Automation**.
- ![Microsoft Sentinel overview](./media/qs-get-visibility/overview.png)
-- The toolbar across the top tells you how many events you got over the time period selected, and it compares it to the previous 24 hours. The toolbar tells you from these events, the alerts that were triggered (the small number represents change over the last 24 hours), and then it tells you for those events, how many are open, in progress, and closed. Check to see that there isn't a dramatic increase or drop in the number of events. If there is a drop, it could be that a connection stopped reporting to Microsoft Sentinel. If there is an increase, something suspicious may have happened. Check to see if you have new alerts.
+- At the top, you see a summary of the automation rules activity: Incidents closed by automation, the time the automation saved, and related playbooks health.
+- Below the summary, a graph summarizes the numbers of actions performed by automation, by type of action.
+- At the bottom, you can find a count of the active automation rules with a link to the automation blade.
- ![Microsoft Sentinel counters](./media/qs-get-visibility/funnel.png)
+### View status of data records, data collectors, and threat intelligence
-The main body of the overview page gives insight at a glance into the security status of your workspace:
+You see different types of data on data records, data collectors, and threat intelligence under **Data**.
-- **Events and alerts over time**: Lists the number of events and how many alerts were created from those events. If you see a spike that's unusual, you should see alerts for it - if there's something unusual where there is a spike in events but you don't see alerts, it might be cause for concern. -- **Potential malicious events**: When traffic is detected from sources that are known to be malicious, Microsoft Sentinel alerts you on the map. If you see orange, it is inbound traffic: someone is trying to access your organization from a known malicious IP address. If you see Outbound (red) activity, it means that data from your network is being streamed out of your organization to a known malicious IP address.
+- On the left, a graph shows the number of records that Microsoft Sentinel collected in the last 24 hours, compared to the previous 24 hours, and anomalies detected in that time period.
+- On the top right, you see a summary of the data connector status, divided by unhealthy and active connectors. **Unhealthy connectors** indicate how many connectors have errors. **Active connectors** are connectors with data streaming into Microsoft Sentinel, as measured by a query included in the connector.
+- On the bottom right, you can see threat intelligence records in Microsoft Sentinel, by indicator of compromise.
- ![Malicious traffic map](./media/qs-get-visibility/map.png)
+### View analytics data
-- **Recent incidents**: To view your recent incidents, their severity and the number of alerts associated with the incident. If you see a sudden peak in a specific type of alert, it could mean that there is an active attack currently running. For example, if you have a sudden peak of 20 Pass-the-hash events from Microsoft Defender for Identity (formerly Azure ATP), it's possible that someone is currently trying to attack you.
+You see data for analytics rules under **Analytics**.
-- **Data source anomalies**: Microsoft's data analysts created models that constantly search the data from your data sources for anomalies. If there aren't any anomalies, nothing is displayed. If anomalies are detected, you should deep dive into them to see what happened. For example, click on the spike in Azure Activity. You can click on **Chart** to see when the spike happened, and then filter for activities that occurred during that time period to see what caused the spike.
- ![Anomalous data sources](./media/qs-get-visibility/anomolies.png)
+You see the number of analytics rules in Microsoft Sentinel, by enabled, disabled, or auto-disabled status.
## Use built-in workbooks<a name="dashboards"></a>
site-recovery Asr Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/asr-arm-templates.md
Previously updated : 02/04/2021 Last updated : 02/18/2021
site-recovery Avs Tutorial Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failover.md
Previously updated : 09/30/2020 Last updated : 04/06/2022
site-recovery Avs Tutorial Prepare Avs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-avs.md
Previously updated : 09/29/2020 Last updated : 08/23/2022
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Previously updated : 09/29/2020 Last updated : 08/23/2022
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md
Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery | Microsoft Docs description: Learn how to set up disaster recovery to Azure for Azure Stack VMs with the Azure Site Recovery service. Previously updated : 08/05/2019 Last updated : 10/02/2021 # Replicate Azure Stack VMs to Azure
site-recovery Azure To Azure About Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-about-networking.md
Previously updated : 3/13/2020 Last updated : 11/21/2021
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
Previously updated : 04/02/2020 Last updated : 07/23/2020
site-recovery Azure To Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-replication-added-disk.md
Previously updated : 04/29/2019 Last updated : 01/14/2020
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
Previously updated : 07/14/2020 Last updated : 04/23/2022 # Replicate machines with private endpoints
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Previously updated : 11/27/2018 Last updated : 01/13/2022
site-recovery Azure To Azure Move Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-move-overview.md
description: Using Azure Site Recovery to move Azure VMs from one Azure region t
Previously updated : 01/28/2019 Last updated : 09/10/2020
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 11/11/2022 Last updated : 11/23/2022
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
Previously updated : 04/29/2022 Last updated : 07/29/2022
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
description: Troubleshoot replication in Azure VM disaster recovery with Azure S
Previously updated : 04/03/2020 Last updated : 03/07/2022 # Troubleshoot replication in Azure VM disaster recovery
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recovery. description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 09/21/2022 Last updated : 11/23/2022
static-web-apps Deploy Blazor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-blazor.md
Previously updated : 10/11/2022 Last updated : 11/22/2022
-# Build an Azure Static Web Apps website with Blazor
+# Build an Azure Static Web Apps website with Blazor and serverless API
Azure Static Web Apps publishes a website to a production environment by building apps from a GitHub repository, which is supported by a serverless backend. The following tutorial shows how to deploy a C# Blazor WebAssembly app that displays weather data returned by a serverless API.
+
## Prerequisites

- [GitHub](https://github.com) account
Now that the repository is created, create a static web app from the Azure porta
| Property | Value | Description | | | | | | App location | **Client** | Folder containing the Blazor WebAssembly app |
- | API location | **Api** | Folder containing the Azure Functions app |
+ | API location | **ApiIsolated** | Folder containing the .NET 7 Azure Functions app |
| Output location | **wwwroot** | Folder in the build output containing the published Blazor WebAssembly application |

8. Select **Review + Create** to verify the details are all correct.
The Static Web Apps overview window displays a series of links that help you int
2. Once the GitHub Actions workflow is complete, you can select the _URL_ link to open the website in a new tab.

   :::image type="content" source="media/deploy-blazor/my-first-static-blazor-app.png" alt-text="Screenshot of Static Web Apps Blazor webpage.":::
+
## 4. Understand the application overview

Together, the following projects make up the parts required to create a Blazor WebAssembly application running in the browser supported by an Azure Functions API backend.

|Visual Studio project |Description |
|---|---|
-|API | The C# Azure Functions application implements the API endpoint that provides weather information to the Blazor WebAssembly app. The **WeatherForecastFunction** returns an array of `WeatherForecast` objects. |
+|Api | The *.NET 6 in-process* C# Azure Functions application implements the API endpoint that provides weather information to the Blazor WebAssembly app. The **WeatherForecastFunction** returns an array of `WeatherForecast` objects. |
+|ApiIsolated | The *.NET 7 isolated-process* C# Azure Functions application implements the API endpoint that provides weather information to the Blazor WebAssembly app. The **WeatherForecastFunction** returns an array of `WeatherForecast` objects. |
|Client |The front-end Blazor WebAssembly project. A [fallback route](#fallback-route) is implemented to ensure client-side routing is functional. |
|Shared | Holds common classes referenced by both the Api and Client projects, which allow data to flow from API endpoint to the front-end web app. The [`WeatherForecast`](https://github.com/staticwebdev/blazor-starter/blob/main/Shared/WeatherForecast.cs) class is shared among both apps. |
The app exposes URLs like `/counter` and `/fetchdata`, which map to specific rou
} ```
-The json configuration ensures that requests to any route in the app return the `https://docsupdatetracker.net/index.html` page.
+The JSON configuration ensures that requests to any route in the app return the `https://docsupdatetracker.net/index.html` page.
## Clean up resources
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
We recommend you install an endpoint detection and response (EDR) product to pro
### Enable threat and vulnerability management assessments
-Identifying software vulnerabilities that exist in operating systems and applications is critical to keeping your environment secure. Microsoft Defender for Cloud can help you identify problem spots through [Microsoft Defender for Endpoint's threat and vulnerability management solution](../defender-for-cloud/deploy-vulnerability-assessment-tvm.md). You can also use third-party products if you're so inclined, although we recommend using Microsoft Defender for Cloud and Microsoft Defender for Endpoint.
+Identifying software vulnerabilities that exist in operating systems and applications is critical to keeping your environment secure. Microsoft Defender for Cloud can help you identify problem spots through [Microsoft Defender for Endpoint's threat and vulnerability management solution](../defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md). You can also use third-party products if you're so inclined, although we recommend using Microsoft Defender for Cloud and Microsoft Defender for Endpoint.
### Patch software vulnerabilities in your environment