Updates from: 03/02/2022 02:11:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Open your web app in a code editor such as Visual Studio Code. Under the `call-p
| Key | Value |
|---|---|
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.3](#step-23-register-the-web-app). |
|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.4](#step-24-create-a-client-secret). |
-|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority for the user flow you created in [step 1](#step-1-configure-your-user-flow) such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi_node_app`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
+|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority for the user flow you created in [step 1](#step-1-configure-your-user-flow) such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`AUTHORITY_DOMAIN`| The Azure AD B2C authority domain such as `https://<your-tenant-name>.b2clogin.com`. Replace `<your-tenant-name>` with the name of your tenant.|
|`APP_REDIRECT_URI`| The application redirect URI where Azure AD B2C will return authentication responses (tokens). It matches the **Redirect URI** you set while registering your app in the Azure portal. This URL needs to be publicly accessible. Leave the value as is.|
-|`LOGOUT_ENDPOINT`| The Azure AD B2C sign out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi_node_app`.|
+|`LOGOUT_ENDPOINT`| The Azure AD B2C sign out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`.|
After the update, your final configuration file should look similar to the following sample:
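A minimal sketch of how such a configuration is consumed at runtime (assuming the sample is an MSAL Node confidential-client app reading these values from environment variables; not the sample file itself):

```typescript
// Sketch only: wiring the configuration values above into MSAL Node.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: process.env.APP_CLIENT_ID!,
    clientSecret: process.env.APP_CLIENT_SECRET!,
    // For example: https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/B2C_1_susi
    authority: process.env.SIGN_UP_SIGN_IN_POLICY_AUTHORITY,
    // B2C authorities must be declared explicitly. AUTHORITY_DOMAIN is a full
    // URL such as https://<your-tenant-name>.b2clogin.com, so pass just its host.
    knownAuthorities: [new URL(process.env.AUTHORITY_DOMAIN!).host],
  },
});
```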
To get the web API sample code, do one of the following:
- For `clientID`, use the **Application (Client) ID** for the web API you created in [step 2.1](#step-21-register-the-web-api-application).
- - For `policyName`, use the name of the **Sign in and sign up** user flow you created in [step 1](#step-1-configure-your-user-flow) such as `B2C_1_susi_node_app`.
+ - For `policyName`, use the name of the **Sign in and sign up** user flow you created in [step 1](#step-1-configure-your-user-flow) such as `B2C_1_susi`.
After the update, your code should look similar to the following sample:
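A minimal sketch of what this step produces, assuming the web API uses `passport-azure-ad` (the `clientID` and `policyName` values are the ones set above; the sample's exact code may differ):

```typescript
// Sketch only: validate B2C-issued access tokens in the web API.
import { BearerStrategy } from "passport-azure-ad";

const bearerStrategy = new BearerStrategy(
  {
    identityMetadata:
      "https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/v2.0/.well-known/openid-configuration",
    clientID: "<web-api-application-client-id>", // from step 2.1
    policyName: "B2C_1_susi", // from step 1
    isB2C: true,
    validateIssuer: true,
  },
  // On success, hand the decoded token claims to the request pipeline.
  (token, done) => done(null, {}, token)
);
```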
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
+
+ Title: Configure Azure Active Directory B2C with Transmit Security
+
+description: Configure Azure Active Directory B2C with Transmit Security for passwordless strong customer authentication
+Last updated: 02/28/2022
+zone_pivot_groups: b2c-policy-type
+
+# Configure Transmit Security with Azure Active Directory B2C for passwordless authentication
+
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with the [Transmit Security](https://www.transmitsecurity.com/bindid) passwordless authentication solution **BindID**. BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth login experience for all customers across every device and channel, eliminating fraud, phishing, and credential reuse.
+
+## Scenario description
+
+The following architecture diagram shows the implementation.
+
+![Screenshot showing the BindID and Azure AD B2C architecture diagram.](media/partner-bindid/partner-bindid-architecture-diagram.png)
+
+|Step | Description |
+|:--| :--|
+| 1. | A user arrives at the login page, selects sign-in/sign-up, and enters a username.
+| 2. | Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request (see the sketch after this table).
+| 3. | BindID authenticates the user using appless FIDO2 biometrics, such as a fingerprint.
+| 4. | A decentralized authentication response is returned to BindID.
+| 5. | The OIDC response is passed on to Azure AD B2C.
+| 6. | The user is either granted or denied access to the customer application based on the verification results.
+
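+For illustration, the redirect in step 2 is an ordinary OIDC authorization request. A sketch of its shape (parameter values mirror the settings used later in this article; the real authorization endpoint comes from BindID's discovery document, and the placeholder names are illustrative):
+
+```typescript
+// Illustrative only: the shape of the OIDC request Azure AD B2C sends to BindID.
+const params = new URLSearchParams({
+  client_id: "<bindid-client-id>", // issued by BindID in step 1
+  response_type: "code",
+  response_mode: "form_post",
+  scope: "openid email",
+  redirect_uri:
+    "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/oauth2/authresp",
+});
+console.log(`<bindid-authorization-endpoint>?${params.toString()}`);
+```
+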
+## Onboard with BindID
+
+To integrate BindID with your Azure AD B2C instance, you'll need to configure an application in the [BindID Admin
+Portal](https://admin.bindid-sandbox.io/console/). For more information, see the [getting started guide](https://developer.bindid.io/docs/guides/admin_portal/topics/getStarted/get_started_admin_portal). You can either create a new application or use one that you already created.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
+
+- A BindID tenant. You can [sign up for free](https://www.transmitsecurity.com/developer?utm_signup=dev_hub#try).
+
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
+
+- Complete the steps in the article [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+
+### Step 1 - Create an application registration in BindID
+
+To configure your tenant application in BindID, go to [Applications](https://admin.bindid-sandbox.io/console/#/applications). The following information is needed:
+
+| Property | Description |
+|:--|:--|
+| Name | Azure AD B2C/your desired application name|
+| Domain | name.onmicrosoft.com|
+| Redirect URIs| https://jwt.ms |
+| Redirect URLs |Specify the page to which users are redirected after BindID authentication: `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`.<br>Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant. See the sketch after this table for how these URLs are composed.|
+
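+The redirect URL patterns in the table reduce to a simple rule. A hypothetical helper (the function name is illustrative):
+
+```typescript
+// Builds the BindID redirect URL for a B2C tenant, per the table above.
+function authRespUrl(tenantName: string, customDomain?: string): string {
+  const host = customDomain ?? `${tenantName}.b2clogin.com`;
+  return `https://${host}/${tenantName}.onmicrosoft.com/oauth2/authresp`;
+}
+
+console.log(authRespUrl("fabrikam"));
+// -> https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp
+```
+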
+>[!NOTE]
+>BindID will provide you with a Client ID and Client Secret, which you'll need later to configure the identity provider in Azure AD B2C.
+
+### Step 2 - Add a new Identity provider in Azure AD B2C
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. Choose **All services** in the top-left corner of the Azure portal, then search for and select **Azure AD B2C**.
+
+5. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**.
+
+6. Select **New OpenID Connect Provider**.
+
+7. Select **Add**.
+
+### Step 3 - Configure an Identity provider
+
+1. Select **Identity provider type** > **OpenID Connect**.
+
+2. Fill out the form to set up the Identity provider:
+
+ |Property |Value |
+ |:--|:--|
+ |Name |Enter `BindID – Passwordless`, or a name of your choice|
+ |Metadata URL| `https://signin.bindid-sandbox.io/.well-known/openid-configuration` |
+ |Client ID|The application ID from the BindID admin UI captured in **Step 1**|
+ |Client Secret|The application Secret from the BindID admin UI captured in **Step 1**|
+ |Scope|`openid email`|
+ |Response type|Code|
+ |Response mode|form_post|
+ |**Identity provider claims mapping**|
+ |User ID|sub|
+ |Email|email|
+
+3. Select **Save** to complete the setup for your new OIDC Identity provider.
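+
+You can confirm that the metadata URL you entered resolves; a quick sketch (Node 18+ for the global `fetch`):
+
+```typescript
+// Optional sanity check: fetch BindID's OIDC discovery document and print
+// the endpoints Azure AD B2C will use.
+const res = await fetch(
+  "https://signin.bindid-sandbox.io/.well-known/openid-configuration"
+);
+const meta = await res.json();
+console.log(meta.issuer, meta.authorization_endpoint, meta.token_endpoint);
+```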
+
+### Step 4 - Create a user flow policy
+
+You should now see BindID as a new OIDC Identity provider listed within your B2C identity providers.
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+
+2. Select **New user flow**.
+
+3. Select **Sign up and sign in** > **Version** > **Recommended** > **Create**.
+
+4. Enter a **Name** for your policy.
+
+5. In the **Identity providers** section, select your newly created BindID identity provider.
+
+6. Select **None** for **Local accounts** to disable email and password-based authentication.
+
+7. Select **Create**.
+
+8. Select the newly created user flow.
+
+9. Select **Run user flow**.
+
+10. In the form, select the JWT application and enter the **Reply URL**, such as `https://jwt.ms`.
+
+11. Select **Run user flow**.
+
+12. The browser is redirected to the BindID login page. Enter the account name registered during user registration. The user receives a push notification on their registered mobile device for a Fast Identity Online (FIDO2) certified authentication. It can be a fingerprint, another biometric, or a decentralized PIN.
+
+13. Once the authentication challenge is accepted, the browser redirects the user to the reply URL.
+
+### Step 2 - Create a BindID policy key
+
+Store the client secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+5. On the Overview page, select **Identity Experience Framework**.
+
+6. Select **Policy Keys** and then select **Add**.
+
+7. For **Options**, choose `Manual`.
+
+8. Enter a **Name** for the policy key. For example, `BindIDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+
+9. In **Secret**, enter your client secret that you previously recorded.
+
+10. For **Key usage**, select `Signature`.
+
+11. Select **Create**.
+
+>[!NOTE]
+>In Azure Active Directory B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
+
+### Step 3 - Configure BindID as an Identity provider
+
+To enable users to sign in using BindID, you need to define BindID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity.
+
+You can define BindID as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy.
+
+1. Open the `TrustFrameworkExtensions.xml`.
+
+2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+
+3. Add a new **ClaimsProvider** as follows:
+
+```xml
+ <ClaimsProvider>
+ <Domain>signin.bindid-sandbox.io</Domain>
+ <DisplayName>BindID</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="BindID-OpenIdConnect">
+ <DisplayName>BindID</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <Metadata>
+ <Item Key="METADATA">https://signin.bindid-sandbox.io/.well-known/openid-configuration</Item>
+ <!-- Update the Client ID below to the BindID Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid email</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="AccessTokenResponseFormat">json</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_BindIDClientSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource"
+ DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+
+```
+
+4. Set **client_id** to your BindID Application ID.
+
+5. Save the file.
+
+### Step 4 - Add a user journey
+
+At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey. Otherwise, continue to the next step.
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+
+3. Open the `TrustFrameworkExtensions.xml` and find the UserJourneys element. If the element doesn't exist, add one.
+
+4. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element.
+
+5. Rename the ID of the user journey. For example, `Id="CustomSignUpSignIn"`.
+
+### Step 5 - Add the identity provider to a user journey
+
+Now that you have a user journey, add the new identity provider to the user journey.
+
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `BindIDExchange`.
+
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the BindID button to the `BindID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+
+The following XML demonstrates orchestration steps of a user journey with the identity provider:
+
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="BindIDExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="BindIDExchange" TechnicalProfileReferenceId="BindID-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+### Step 6 - Configure the relying party policy
+
+The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/SocialAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C will execute. You can also control which claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application receives user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId.
+
+```xml
+ <RelyingParty>
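+ <!-- The ReferenceId below must match the user journey Id you created in step 4, for example CustomSignUpSignIn. -->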
+ <DefaultUserJourney ReferenceId="CustomSignUpSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+```
+
+### Step 7 - Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+5. Under Policies, select **Identity Experience Framework**.
+
+6. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+
+### Step 8 - Test your custom policy
+
+1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
+
+2. Select your previously created **CustomSignUpSignIn** policy, and then select the settings:
+
+ a. **Application**: select the registered app (sample is JWT)
+
+ b. **Reply URL**: select the **redirect URL** that should show `https://jwt.ms`.
+
+ c. Select **Run now**.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
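+
+The jwt.ms page decodes the token in the browser. The same decode can be done locally; a sketch (no signature validation, Node 16+ for `base64url`):
+
+```typescript
+// Decode (not validate!) the payload of the JWT returned by Azure AD B2C.
+function decodeJwtPayload(token: string): Record<string, unknown> {
+  const [, payload] = token.split(".");
+  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
+}
+```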
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
+- [Sample custom policies for BindID and Azure AD B2C integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration)
+
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
The Experian integration includes the following components:
- Experian – The Experian service takes inputs provided by the user and verifies the user's identity.
-- Custom Rest API – This API implements the integration between Azure AD B2C and the Experian service.
+- Custom REST API – This API implements the integration between Azure AD B2C and the Experian service.
The following architecture diagram shows the implementation.
The Experian API call is protected by a client certificate. This client certific
### Part 3 - Configure the API
-Application settings can be [configured in the App service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the Rest API:
+Application settings can be [configured in the App service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the REST API:
| Application settings | Source | Notes |
| :-- | :-- | :-- |
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for MFA and Passwordless authenticati
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+|![Screenshot of a bindid logo](./medi) solution BindID is a passwordless authentication service that uses strong FIDO2 biometric authentication for a reliable omni-channel authentication experience, which ensures a smooth login experience for customers across every device and channel eliminating fraud, phishing, and credential reuse. |
| ![Screenshot of a bloksec logo](./medi) is a passwordless authentication and tokenless MFA solution, which provides real-time consent-based services and protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks. | | ![Screenshot of a haventec logo](./medi) is a passwordless authentication provider, which provides decentralized identity platform that eliminates passwords, shared secrets, and friction. | | ![Screenshot of a hypr logo](./medi) is a passwordless authentication provider, which replaces passwords with public key encryptions eliminating fraud, phishing, and credential reuse. |
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
To get started, you'll need:
- A HYPR cloud tenant, get a free [trial account](https://get.hypr.com/free-trial).
-- A user's mobile device registered using the HYPR Rest APIs or the HYPR Device Manager in your HYPR tenant. For example, you can use the [HYPR Java SDK](https://docs.hypr.com/integratinghypr/docs/hypr-java-web-sdk) to accomplish this task.
+- A user's mobile device registered using the HYPR REST APIs or the HYPR Device Manager in your HYPR tenant. For example, you can use the [HYPR Java SDK](https://docs.hypr.com/integratinghypr/docs/hypr-java-web-sdk) to accomplish this task.
## Scenario description
The HYPR integration includes the following components:
- The HYPR mobile app - The HYPR mobile app can be used to execute this sample if you prefer not to use the mobile SDKs in your own mobile applications.
-- HYPR Rest APIs - You can use the HYPR APIs to do both user device registration and authentication. These APIs can be found [here](https://apidocs.hypr.com).
+- HYPR REST APIs - You can use the HYPR APIs to do both user device registration and authentication. These APIs can be found [here](https://apidocs.hypr.com).
The following architecture diagram shows the implementation.
active-directory-b2c Partner Idology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idology.md
The IDology integration includes the following components:
- Azure AD B2C – The authorization server responsible for verifying the user's credentials. It's also known as the identity provider.
- IDology – The IDology service takes input provided by the user and verifies the user's identity.
-- Custom Rest API – This API implements the integration between Azure AD and the IDology service.
+- Custom REST API ΓÇô This API implements the integration between Azure AD and the IDology service.
The following architecture diagram shows the implementation.
You'll need the URL of the deployed service to configure Azure AD with the requi
### Part 2 - Configure the API
-Application settings can be [configured in App Service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the Rest API:
+Application settings can be [configured in App Service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the REST API:
| Application settings | Source | Notes |
| :-- | :-- | :-- |
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
The Jumio integration includes the following components:
- Jumio: The service that takes the ID details provided by the user and verifies them.
-- Intermediate Rest API: The API that implements the integration between Azure AD B2C and the Jumio service.
+- Intermediate REST API: The API that implements the integration between Azure AD B2C and the Jumio service.
- Azure Blob storage: The service that supplies custom UI files to the Azure AD B2C policies.
Use the following PowerShell script to create the string:
### Configure the API
-You can [configure application settings in Azure App Service](../app-service/configure-common.md#configure-app-settings). With this method, you can securely configure settings without checking them into a repository. You'll need to provide the following settings to the Rest API:
+You can [configure application settings in Azure App Service](../app-service/configure-common.md#configure-app-settings). With this method, you can securely configure settings without checking them into a repository. You'll need to provide the following settings to the REST API:
| Application settings | Source | Notes |
| :-- | :-- | :-- |
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
The ThreatMetrix integration includes the following components:
- ThreatMetrix – The ThreatMetrix service takes inputs provided by the user and combines them with profiling information gathered from the user's machine to verify the security of the user interaction.
-- Custom Rest API – This API implements the integration between Azure AD B2C and the ThreatMetrix service.
+- Custom REST API ΓÇô This API implements the integration between Azure AD B2C and the ThreatMetrix service.
The following architecture diagram shows the implementation.
Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrati
### Part 2 - Configure the API
-Application settings can be [configured in the App service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the Rest API:
+Application settings can be [configured in the App service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the REST API:
| Application settings | Source | Notes |
| :-- | :-- | :-- |
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
The Onfido integration includes the following components:
- Onfido client – A configurable JavaScript client document collection utility deployed within other webpages. Collects the documents and does preliminary checks like document size and quality.
-- Intermediate Rest API – Provides endpoints for the Azure AD B2C tenant to communicate with the Onfido API service, handling data processing and adhering to the security requirements of both.
+- Intermediate REST API ΓÇô Provides endpoints for the Azure AD B2C tenant to communicate with the Onfido API service, handling data processing and adhering to the security requirements of both.
- Onfido API service ΓÇô The backend service provided by Onfido, which saves and verifies the documents provided by the user.
For more information about Onfido, see [Onfido API documentation](https://docume
#### Adding sensitive configuration settings
-Application settings can be configured in the [App service in Azure](../app-service/configure-common.md#configure-app-settings). The App service allows for settings to be securely configured without checking them into a repository. The Rest API needs the following settings:
+Application settings can be configured in the [App service in Azure](../app-service/configure-common.md#configure-app-settings). The App service allows for settings to be securely configured without checking them into a repository. The REST API needs the following settings:
| Application setting name | Source | Notes |
|:-|:-|:-|
active-directory-b2c Phone Authentication User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-authentication-user-flows.md
Here's an example showing how to add phone sign-up to a new user flow.
1. Under **Social identity providers**, select any other identity providers you want to allow for this user flow.

   > [!NOTE]
- > Multi-factor authentication (MFA) is disabled by default for sign-up user flows. You can enable MFA for a phone sign-up user flow, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor.
+ > [Multi-factor authentication (MFA)](multi-factor-authentication.md) is disabled by default for sign-up user flows. You can enable MFA for a phone sign-up user flow, but because a phone number is used as the primary identifier, email one-time passcode and Authenticator app - TOTP (preview) are the only options available for the second authentication factor.
1. In the **User attributes and token claims** section, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
active-directory-b2c Supported Azure Ad Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/supported-azure-ad-features.md
An Azure AD B2C tenant is different than an Azure Active Directory tenant, which
| [Conditional Access](../active-directory/conditional-access/overview.md) | Fully supported for administrative and user accounts. | A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [conditional access](conditional-access-user-flow.md).|
| [Premium P1](https://azure.microsoft.com/pricing/details/active-directory) | Fully supported for Azure AD premium P1 features. For example, [Password Protection](../active-directory/authentication/concept-password-ban-bad.md), [Hybrid Identities](../active-directory/hybrid/whatis-hybrid-identity.md), [Conditional Access](../active-directory/roles/permissions-reference.md#), [Dynamic groups](../active-directory/enterprise-users/groups-create-rule.md), and more. | Azure AD B2C uses [Azure AD B2C Premium P1 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P1. A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md).|
| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | Azure AD B2C uses [Azure AD B2C Premium P2 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P2. A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
+|[Data retention policy](../active-directory/reports-monitoring/reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data)|The data retention period for both audit and sign-in logs depends on your subscription. Learn more about [how long Azure AD stores reporting data](../active-directory/reports-monitoring/reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data).|Sign-in and audit logs are retained for only **seven (7) days**. If you require a longer retention period, use [Azure Monitor](azure-monitor.md).|
> [!NOTE]
> **Other Azure resources in your tenant:** <br>In an Azure AD B2C tenant, you can't provision other Azure resources such as virtual machines, Azure web apps, or Azure functions. You must create these resources in your Azure AD tenant.
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Previously updated : 10/18/2021 Last updated : 03/01/2022 zone_pivot_groups: b2c-policy-type
Add the application IDs to the extensions file *TrustFrameworkExtensions.xml*.
## Add Facebook as an identity provider
-The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook is *not* required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy.
+The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook is *not* required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy. If you don't need to enable federated social login, use the **LocalAccounts** starter pack instead, and skip the [Add Facebook as an identity provider](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-facebook-as-an-identity-provider) section.
### Create Facebook application
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
Previously updated : 10/08/2021 Last updated : 03/01/2022
Extension attributes can only be registered on an application object, even thoug
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **All applications**.
-1. Select the `b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.` application.
+1. Select the **b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.** application.
1. Copy the following identifiers to your clipboard and save them:
   * **Application ID**. Example: `11111111-1111-1111-1111-111111111111`.
   * **Object ID**. Example: `22222222-2222-2222-2222-222222222222`.
The following example demonstrates the use of a custom attribute in Azure AD B2C
## Using custom attribute with MS Graph API
-Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application. Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
+[Microsoft Graph API][ms-graph-api] supports creating and updating a user with extension attributes. Extension attributes in the Microsoft Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` [application](#azure-ad-b2c-extensions-app). Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example, the Microsoft Graph API identifies an extension attribute `loyaltyId` in Azure AD B2C as `extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId`.
-```json
-"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId": "212342"
-```
+Learn how to [interact with resources in your Azure AD B2C tenant](microsoft-graph-operations.md#user-management) using Microsoft Graph API.
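+
+For illustration, a minimal sketch of setting such an attribute on a user (assumes an already-acquired Graph access token with `User.ReadWrite.All`; the attribute name reuses the `loyaltyId` example above, and the function name is hypothetical):
+
+```typescript
+// Sketch only: PATCH a user's extension attribute via Microsoft Graph.
+async function setLoyaltyId(accessToken: string, userObjectId: string): Promise<void> {
+  const res = await fetch(`https://graph.microsoft.com/v1.0/users/${userObjectId}`, {
+    method: "PATCH",
+    headers: {
+      Authorization: `Bearer ${accessToken}`,
+      "Content-Type": "application/json",
+    },
+    body: JSON.stringify({
+      extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId: "212342",
+    }),
+  });
+  if (!res.ok) throw new Error(`Graph PATCH failed: ${res.status}`);
+}
+```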
+
## Remove extension attribute
Unlike built-in attributes, extension/custom attributes can be removed. The exte
::: zone pivot="b2c-user-flow"
-Use the following steps to remove extension/custom attribute from a user flow:
+Use the following steps to remove an extension/custom attribute from a user flow in your Azure AD B2C tenant:
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
2. Make sure you're using the directory that contains your Azure AD B2C tenant:
- 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the Directory name list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **User attributes**, and then select the attribute you want to delete.
To remove a custom attribute, use [MS Graph API](microsoft-graph-operations.md),
## Next steps

Follow the guidance for how to [add claims and customize user input using custom policies](configure-user-input.md). This sample uses a built-in claim 'city'. To use a custom attribute, replace 'city' with your own custom attributes.
+
+<!-- LINKS -->
+[ms-graph]: /graph/
+[ms-graph-api]: /graph/api/overview
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Let's cover each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled.":::
-1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
+1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
The endpoint performs mutual authentication and requests the client certificate as part of the TLS handshake. You will see an entry for this request in the Sign-in logs. There is a [known issue](#known-issues) where User ID is displayed instead of Username.
For the next test scenario, configure the authentication policy where the Issuer
- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
- [FAQ](certificate-based-authentication-faq.yml)
- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated: 02/16/2022 Last updated: 03/01/2022
Before combined registration, users registered authentication methods for Azure
> [!NOTE]
> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date, tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
This article outlines what combined security registration is. To get started with combined security registration, see the following article:
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
+
+ Title: Azure AD feature availability in Azure Government
+description: Learn which Azure AD features are available in Azure Government.
+Last updated: 02/28/2022
+# Cloud feature availability
+
+
+The following table lists Azure AD feature availability in Azure Government.
+
+|Service | Feature | Availability |
+|:--|:--|:--:|
+|**Authentication, single sign-on, and MFA**|||
+||Cloud authentication (Pass-through authentication, password hash synchronization) | &#x2705; |
+|| Federated authentication (Active Directory Federation Services or federation with other identity providers) | &#x2705; |
+|| Single sign-on (SSO) unlimited | &#x2705; |
|| Multifactor authentication (MFA) | Hardware OATH tokens are not available. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based on the user's current IP address. Microsoft Authenticator only shows GUID and not UPN for compliance reasons. |
+|| Passwordless (Windows Hello for Business, Microsoft Authenticator, FIDO2 security key integrations) | &#x2705; |
+|| Service-level agreement | &#x2705; |
+|**Applications access**|||
+|| SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | &#x2705; |
+|| Group assignment to applications | &#x2705; |
+|| Cloud app discovery (Microsoft Cloud App Security) | &#x2705; |
+|| Application Proxy for on-premises, header-based, and Integrated Windows Authentication | &#x2705; |
+|| Secure hybrid access partnerships (Kerberos, NTLM, LDAP, RDP, and SSH authentication) | &#x2705; |
+|**Authorization and Conditional Access**|||
+|| Role-based access control (RBAC) | &#x2705; |
+|| Conditional Access | &#x2705; |
+|| SharePoint limited access | &#x2705; |
+|| Session lifetime management | &#x2705; |
+|| Identity Protection (vulnerabilities and risky accounts) | See [Identity protection](#identity-protection) below. |
+|| Identity Protection (risk events investigation, SIEM connectivity) | See [Identity protection](#identity-protection) below. |
+|**Administration and hybrid identity**|||
+|| User and group management | &#x2705; |
+|| Advanced group management (Dynamic groups, naming policies, expiration, default classification) | &#x2705; |
+|| Directory synchronization – Azure AD Connect (sync and cloud sync) | &#x2705; |
+|| Azure AD Connect Health reporting | &#x2705; |
+|| Delegated administration – built-in roles | &#x2705; |
+|| Global password protection and management – cloud-only users | &#x2705; |
+|| Global password protection and management – custom banned passwords, users synchronized from on-premises Active Directory | &#x2705; |
+|| Microsoft Identity Manager user client access license (CAL) | &#x2705; |
+|**End-user self-service**|||
+|| Application launch portal (My Apps) | &#x2705; |
+|| User application collections in My Apps | &#x2705; |
+|| Self-service account management portal (My Account) | &#x2705; |
+|| Self-service password change for cloud users | &#x2705; |
+|| Self-service password reset/change/unlock with on-premises write-back | &#x2705; |
+|| Self-service sign-in activity search and reporting | &#x2705; |
+|| Self-service group management (My Groups) | &#x2705; |
+|| Self-service entitlement management (My Access) | &#x2705; |
+|**Identity governance**|||
+|| Automated user provisioning to apps | &#x2705; |
+|| Automated group provisioning to apps | &#x2705; |
+|| HR-driven provisioning | Partial. See [HR-provisioning apps](#hr-provisioning-apps). |
+|| Terms of use attestation | &#x2705; |
+|| Access certifications and reviews | &#x2705; |
+|| Entitlement management | &#x2705; |
+|| Privileged Identity Management (PIM), just-in-time access | &#x2705; |
+|**Event logging and reporting**|||
+|| Basic security and usage reports | &#x2705; |
+|| Advanced security and usage reports | &#x2705; |
+|| Identity Protection: vulnerabilities and risky accounts | &#x2705; |
+|| Identity Protection: risk events investigation, SIEM connectivity | &#x2705; |
+|**Frontline workers**|||
+|| SMS sign-in | Feature not available. |
+|| Shared device sign-out | Enterprise state roaming for Windows 10 devices is not available. |
+|| Delegated user management portal (My Staff) | Feature not available. |
+
+## Identity protection
+
+| Risk Detection | Availability |
+|-|:--:|
+|Leaked credentials (MACE) | &#x2705; |
+|Azure AD threat intelligence | Feature not available. |
+|Anonymous IP address | &#x2705; |
+|Atypical travel | &#x2705; |
+|Anomalous Token | Feature not available. |
+|Token Issuer Anomaly| Feature not available. |
+|Malware linked IP address | &#x2705; |
+|Suspicious browser | &#x2705; |
+|Unfamiliar sign-in properties | &#x2705; |
+|Admin confirmed user compromised | &#x2705; |
+|Malicious IP address | &#x2705; |
+|Suspicious inbox manipulation rules | &#x2705; |
+|Password spray | &#x2705; |
+|Impossible travel | &#x2705; |
+|New country | &#x2705; |
+|Activity from anonymous IP address | &#x2705; |
+|Suspicious inbox forwarding | &#x2705; |
+|Additional risk detected | &#x2705; |
+
+## HR-provisioning apps
+
+| HR-provisioning app | Availability |
+|-|:--:|
+|Workday to Azure AD User Provisioning | &#x2705; |
+|Workday Writeback | &#x2705; |
+|SuccessFactors to Azure AD User Provisioning | &#x2705; |
+|SuccessFactors to Writeback | &#x2705; |
+|Provisioning agent configuration and registration with Gov cloud tenant| Works with a special undocumented command-line invocation:<br> `AADConnectProvisioningAgent.Installer.exe ENVIRONMENTNAME=AzureUSGovernment` |
+
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 02/23/2022 Last updated : 02/28/2022
When a user responds to an MFA push notification using Microsoft Authenticator,
During self-service password reset, Microsoft Authenticator notification will show a number that the user will need to type in their Authenticator app notification. This number will only be seen to users who have been enabled for number matching.
->[!NOTE]
->Number matching for admin roles during SSPR is pending and unavailable for a couple days.
-
### Combined registration

When a user goes through combined registration to set up Microsoft Authenticator, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification.
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 02/11/2022 Last updated : 02/28/2022
The nudge will not appear on mobile devices that run Android or iOS.
## Frequently asked questions
-**Will this feature be available for MFA Server?**
-No. This feature will be available only for users using Azure MFA.
+**Is the registration campaign available for MFA Server?**
+
+No. This feature is available only for users using Azure MFA.
+
+**Can users be nudged within an application?**
+
+The nudge is available only in browsers, not in applications.
**How long will the campaign run for?**

You can use the APIs to enable the campaign for as long as you like. Whenever you want to be done running the campaign, simply use the APIs to disable the campaign.

**Can each group of users have a different snooze duration?**

No. The snooze duration for the prompt is a tenant-wide setting and applies to all groups in scope.

**Can users be nudged to set up passwordless phone sign-in?**

The feature aims to empower admins to get users set up with MFA using the Authenticator app, not passwordless phone sign-in.

**Will a user who has a third-party authenticator app set up see the nudge?**

If this user doesn't have the Microsoft Authenticator app set up for push notifications and is enabled for it by policy, yes, the user will see the nudge.
-**Will a user who has a Microsoft Authenticator app setup only for TOTP codes see the nudge?** Yes. If the Microsoft Authenticator app is not set up for push notifications and the user is enabled for it by policy, yes, the user will see the nudge.
+**Will a user who has a Microsoft Authenticator app set up only for TOTP codes see the nudge?**
+
+Yes. If the Microsoft Authenticator app is not set up for push notifications and the user is enabled for it by policy, the user will see the nudge.
**If a user just went through MFA registration, will they be nudged in the same sign-in session?**

No. To provide a good user experience, users will not be nudged to set up the Authenticator in the same session that they registered other authentication methods.

**Can I nudge my users to register another authentication method?**

No. The feature, for now, aims to nudge users to set up the Microsoft Authenticator app only.

**Is there a way for me to hide the snooze option and force my users to set up the Authenticator app?**

There is no way to hide the snooze option on the nudge. You can set the snoozeDuration to 0, which will ensure that users will see the nudge during each MFA attempt.

**Will I be able to nudge my users if I am not using Azure MFA?**

No. The nudge will only work for users who are doing MFA using the Azure MFA service.

**Will Guest/B2B users in my tenant be nudged?**

Yes. If they have been scoped for the nudge using the policy.
-**What if the user closes the browser?** It's the same as snoozing.
+**What if the user closes the browser?**
+
+It's the same as snoozing.
## Next steps
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
# Enable passwordless security key sign-in to on-premises resources by using Azure AD
-This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Azure Active Directory (Azure AD)-joined* and *hybrid Azure AD-joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys, or with [Windows Hello for Business Cloud trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust.md)
+This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Azure Active Directory (Azure AD)-joined* and *hybrid Azure AD-joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys, or with [Windows Hello for Business Cloud trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust).
## Use SSO to sign in to on-premises resources by using FIDO2 keys
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
A major step in every multifactor authentication deployment is getting users reg
### Combined registration for SSPR and Azure AD MFA
+> [!NOTE]
+> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to disable the combined registration experience.
+ We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md). It's critical to inform users about upcoming changes, registration requirements, and any necessary user actions. We provide [communication templates](https://aka.ms/mfatemplates) and [user documentation](https://support.microsoft.com/account-billing/set-up-security-info-from-a-sign-in-page-28180870-c256-4ebf-8bd7-5335571bf9a8) to prepare your users for the new experience and help to ensure a successful rollout. Send users to https://myprofile.microsoft.com to register by selecting the **Security Info** link on that page.
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Previously updated: 06/28/2021 Last updated: 03/01/2022
Before combined registration, users registered authentication methods for Azure
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date, tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Before deploying SSPR, you may opt to determine the number and the average cost
### Combined registration for SSPR and Azure AD Multi-Factor Authentication
+> [!NOTE]
+> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to disable the combined registration experience.
+ We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md). It's critical to inform users about upcoming changes, registration requirements, and any necessary user actions. We provide [communication templates](https://aka.ms/mfatemplates) and [user documentation](https://support.microsoft.com/account-billing/set-up-security-info-from-a-sign-in-page-28180870-c256-4ebf-8bd7-5335571bf9a8) to prepare your users for the new experience and help to ensure a successful rollout. Send users to https://myprofile.microsoft.com to register by selecting the **Security Info** link on that page.
active-directory Cloudknox Howto Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-add-remove-role-task.md
This article describes how you can add and remove roles and tasks for Microsoft
## View permissions 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**.
-1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP**.
+1. To search for more parameters, you can make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
1. Select **Apply**. CloudKnox displays a list of groups, users, and service accounts that match your criteria. 1. In **Enter a username**, enter or select a user.
-1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
1. Make a selection from the results list.
- The table displays the **Username** **Domain/Account**, **Source**, **Resource** and **Current role**.
+ The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current Role**.
## Add a role 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list. 1. To attach a role, select **Add Role**.
-1. In the **Add role** page, from the **Available roles** list, select the plus sign **(+)** to move the role to the **Selected roles** list.
+1. In the **Add Role** page, from the **Available Roles** list, select the plus sign **(+)** to move the role to the **Selected Roles** list.
1. When you have finished adding roles, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
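The article doesn't show the contents of the generated script. For orientation, here is a hypothetical Python sketch of what an equivalent direct change looks like for an Azure identity, using the `azure-mgmt-authorization` package; every ID and scope below is a placeholder, and this is not CloudKnox output.

```python
# Hypothetical sketch: assign an Azure role to an identity directly.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope at which the role is granted (a resource group here).
scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs a unique name (a GUID)
    {
        "role_definition_id": (
            f"/subscriptions/{subscription_id}/providers/"
            "Microsoft.Authorization/roleDefinitions/<role-definition-guid>"
        ),
        "principal_id": "<object-id-of-the-group-user-or-app>",
    },
)

# Removing a role is the symmetric call:
# client.role_assignments.delete(scope, "<role-assignment-name>")
```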
This article describes how you can add and remove roles and tasks for Microsoft
## Remove a role 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To remove a role, select **Remove role**.
-1. In the **Remove role** page, from the **Available roles** list, select the plus sign **(+)** to move the role to the **Selected roles** list.
+1. To remove a role, select **Remove Role**.
+1. In the **Remove Role** page, from the **Available Roles** list, select the plus sign **(+)** to move the role to the **Selected Roles** list.
1. When you have finished selecting roles, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can add and remove roles and tasks for Microsoft
## Add a task 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To attach a role, select **Add tasks**.
-1. In the **Add tasks** page, from the **Available tasks** list, select the plus sign **(+)** to move the task to the **Selected tasks** list.
+1. To attach a task, select **Add Tasks**.
+1. In the **Add Tasks** page, from the **Available Tasks** list, select the plus sign **(+)** to move the task to the **Selected Tasks** list.
1. When you have finished adding tasks, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can add and remove roles and tasks for Microsoft
## Remove a task 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To remove a task, select **Remove tasks**.
-1. In the **Remove tasks** page, from the **Available tasks** list, select the plus sign **(+)** to move the task to the **Selected tasks** list.
+1. To remove a task, select **Remove Tasks**.
+1. In the **Remove Tasks** page, from the **Available Tasks** list, select the plus sign **(+)** to move the task to the **Selected Tasks** list.
1. When you have finished selecting tasks, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
active-directory Cloudknox Howto Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-attach-detach-permissions.md
This article describes how you can attach and detach permissions for users, role
## View permissions 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **AWS**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Authorization System Type** dropdown, select **AWS**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
1. From the **Search For** dropdown, select **Group**, **User**, or **Role**.
-1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. To search for more parameters, you can make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
1. Select **Apply**. CloudKnox displays a list of users, roles, or groups that match your criteria. 1. In **Enter a username**, enter or select a user. 1. In **Enter a group name**, enter or select a group, then select **Apply**. 1. Make a selection from the results list.
- The table displays the related **Username** **Domain/Account**, **Source** and **Policy name**.
+ The table displays the related **Username**, **Domain/Account**, **Source**, and **Policy Name**.
## Attach policies 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **AWS**.
+1. From the **Authorization System Type** dropdown, select **AWS**.
1. In **Enter a username**, enter or select a user.
-1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
1. Make a selection from the results list.
-1. To attach a policy, select **Attach policies**.
-1. In the **Attach policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. To attach a policy, select **Attach Policies**.
+1. In the **Attach Policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
1. When you have finished adding policies, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
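For orientation, the equivalent direct AWS change can be pictured with boto3. This is a hypothetical sketch, not the script CloudKnox generates; the user name and policy ARN are placeholders, and AWS credentials are assumed to be configured.

```python
# Hypothetical sketch: attach a managed policy to an AWS IAM user.
import boto3

iam = boto3.client("iam")

# Attach a managed policy to a user (use attach_group_policy or
# attach_role_policy for groups and roles).
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn="arn:aws:iam::123456789012:policy/ExamplePolicy",
)

# Detaching is the symmetric call:
# iam.detach_user_policy(
#     UserName="example-user",
#     PolicyArn="arn:aws:iam::123456789012:policy/ExamplePolicy",
# )
```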
This article describes how you can attach and detach permissions for users, role
## Detach policies 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **AWS**.
+1. From the **Authorization System Type** dropdown, select **AWS**.
1. In **Enter a username**, enter or select a user.
-1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
1. Make a selection from the results list.
-1. To remove a policy, select **Detach policies**.
-1. In the **Detach policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. To remove a policy, select **Detach Policies**.
+1. In the **Detach Policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
1. When you have finished selecting policies, select **Submit**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
active-directory Cloudknox Howto Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-alert-trigger.md
This article describes how you can create and view activity alerts and alert tri
1. From the **Alert Name** dropdown, select an alert. 1. From the **Date** dropdown, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**.
- If you select **Custom range**, select date and time settings, and then select **Apply**.
+ If you select **Custom Range**, select date and time settings, and then select **Apply**.
1. To view the alert, select **Apply**. The **Alerts** table displays information about your alert.
This article describes how you can create and view activity alerts and alert tri
## View activity alert triggers 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. In the **Activity** tab, select the **Alert triggers** subtab.
+1. In the **Activity** tab, select the **Alert Triggers** subtab.
1. From the **Status** dropdown, select **All**, **Activated** or **Deactivated**, then select **Apply**. The **Triggers** table displays the following information:
This article describes how you can create and view activity alerts and alert tri
- Select a number in this column to view information about the user.
- - **Created by**: The email address of the user who created the alert trigger.
- - **Modified by**: The email address of the user who last modified the alert trigger.
- - **Last updated**: The date and time the alert trigger was last updated.
+ - **Created By**: The email address of the user who created the alert trigger.
+ - **Modified By**: The email address of the user who last modified the alert trigger.
+ - **Last Updated**: The date and time the alert trigger was last updated.
- **Subscription**: A switch that displays if the alert is **On** or **Off**. - If the column displays **Off**, the current user isn't subscribed to that alert. Switch the toggle to **On** to subscribe to the alert.
This article describes how you can create and view activity alerts and alert tri
- **Rename**: Enter the new name of the alert trigger, and then select **Save**. - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users. - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
- - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger and their **User status**.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger and their **User Status**.
- **Delete**: Delete the alert. If the **Subscription** is **Off**, the following options are available: - **View**: View details of the alert trigger.
- - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger and their **User status**.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger and their **User Status**.
- **Duplicate**: Create a duplicate copy of the selected alert trigger.
active-directory Cloudknox Howto Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-approve-privilege-request.md
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you ca
## Create a request for permissions
-1. On the CloudKnox home page, select the **Remediation** tab, and then select the **My requests** subtab.
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **My Requests** subtab.
- The **My requests** subtab displays the following options:
+ The **My Requests** subtab displays the following options:
- **Pending**: A list of requests you've made but that haven't yet been reviewed. - **Approved**: A list of requests that have been reviewed and approved by the approver. These requests have either already been activated or are in the process of being activated. - **Processed**: A summary of the requests you've created that have been approved (**Done**), **Rejected**, and requests that have been **Canceled**.
-1. To create a request for permissions, select **New request**.
+1. To create a request for permissions, select **New Request**.
1. In the **Roles/Tasks** page:
- 1. From the **Select an authorization system type** dropdown, select the authorization system type you want to access: **AWS**, **Azure** or **GCP**.
- 1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+ 1. From the **Authorization System Type** dropdown, select the authorization system type you want to access: **AWS**, **Azure** or **GCP**.
+ 1. From the **Authorization System** dropdown, select the accounts you want to access.
1. From the **Identity** dropdown, select the identity on whose behalf you're requesting access. - If the identity you select is a Security Assertion Markup Language (SAML) user, select the user's role in **Role**, because a SAML user accesses the system by assuming a role. - If the identity you select is a local user, to select the policies you want:
- 1. Select **Request policy(s)**.
- 1. In **Available policies**, select the policies you want.
+ 1. Select **Request Policy(s)**.
+ 1. In **Available Policies**, select the policies you want.
1. To select a specific policy, select the plus sign, and then find and select the policy you want. The policies you've selected appear in the **Selected policies** box.
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you ca
1. If you selected **AWS**, the **Scope** page appears.
- 1. In **Select scope**, select:
+ 1. In **Select Scope**, select:
- **All Resources** - **Specific Resources**, and then select the resources you want. - **No Resources**
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you ca
- **Monthly** 1. Select **Submit**.
- The following message appears: **Your request has been successfully submitted.**
+ The following message appears: **Your Request Has Been Successfully Submitted.**
The request you submitted is now listed in **Pending Requests**.
active-directory Cloudknox Howto Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-custom-queries.md
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
## Create a custom query
-1. In the **Audit** dashboard, in the **New Query** subtab, select **Authorization system type**, and then select the authorization systems you want to search: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. In the **Audit** dashboard, in the **New Query** subtab, select **Authorization System Type**, and then select the authorization systems you want to search: Amazon Web Services (**AWS**), Microsoft **Azure**, Google Cloud Platform (**GCP**), or Platform (**Platform**).
1. Select the authorization systems you want to search from the **List** and **Folders** box, and then select **Apply**. 1. In the **New Query** box, enter your query parameters, and then select **Add**.
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
Repeat this step for the second and third box to complete entering the parameters. 1. To change your query as you're creating it, select **Edit** (the pencil icon), and then change the query parameters. 1. To change the parameter options, select the down arrow in each box to display a dropdown of available selections. Then select the option you want.
-1. To discard your selections, select **Reset query** for the parameter you want to change, and then make your selections again.
+1. To discard your selections, select **Reset Query** for the parameter you want to change, and then make your selections again.
1. When you're ready to run your query, select **Search**. 1. To save the query, select **Save**.
- CloudKnox saves the query and adds it to the **Saved queries** list.
+ CloudKnox saves the query and adds it to the **Saved Queries** list.
## Save the query under a new name
-1. In the **Audit** dashboard, select the ellipses menu **(…)** on the far right and select **Save as**.
+1. In the **Audit** dashboard, select the ellipsis menu **(…)** on the far right and select **Save As**.
2. Enter a new name for the query, and then select **Save**.
- CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved queries** list.
+ CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved Queries** list.
## View a saved query
-1. In the **Audit** dashboard, select the down arrow next to **Saved queries**.
+1. In the **Audit** dashboard, select the down arrow next to **Saved Queries**.
A list of saved queries appears. 2. Select the query you want to open.
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
CloudKnox displays details of the query in the **Activity** table. Select a query to see its details:
- - The **Identity details**.
+ - The **Identity Details**.
- The **Domain** name.
- - The **Resource name** and **Resource type**.
- - The **Task name**.
+ - The **Resource Name** and **Resource Type**.
+ - The **Task Name**.
- The **Date**.
- - The **IP address**.
- - The **Authorization system**.
+ - The **IP Address**.
+ - The **Authorization System**.
## View a raw events summary
-1. In the **Audit** dashboard, select **View** (the eye icon) to open the **Raw events summary** box.
+1. In the **Audit** dashboard, select **View** (the eye icon) to open the **Raw Events Summary** box.
- The **Raw events summary** box displays **Identity details**, the **Task name**, and the script for your query.
+ The **Raw Events Summary** box displays **Username or Role Session Name**, the **Task Name**, and the script for your query.
1. Select **Copy** to copy the script. 1. Select **X** to close the **Raw Events Summary** box.
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
1. In the **Audit** dashboard, load the query you want to delete. 2. Select **Delete**.
- CloudKnox deletes the query. Deleted queries don't display in the **Saved queries** list.
+ CloudKnox deletes the query. Deleted queries don't display in the **Saved Queries** list.
## Rename a query
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
2. Select the ellipsis menu **(…)** on the far right, and select **Rename**. 3. Enter a new name for the query, and then select **Save**.
- CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved queries** list.
+ CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved Queries** list.
## Duplicate a query 1. In the **Audit** dashboard, load the query you want to duplicate. 2. Select the ellipsis menu **(…)** on the far right, and then select **Duplicate**.
- CloudKnox creates a copy of the query. Both the copy of the query and the original query display in the **Saved queries** list.
+ CloudKnox creates a copy of the query. Both the copy of the query and the original query display in the **Saved Queries** list.
You can rename the original or copy of the query, change it, and save it without changing the other query.
This article describes how you can use the **Audit** dashboard in CloudKnox Perm
- For information on how to view how users access information, see [Use queries to see how users access information](cloudknox-ui-audit-trail.md). - For information on how to filter and view user activity, see [Filter and query user activity](cloudknox-product-audit-trail.md).-- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Howto Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-group-based-permissions.md
This article describes how you can create and manage group-based permissions in
1. Select the permission setting you want:
- - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
- - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
- **Custom** allows you to set **View**, **Control**, and **Approve** permissions for the authorization system types that you select. 1. Select **Next**.
-1. If you selected **Admin for all authorization system types**
+1. If you selected **Admin for all Authorization System Types**
- Select the identities for each authorization system on whose behalf you would like members of this group to make requests.
-1. If you selected **Admin for selected authorization system types**
- - Select **Viewer**, **Controller**, or **Approver** for the **Authorization system types** you want.
+1. If you selected **Admin for selected Authorization System Types**
+ - Select **Viewer**, **Controller**, or **Approver** for the **Authorization System Types** you want.
- Select **Next**, and then select the identities for each authorization system on whose behalf you would like members of this group to make requests.
-1. If you select **Custom**, select the **Authorization system types** you want.
+1. If you select **Custom**, select the **Authorization System Types** you want.
- Select **Viewer**, **Controller**, or **Approver** for the **Authorization Systems** you want. - Select **Next**, and then select the identities for each authorization system on whose behalf you would like members of this group to make requests.
-1. Select **Save**, The following message appears: **New group has been created successfully.**
+1. Select **Save**. The following message appears: **New Group Has been Created Successfully.**
1. To see the group you created in the **Groups** table, refresh the page. ## Next steps
active-directory Cloudknox Howto Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-role-policy.md
This article describes how you can use the **Remediation** dashboard in CloudKno
1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab. 1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
-1. Select **Create policy**.
+1. Select **Create Policy**.
1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings. - To change the settings, make a selection from the dropdown.
-1. Under **How would you like to create the policy?**, select the required option:
-
- - **Activity of user(s)**: Allows you to create a policy based on user activity.
- - **Activity of group(s)**: Allows you to create a policy based on the aggregated activity of all the users belonging to the group(s).
- - **Activity of resource(s)**: Allows you to create a policy based on the activity of a resource, for example, an EC2 instance.
- - **Activity of role**: Allows you to create a policy based on the aggregated activity of all the users that assumed the role.
- - **Activity of tag(s)**: Allows you to create a policy based on the aggregated activity of all the tags.
- - **Activity of Lambda function**: Allows you to create a new policy based on the Lambda function.
- - **From existing policy**: Allows you to create a new policy based on an existing policy.
- - **New policy**: Allows you to create a new policy from scratch.
-1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. Under **How Would You Like To Create The Policy?**, select the required option:
+
+ - **Activity of User(s)**: Allows you to create a policy based on user activity.
+ - **Activity of Group(s)**: Allows you to create a policy based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of Resource(s)**: Allows you to create a policy based on the activity of a resource, for example, an EC2 instance.
+ - **Activity of Role**: Allows you to create a policy based on the aggregated activity of all the users that assumed the role.
+ - **Activity of Tag(s)**: Allows you to create a policy based on the aggregated activity of all the tags.
+ - **Activity of Lambda Function**: Allows you to create a new policy based on the Lambda function.
+ - **From Existing Policy**: Allows you to create a new policy based on an existing policy.
+ - **New Policy**: Allows you to create a new policy from scratch.
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
1. Depending on your preference, select or deselect **Include Access Advisor data**. 1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
This article describes how you can use the **Remediation** dashboard in CloudKno
1. In **Request Conditions**, select **JSON**. 1. In **Effect**, select **Allow** or **Deny**, and then select **Next**. 1. In **Policy name:**, enter a name for your policy.
-1. To add another statement to your policy, select **Add statement**, and then, from the list of **Statements**, select a statement.
+1. To add another statement to your policy, select **Add Statement**, and then, from the list of **Statements**, select a statement.
1. Review your **Task**, **Resources**, **Request Conditions**, and **Effect** settings, and then select **Next**.
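The pieces assembled here (tasks, resources, a JSON request condition, and an effect) map directly onto an AWS IAM policy statement. The following hypothetical sketch, with placeholder values rather than CloudKnox output, shows that structure and how such a document could be created with boto3.

```python
# Hypothetical sketch: an IAM policy with an Effect, actions (tasks),
# resources, and a JSON request condition, created via boto3.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                              # the Effect step
            "Action": ["s3:GetObject"],                     # the selected tasks
            "Resource": ["arn:aws:s3:::example-bucket/*"],  # the resources
            "Condition": {                                  # the request condition
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-policy",
    PolicyDocument=json.dumps(policy_document),
)
```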
This article describes how you can use the **Remediation** dashboard in CloudKno
1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself. If your controller is enabled, skip this step.
-1. Select **Split policy**, and then select **Submit**.
+1. Select **Split Policy**, and then select **Submit**.
A message confirms that your policy has been submitted for creation.
This article describes how you can use the **Remediation** dashboard in CloudKno
1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab. 1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
-1. Select **Create role**.
+1. Select **Create Role**.
1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings. - To change the settings, select the box and make a selection from the dropdown.
-1. Under **How would you like to create the role?**, select the required option:
+1. Under **How Would You Like To Create The Role?**, select the required option:
- - **Activity of user(s)**: Allows you to create a role based on user activity.
- - **Activity of group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
- - **Activity of app(s)**: Allows you to create a role based on the aggregated activity of all apps.
- - **From existing role**: Allows you to create a new role based on an existing role.
- - **New role**: Allows you to create a new role from scratch.
+ - **Activity of User(s)**: Allows you to create a role based on user activity.
+ - **Activity of Group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of App(s)**: Allows you to create a role based on the aggregated activity of all apps.
+ - **From Existing Role**: Allows you to create a new role based on an existing role.
+ - **New Role**: Allows you to create a new role from scratch.
-1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
1. Depending on your preference:
- - Select or deselect **Ignore non-Microsoft read actions**.
- - Select or deselect **Include read-only tasks**.
+ - Select or deselect **Ignore Non-Microsoft Read Actions**.
+ - Select or deselect **Include Read-Only Tasks**.
1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**. 1. On the **Tasks** page, in **Role name:**, enter a name for your role.
This article describes how you can use the **Remediation** dashboard in CloudKno
1. Select **Next**. 1. On the **Preview** page, review:
- - The list of selected **Actions** and **Not actions**.
+ - The list of selected **Actions** and **Not Actions**.
- The **JSON** or **Script** to confirm it's what you want. 1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself.
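The downloaded JSON is a standard Azure custom-role definition, in which **Actions** grant operations and **NotActions** carve out exclusions. The sketch below is a hypothetical, minimal example of that shape with placeholder values; a file like this could be deployed with `az role definition create --role-definition @custom-role.json`.

```python
# Hypothetical sketch: write a minimal Azure custom-role definition showing
# the Actions/NotActions structure. Values are placeholders.
import json

custom_role = {
    "Name": "Example Reader Role",
    "IsCustom": True,
    "Description": "Read storage accounts without listing their keys.",
    "Actions": ["Microsoft.Storage/storageAccounts/read"],
    "NotActions": ["Microsoft.Storage/storageAccounts/listKeys/action"],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

with open("custom-role.json", "w") as f:
    json.dump(custom_role, f, indent=2)
```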
This article describes how you can use the **Remediation** dashboard in CloudKno
1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab. 1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
-1. Select **Create role**.
+1. Select **Create Role**.
1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings. - To change the settings, select the box and make a selection from the dropdown.
-1. Under **How would you like to create the role?**, select the required option:
+1. Under **How Would You Like To Create The Role?**, select the required option:
- - **Activity of user(s)**: Allows you to create a role based on user activity.
- - **Activity of group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
- - **Activity of service account(s)**: Allows you to create a role based on the aggregated activity of all service accounts.
- - **From existing role**: Allows you to create a new role based on an existing role.
- - **New role**: Allows you to create a new role from scratch.
+ - **Activity of User(s)**: Allows you to create a role based on user activity.
+ - **Activity of Group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of Service Account(s)**: Allows you to create a role based on the aggregated activity of all service accounts.
+ - **From Existing Role**: Allows you to create a new role based on an existing role.
+ - **New Role**: Allows you to create a new role from scratch.
-1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
-1. If you selected **Activity of service account(s)** in the previous step, select or deselect **Collect activity across all GCP authorization systems.**
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. If you selected **Activity of Service Account(s)** in the previous step, select or deselect **Collect activity across all GCP Authorization Systems**.
1. From the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
This article describes how you can use the **Remediation** dashboard in CloudKno
- To add a whole category, select a category. - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items. 1. Select **Next**.
-1. In **Role name:**, enter a name for your role.
-1. To add another statement to your role, select **Add statement**, and then, from the list of **Statements**, select a statement.
-1. Review your **Task**, **Resources**, **Request Conditions**, and **Effect** settings, and then select **Next**.
- 1. On the **Preview** page, review: - The list of selected **Actions**.
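For GCP, the preview's list of selected **Actions** corresponds to the `includedPermissions` of a custom role. The following hypothetical sketch (placeholder values, not CloudKnox output) writes such a definition, which could then be deployed with `gcloud iam roles create exampleViewer --project=<project-id> --file=gcp-role.json`.

```python
# Hypothetical sketch: a minimal GCP custom-role definition whose
# includedPermissions list plays the part of the selected Actions.
import json

gcp_role = {
    "title": "Example Viewer Role",
    "description": "Read-only access to storage objects.",
    "stage": "GA",
    "includedPermissions": [
        "storage.objects.get",
        "storage.objects.list",
    ],
}

with open("gcp-role.json", "w") as f:
    json.dump(gcp_role, f, indent=2)
```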
active-directory Cloudknox Howto Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-rule.md
This article describes how to create a rule in the CloudKnox Permissions Managem
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
-1. In the **Autopilot** dashboard, select **New rule**.
-1. In the **Rule name** box, enter a name for your rule.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select **New Rule**.
+1. In the **Rule Name** box, enter a name for your rule.
1. Select **AWS**, **Azure**, **GCP**, and then select **Next**.
-1. Select **Authorization systems**, and then select **All** or the account names that you want.
+1. Select **Authorization Systems**, and then select **All** or the account names that you want.
1. From the **Folders** dropdown, select a folder, and then select **Apply**. To change your folder settings, select **Reset**. - The **Status** column displays if the authorization system is **Online** or **Offline**.
- - The **Controller** column displays if the controller is **Enabled** or **Not enabled**.
+ - The **Controller** column displays if the controller is **Enabled** or **Not Enabled**.
1. Select **Configure**, and then select the following parameters for your rule:
- - **Role created on is**: Select the duration in days.
- - **Role last used on is**: Select the duration in days when the role was last used.
- - **Cross account role**: Select **True** or **False**.
+ - **Role Created On Is**: Select the duration in days.
+ - **Role Last Used On Is**: Select the duration in days when the role was last used.
+ - **Cross Account Role**: Select **True** or **False**.
-1. Select **Mode**, and then, if you want recommendations to be generated and applied manually, select **On-demand**.
+1. Select **Mode**, and then, if you want recommendations to be generated and applied manually, select **On-Demand**.
1. Select **Save**.
- The following information displays in the **Autopilot rules** table:
+ The following information displays in the **Autopilot Rules** table:
- **Rule Name**: The name of the rule. - **State**: The status of the rule: idle (not being used) or active (being used).
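For orientation, the **Role Last Used On Is** test maps onto data AWS itself exposes. The following hypothetical Python sketch, which is not part of CloudKnox, uses boto3's `get_role` (whose response includes `RoleLastUsed`) to flag roles that haven't been used within a configured window; credentials are assumed to be configured.

```python
# Hypothetical sketch: find AWS roles unused for a given number of days,
# the same signal an Autopilot rule's "Role Last Used On Is" check uses.
from datetime import datetime, timedelta, timezone

import boto3

UNUSED_DAYS = 90
cutoff = datetime.now(timezone.utc) - timedelta(days=UNUSED_DAYS)

iam = boto3.client("iam")
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        # list_roles doesn't return last-used data, so fetch each role.
        detail = iam.get_role(RoleName=role["RoleName"])["Role"]
        last_used = detail.get("RoleLastUsed", {}).get("LastUsedDate")
        if last_used is None or last_used < cutoff:
            print(f"Candidate for cleanup: {role['RoleName']} (last used: {last_used})")
```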
active-directory Cloudknox Howto Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-notifications-rule.md
This article describes how to view notification settings for a rule in the Cloud
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
1. In the **Autopilot** dashboard, select a rule. 1. In the far right of the row, select the ellipsis **(...)**.
-1. To view notification settings for a rule, select **Notification settings**.
+1. To view notification settings for a rule, select **Notification Settings**.
CloudKnox displays a list of subscribed users. These users are signed up to receive notifications for the selected rule.
-1. To close the **Notification settings** box, select **Close**.
+1. To close the **Notification Settings** box, select **Close**.
## Next steps
active-directory Cloudknox Howto Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-recommendations-rule.md
This article describes how to generate and view rule recommendations in the Clou
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
1. In the **Autopilot** dashboard, select a rule. 1. In the far right of the row, select the ellipsis **(...)**.
-1. To generate recommendations for each user and the authorization system, select **Generate recommendations**.
+1. To generate recommendations for each user and the authorization system, select **Generate Recommendations**.
Only the user who created the selected rule can generate a recommendation. 1. View your recommendations in the **Recommendations** subtab.
This article describes how to generate and view rule recommendations in the Clou
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
1. In the **Autopilot** dashboard, select a rule. 1. In the far right of the row, select the ellipsis **(...)**.
-1. To view recommendations for each user and the authorization system, select **View recommendations**.
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
This article describes how to generate and view rule recommendations in the Clou
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
1. In the **Autopilot** dashboard, select a rule. 1. In the far right of the row, select the ellipsis **(...)**.
-1. To view recommendations for each user and the authorization system, select **View recommendations**.
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
-1. To apply a recommendation, select the **Apply recommendations** subtab, and then select a recommendation.
+1. To apply a recommendation, select the **Apply Recommendations** subtab, and then select a recommendation.
1. Select **Close** to close the **Recommendations** subtab. ## Unapply rule recommendations 1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
1. In the **Autopilot** dashboard, select a rule. 1. In the far right of the row, select the ellipsis **(...)**.
-1. To view recommendations for each user and the authorization system, select **View recommendations**.
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
-1. To remove a recommendation, select the **Unapply recommendations** subtab, and then select a recommendation.
+1. To remove a recommendation, select the **Unapply Recommendations** subtab, and then select a recommendation.
1. Select **Close** to close the **Recommendations** subtab.
This article describes how to generate and view rule recommendations in the Clou
- For more information about viewing rules, see [View roles in the Autopilot dashboard](cloudknox-ui-autopilot.md). - For information about creating rules, see [Create a rule](cloudknox-howto-create-rule.md).-- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
active-directory Cloudknox Howto Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-revoke-task-readonly-status.md
This article describes how you can revoke high-risk and unused tasks or assign r
## View an identity's permissions 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**.
-1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**.
+1. To search for more parameters, you can make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
1. Select **Apply**. CloudKnox displays a list of groups, users, and service accounts that match your criteria. 1. In **Enter a username**, enter or select a user.
-1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
1. Make a selection from the results list.
- The table displays the **Username** **Domain/Account**, **Source**, **Resource** and **Current role**.
+ The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current Role**.
## Revoke an identity's access to unused tasks 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To revoke an identity's access to tasks they aren't using, select **Revoke unused tasks**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. To revoke an identity's access to tasks they aren't using, select **Revoke Unused Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can revoke high-risk and unused tasks or assign r
## Revoke an identity's access to high-risk tasks 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To revoke an identity's access to high-risk tasks, select **Revoke high-risk tasks**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. To revoke an identity's access to high-risk tasks, select **Revoke High-Risk Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can revoke high-risk and unused tasks or assign r
## Revoke an identity's ability to delete tasks 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To revoke an identity's ability to delete tasks, select **Revoke delete tasks**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. To revoke an identity's ability to delete tasks, select **Revoke Delete Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can revoke high-risk and unused tasks or assign r
## Assign read-only status to an identity 1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
-1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
-1. From the **Select an authorization system** dropdown, select the accounts you want to access.
-1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
1. Make a selection from the results list.
-1. To assign read-only status to an identity, select **Assign read-only status**.
-1. When the following message displays: **Are you sure you want to change permissions?**, select:
+1. To assign read-only status to an identity, select **Assign Read-Only Status**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
- **Generate Script** to generate a script where you can manually add/remove the permissions you selected. - **Execute** to change the permission. - **Close** to cancel the action.
This article describes how you can revoke high-risk and unused tasks or assign r
- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md). - For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md). - For information on how to add and remove roles and tasks for Azure and GCP identities, see [Add and remove roles and tasks for Azure and GCP identities](cloudknox-howto-attach-detach-permissions.md).-- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
active-directory Cloudknox Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-audit-trail.md
If you have used filters before, the default filter is the last filter you selected.
1. To display the **Audit** dashboard, on the CloudKnox home page, select **Audit**.
-1. To select your authorization system type, in the **Authorization system type** box, select Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+1. To select your authorization system type, in the **Authorization System Type** box, select Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), Google Cloud Platform (**GCP**), or Platform (**Platform**).
-1. To select your authorization system, in the **Authorization system** box:
+1. To select your authorization system, in the **Authorization System** box:
- From the **List** subtab, select the accounts you want to use. - From the **Folders** subtab, select the folders you want to use.
If you have used filters before, the default filter is the last filter you selected.
There are several different query parameters you can configure individually or in combination. The query parameters and corresponding instructions are listed in the following sections. -- To create a new query, select **New query**.
+- To create a new query, select **New Query**.
- To view an existing query, select **View** (the eye icon). - To edit an existing query, select **Edit** (the pencil icon). - To delete a function line in a query, select **Delete** (the minus sign **-** icon).-- To create multiple queries at one time, select **Add new tab** to the right of the **Query** tabs that are displayed.
+- To create multiple queries at one time, select **Add New Tab** to the right of the **Query** tabs that are displayed.
You can open a maximum of six query tab pages at the same time. A message appears when you've reached the maximum.
There are several different query parameters you can configure individually or i
### Create a query with a date
-1. In the **New query** section, the default parameter displayed is **Date In "Last day"**.
+1. In the **New Query** section, the default parameter displayed is **Date In "Last day"**.
The first-line parameter always defaults to **Date** and can't be deleted.
The **Operator** menu displays the following options depending on the identity y
1. In the **New query** section, select **Add**.
-1. From the menu, select **Resource name**.
+1. From the menu, select **Resource Name**.
1. From the **Operator** menu, select the required option.
The **Operator** menu displays the following options depending on the identity y
### Create a query with a resource type
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
-1. From the menu, select **Resource type**.
+1. From the menu, select **Resource Type**.
1. From the **Operator** menu, select the required option.
The **Operator** menu displays the following options depending on the identity y
### Create a query with a task name
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
-1. From the menu, select **Task name**.
+1. From the menu, select **Task Name**.
1. From the **Operator** menu, select the required option.
The **Operator** menu displays the following options depending on the identity y
### Create a query with a state
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
1. From the menu, select **State**. 1. From the **Operator** menu, select the required option.
- - **Is** / **Is not**: Allows a user to select in the value field and select **Authorization failure**, **Error**, or **Success**.
+ - **Is** / **Is not**: Allows a user to select in the value field and select **Authorization Failure**, **Error**, or **Success**.
1. To add criteria to this section, select **Add**.
-1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with State **Authorization failure**.
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with State **Authorization Failure**.
1. Select the **Add** icon, select **Or** with **Is**, and then select **Success**. The resulting query matches events whose state is either **Authorization Failure** or **Success**, as sketched below.
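The **And**/**Or** composition above is easiest to reason about as an ordinary boolean predicate. The following C# sketch is illustrative only: the `AuditEvent` record and sample data are hypothetical stand-ins for the rows the Audit dashboard displays, not a CloudKnox API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape for an audit row; CloudKnox doesn't expose this type.
// It only mirrors the State and Task Name columns described in this article.
record AuditEvent(string TaskName, string State, DateTime Date);

class StateQueryExample
{
    static void Main()
    {
        var events = new List<AuditEvent>
        {
            new("iam:DeleteRole", "Authorization Failure", DateTime.UtcNow),
            new("s3:PutObject", "Success", DateTime.UtcNow),
            new("ec2:RunInstances", "Error", DateTime.UtcNow),
        };

        // Equivalent of: State Is "Authorization Failure" OR State Is "Success".
        // An "And" statement in the query builder would combine predicates
        // with && instead of ||.
        var matches = events.Where(e =>
            e.State == "Authorization Failure" || e.State == "Success");

        foreach (var e in matches)
        {
            Console.WriteLine($"{e.Date:u}  {e.TaskName}  ({e.State})");
        }
    }
}
```

Reading the query as a predicate like this also makes it clear why the example matches failures and successes but not **Error** events.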
The **Operator** menu displays the following options depending on the identity y
### Create a query with a role session name
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
-2. From the menu, select **Role session name**.
+2. From the menu, select **Role Session Name**.
3. From the **Operator** menu, select the required option.
The **Operator** menu displays the following options depending on the identity y
### Create a query with an access key ID
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
2. From the menu, select **Access Key ID**.
The **Operator** menu displays the following options depending on the identity y
### Create a query with a tag key
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
2. From the menu, select **Tag Key**.
The **Operator** menu displays the following options depending on the identity y
### Create a query with a tag key value
-1. In the **New query** section, select **Add**.
+1. In the **New Query** section, select **Add**.
2. From the menu, select **Tag Key Value**.
The **Operator** menu displays the following options depending on the identity y
1. To sort each column by ascending or descending value, select the up or down arrows next to the column name.
- - **Identity details**: The name of the identity, for example the name of the role session performing the task.
+ - **Identity Details**: The name of the identity, for example the name of the role session performing the task.
- - To view the **Raw events summary**, which displays the full details of the event, next to the **Name** column, select **View**.
+ - To view the **Raw Events Summary**, which displays the full details of the event, next to the **Name** column, select **View**.
- - **Resource name**: The name of the resource on which the task is being performed.
+ - **Resource Name**: The name of the resource on which the task is being performed.
If the column displays **Multiple**, it means multiple resources are listed in the column. 1. To view a list of all resources, hover over **Multiple**.
- - **Resource type**: Displays the type of resource, for example, *Key* (encryption key) or *Bucket* (storage).
- - **Task name**: The name of the task that was performed by the identity.
+ - **Resource Type**: Displays the type of resource, for example, *Key* (encryption key) or *Bucket* (storage).
+ - **Task Name**: The name of the task that was performed by the identity.
An exclamation mark (**!**) next to the task name indicates that the task failed. - **Date**: The date when the task was performed.
- - **IP address**: The IP address from where the user performed the task.
+ - **IP Address**: The IP address from where the user performed the task.
- - **Authorization system**: The authorization system name in which the task was performed.
+ - **Authorization System**: The authorization system name in which the task was performed.
1. To download the results in comma-separated values (CSV) file format, select **Download**. ## Save a query
-1. After you complete your query selections from the **New query** section, select **Save**.
-
-2. In the **Query name** box, enter a name for your query, and then select **Save**.
+1. After you complete your query selections from the **New Query** section, select **Save**.
-3. To save a query with a different name, select the ellipses (**...**) next to **Save**, and then select **Save as**.
+2. In the **Query Name** box, enter a name for your query, and then select **Save**.
-4. Make your query selections from the **New query** section, select the ellipses (**...**), and then select **Save as**.
+3. To save a query with a different name, select the ellipses (**...**) next to **Save**, and then select **Save As**.
-5. To save a new query, in the **Save query** box, enter the name for the query, and then select **Save**.
+4. Make your query selections from the **New Query** section, select the ellipses (**...**), and then select **Save As**.
- The following message displays in green at the top of the screen to indicate the query was saved successfully: **Saved query as XXX**.
+5. To save a new query, in the **Save Query** box, enter the name for the query, and then select **Save**.
6. To save an existing query you've modified, select the ellipses (**...**). - To save a modified query under the same name, select **Save**.
- - To save a modified query under a different name, select **Save as**.
+ - To save a modified query under a different name, select **Save As**.
### View a saved query
-1. Select **Saved Queries**, and then select **Load queries**.
+1. Select **Saved Queries**, and then select a query from the **Load Queries** list.
A message box opens with the following options: **Load with the saved authorization system** or **Load with the currently selected authorization system**.
-1. Select the appropriate option, and then select **Load query**.
+1. Select the appropriate option, and then select **Load Queries**.
1. View the query information:
- - **Query**: Displays the name of the saved query.
- - **Query type**: Displays whether the query is a *System* query or a *Custom* query.
+ - **Query Name**: Displays the name of the saved query.
+ - **Query Type**: Displays whether the query is a *System* query or a *Custom* query.
- **Schedule**: Displays how often a report will be generated. You can schedule a one-time report or a monthly report.
- - **Next on**: Displays the date and time the next report will be generated.
+ - **Next On**: Displays the date and time the next report will be generated.
- **Format**: Displays the output format for the report, for example, CSV.
+ - **Last Modified On**: Displays the date on which the query was last modified.
-1. To view or set schedule details, select the gear icon, select **Create schedule**, and then set the details.
+1. To view or set schedule details, select the gear icon, select **Create Schedule**, and then set the details.
- If a schedule has already been created, select the gear icon to open the **Edit schedule** box.
+ If a schedule has already been created, select the gear icon to open the **Edit Schedule** box.
- - **Repeats**: Sets how often the report should repeat.
- - **Date**: Sets the date when you want to receive the report.
- - **hh:mm**: Sets the specific time when you want to receive the report.
- - **Report file format**: Select the output type for the file, for example, CSV.
- - **Share report with people**: The email address of the user who is creating the schedule is displayed in this field. You can add other email addresses.
+ - **Repeat**: Sets how often the report should repeat.
+ - **Start On**: Sets the date when you want to receive the report.
+ - **At**: Sets the specific time when you want to receive the report.
+ - **Report Format**: Select the output type for the file, for example, CSV.
+ - **Share Report With**: The email address of the user who is creating the schedule is displayed in this field. You can add other email addresses.
1. After selecting your options, select **Schedule**.
The **Operator** menu displays the following options depending on the identity y
- **Rename**: Enter the new name of the query and select **Save**. - **Delete**: Delete the saved query.
- The **Delete query** box opens, asking you to confirm that you want to delete the query. Select **Yes** or **No**.
+ The **Delete Query** box opens, asking you to confirm that you want to delete the query. Select **Yes** or **No**.
- **Duplicate**: Creates a duplicate of the query and names it *Copy of XXX*.
- - **Delete schedule**: Deletes the schedule details for this query.
+ - **Delete Schedule**: Deletes the schedule details for this query.
This option isn't available if you haven't yet saved a schedule.
- The **Delete schedule** box opens, asking you to confirm that you want to delete the schedule. Select **Yes** or **No**.
+ The **Delete Schedule** box opens, asking you to confirm that you want to delete the schedule. Select **Yes** or **No**.
## Export the results of a query as a report
The **Operator** menu displays the following options depending on the identity y
- For information on how to view how users access information, see [Use queries to see how users access information](cloudknox-ui-audit-trail.md). - For information on how to create a query, see [Create a custom query](cloudknox-howto-create-custom-queries.md).-- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permission-analytics.md
This article describes how you can create and view permission analytics triggers
## View permission analytics triggers 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Permission analytics**, and then select the **Alerts** subtab.
+1. Select **Permission Analytics**, and then select the **Alerts** subtab.
The **Alerts** subtab displays the following information:
- - **Alert name**: Lists the name of the alert.
+ - **Alert Name**: Lists the name of the alert.
- To view the name, ID, role, domain, authorization system, statistical condition, anomaly date, and observance period, select **Alert name**. - To expand the top information found with a graph of when the anomaly occurred, select **Details**.
- - **Anomaly alert rule**: Displays the name of the rule select when creating the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
- **# of Occurrences**: Displays how many times the alert trigger has occurred. - **Task**: Displays how many tasks are affected by the alert. - **Resources**: Displays how many resources are affected by the alert.
This article describes how you can create and view permission analytics triggers
## Create a permission analytics trigger 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Permission analytics**, select the **Alerts** subtab, and then select **Create Permission Analytics Trigger**.
-1. In the **Alert name** box, enter a name for the alert.
-1. Select the **Authorization system**.
+1. Select **Permission Analytics**, select the **Alerts** subtab, and then select **Create Permission Analytics Trigger**.
+1. In the **Alert Name** box, enter a name for the alert.
+1. Select the **Authorization System**.
1. Select **Identity performed high number of tasks**, and then select **Next**.
-1. On the **Authorization systems** tab, select the appropriate accounts and folders, or select **All**.
+1. On the **Authorization Systems** tab, select the appropriate accounts and folders, or select **All**.
This screen defaults to the **List** view, but you can switch to the **Folder** view and select the applicable folder instead of selecting systems individually.
This article describes how you can create and view permission analytics triggers
## View permission analytics alert triggers 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Permission analytics**, and then select the **Alert triggers** subtab.
+1. Select **Permission Analytics**, and then select the **Alert Triggers** subtab.
The **Alert triggers** subtab displays the following information: - **Alert**: Lists the name of the alert.
- - **Anomaly alert rule**: Displays the name of the rule select when creating the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
- **# of users subscribed**: Displays the number of users subscribed to the alert.
- - **Created by**: Displays the email address of the user who created the alert.
- - **Last modified by**: Displays the email address of the user who last modified the alert.
+ - **Created By**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
- **Last Modified On**: Displays the date and time the trigger was last modified. - **Subscription**: Toggle the button to **On** or **Off**.
- - **View trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
1. To view other options available to you, select the ellipses (**...**), and then make a selection from the available options:
active-directory Cloudknox Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permissions-analytics-reports.md
This article describes how to generate and download the **Permissions analytics
1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab. The **Systems Reports** subtab displays a list of reports in the **Reports** table.
-1. Find **Permissions analytics report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. Find **Permissions Analytics Report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
- The following message displays: **Successfully started to generate on-demand report.**
+ The following message displays: **Successfully Started To Generate On Demand Report.**
1. For detailed information in the report, select the right arrow next to one of the following categories. Or, select the required category under the **Findings** column.
This article describes how to generate and download the **Permissions analytics
1. Select a category and view the following columns of information (a sketch for working with these columns in a downloaded report follows the list):
- - **User**, **Role**, **Resource**, **Serverless function name**: Displays the name of the identity.
- - **Authorization system**: Displays the authorization system to which the identity belongs.
+ - **User**, **Role**, **Resource**, **Serverless Function Name**: Displays the name of the identity.
+ - **Authorization System**: Displays the authorization system to which the identity belongs.
- **Domain**: Displays the domain name to which the identity belongs. - **Permissions**: Displays the maximum number of permissions that the identity can be granted. - **Used**: Displays how many permissions that the identity has used. - **Granted**: Displays how many permissions that the identity has been granted. - **PCI**: Displays the permission creep index (PCI) score of the identity.
- - **Date last active on**: Displays the date that the identity was last active.
- - **Date created on**: Displays the date when the identity was created.
+ - **Date Last Active On**: Displays the date that the identity was last active.
+ - **Date Created On**: Displays the date when the identity was created.
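Once downloaded, the report is plain CSV, so the columns above can be post-processed with a few lines of code. The sketch below is an assumption-laden illustration, not part of CloudKnox: the file name, the exact header spellings, and the threshold are all placeholders you would adjust to match your actual export.

```csharp
using System;
using System.IO;
using System.Linq;

// Minimal sketch: scan a downloaded report for identities with a large gap
// between granted and used permissions. Uses naive comma splitting, which
// breaks on quoted fields containing commas; use a real CSV parser for
// production work.
class PermissionsReportSketch
{
    static void Main()
    {
        // Assumed file name; match it to your actual download.
        var lines = File.ReadAllLines("PermissionsAnalyticsReport.csv");
        var headers = lines[0].Split(',');

        int granted = Array.IndexOf(headers, "Granted");
        int used = Array.IndexOf(headers, "Used");
        int name = 0; // assume the identity name is the first column

        foreach (var row in lines.Skip(1).Select(l => l.Split(',')))
        {
            int unused = int.Parse(row[granted]) - int.Parse(row[used]);
            if (unused > 50) // arbitrary threshold for illustration
            {
                Console.WriteLine($"{row[name]}: {unused} granted-but-unused permissions");
            }
        }
    }
}
```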
<!## Add and remove tags in the Permissions analytics report 1. Select **Tags**.
-1. Select one of the categories from the **Permissions analytics report**.
+1. Select one of the categories from the **Permissions Analytics Report**.
1. Select the identity name to which you want to add a tag. To select all identities, select the checkbox at the top.
-1. Select **Add tag**.
-1. In the **tag** column:
- - To select from the available options from the list, select **Select a tag**.
+1. Select **Add Tag**.
+1. In the **Tag** column:
+ - To select from the available options in the list, select **Select a Tag**.
- To search for a tag, enter the tag name.
- - To create a new custom tag, select **New custom tag**.
+ - To create a new custom tag, select **New Custom Tag**.
- To create a new tag, enter a name for the tag and select **Create**. - To remove a tag, select **Delete**.
active-directory Cloudknox Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-reports.md
CloudKnox Permissions Management (CloudKnox) has various types of system report
## Explore the Reports dashboard
-The **Reports** dashboard provides a table of information with both system reports and custom reports. The **Reports** dashboard defaults to the **System reports** tab, which has the following details:
+The **Reports** dashboard provides a table of information with both system reports and custom reports. The **Reports** dashboard defaults to the **System Reports** tab, which has the following details:
- **Report Name**: The name of the report. - **Category**: The type of report. For example, **Permission**.-- **Authorization System**: Displays which authorizations the custom report applies to.
+- **Authorization Systems**: Displays which authorizations the custom report applies to.
- **Format**: Displays the output format the report can be generated in. For example, comma-separated values (CSV) format, portable document format (PDF), or Microsoft Excel Open XML Spreadsheet (XLSX) format. - To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
- The following message displays across the top of the screen in green if the download is successful: **Successfully started to generate on demand report**.
+ The following message displays across the top of the screen in green if the download is successful: **Successfully Started To Generate On Demand Report**.
## Available system reports CloudKnox offers the following reports for management associated with the authorization systems noted in parentheses: -- **Access key entitlements and usage**:
+- **Access Key Entitlements And Usage**:
- **Summary of report**: Provides information about access keys, for example, permissions, usage, and rotation date. - **Applies to**: Amazon Web Services (AWS) and Microsoft Azure - **Report output type**: CSV
CloudKnox offers the following reports for management associated with the author
- The access key age, last rotation date, and last usage date are available in the summary report to help with key rotation. - The granted tasks and permission creep index (PCI) score are included so you can take action on the keys. -- **User entitlements and usage**:
+- **User Entitlements And Usage**:
- **Summary of report**: Provides information about the identities' permissions, for example, entitlement, usage, and PCI. - **Applies to**: AWS, Azure, and Google Cloud Platform (GCP) - **Report output type**: CSV
CloudKnox offers the following reports for management associated with the author
- **Use cases**: - The data displayed on the **Usage Analytics** screen is downloaded as part of the **Summary** report. The user's detailed permissions usage is listed in the **Detailed** report. -- **Group entitlements and usage**:
+- **Group Entitlements And Usage**:
- **Summary of report**: Provides information about the group's permissions, for example, entitlement, usage, and PCI. - **Applies to**: AWS, Azure, and GCP - **Report output type**: CSV
CloudKnox offers the following reports for management associated with the author
- **Use cases**: - All group level entitlements and permission assignments, PCIs, and the number of members are listed as part of this report. -- **Identity permissions**:
+- **Identity Permissions**:
- **Summary of report**: Report on identities that have specific permissions, for example, identities that have permission to delete any S3 buckets. - **Applies to**: AWS, Azure, and GCP - **Report output type**: CSV
CloudKnox offers the following reports for management associated with the author
- The **Role summary** lists details similar to those in the **Group Summary**. - The **Delete Task summary** section lists the number of times the **Delete task** has been executed in the given time period. -- **Permissions analytics report**
+- **Permissions Analytics Report**
- **Summary of report**: Provides information about the violation of key security best practices. - **Applies to**: AWS, Azure, and GCP - **Report output type**: CSV
active-directory Cloudknox Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-rule-based-anomalies.md
Rule-based anomalies identify recent activity in CloudKnox Permissions Managemen
## View rule-based anomaly alerts 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Rule-based anomaly**, and then select the **Alerts** subtab.
+1. Select **Rule-Based Anomaly**, and then select the **Alerts** subtab.
The **Alerts** subtab displays the following information:
- - **Alert name**: Lists the name of the alert.
+ - **Alert Name**: Lists the name of the alert.
- To view the specific identity, resource, and task names that occurred during the alert collection period, select the **Alert Name**.
- - **Anomaly alert rule**: Displays the name of the rule select when creating the alert.
- - **# of occurrences**: How many times the alert trigger has occurred.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: How many times the alert trigger has occurred.
- **Task**: How many performed tasks triggered the alert. - **Resources**: How many accessed resources triggered the alert. - **Identity**: How many identities performing unusual behavior triggered the alert.
- - **Authorization system**: Displays which authorization systems the alert applies to, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+ - **Authorization System**: Displays which authorization systems the alert applies to, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
- **Date/Time**: Lists the date and time of the alert. - **Date/Time (UTC)**: Lists the date and time of the alert in Coordinated Universal Time (UTC).
Rule-based anomalies identify recent activity in CloudKnox Permissions Managemen
## Create a rule-based anomaly trigger 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Rule-based anomaly**, and then select the **Alerts** subtab.
+1. Select **Rule-Based Anomaly**, and then select the **Alerts** subtab.
1. Select **Create Anomaly Trigger**. 1. In the **Alert Name** box, enter a name for the alert.
-1. Select the **Authorization system**, **AWS**, **Azure**, or **GCP**.
+1. Select the **Authorization System**, **AWS**, **Azure**, or **GCP**.
1. Select one of the following conditions: - **Any Resource Accessed for the First Time**: The identity accesses a resource for the first time during the specified time interval. - **Identity Performs a Particular Task for the First Time**: The identity does a specific task for the first time during the specified time interval.
Rule-based anomalies identify recent activity in CloudKnox Permissions Managemen
## View a rule-based anomaly trigger 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Rule-based anomaly**, and then select the **Alert triggers** subtab.
+1. Select **Rule-Based Anomaly**, and then select the **Alert Triggers** subtab.
- The **Alert triggers** subtab displays the following information:
+ The **Alert Triggers** subtab displays the following information:
- **Alerts**: Displays the name of the alert. - **Anomaly Alert Rule**: Displays the name of the selected rule when creating the alert.
- - **# of users subscribed**: Displays the number of users subscribed to the alert.
- - **Created by**: Displays the email address of the user who created the alert.
+ - **# of Users Subscribed**: Displays the number of users subscribed to the alert.
+ - **Created By**: Displays the email address of the user who created the alert.
- **Last Modified By**: Displays the email address of the user who last modified the alert. - **Last Modified On**: Displays the date and time the trigger was last modified. - **Subscription**: Subscribes you to receive alert emails. Switches between **On** and **Off**.
Rule-based anomalies identify recent activity in CloudKnox Permissions Managemen
- **Rename**: Enter the new name of the query, and then select **Save**. - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users. - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
- - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
- **Delete**: Delete the alert. If the **Subscription** is **Off**, the following options are available: - **View**: View details of the alert trigger.
- - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
- **Duplicate**: Create a duplicate copy of the selected alert trigger. 1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
active-directory Cloudknox Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-statistical-anomalies.md
Statistical anomalies can detect outliers in an identity's behavior if recent ac
## View statistical anomalies in an identity's behavior 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Statistical anomaly**, and then select the **Alerts** subtab.
+1. Select **Statistical Anomaly**, and then select the **Alerts** subtab.
The **Alerts** subtab displays the following information:
Statistical anomalies can detect outliers in an identity's behavior if recent ac
## Create a statistical anomaly trigger 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Statistical anomaly**, select the **Alerts** subtab, and then select **Create alert trigger**.
+1. Select **Statistical Anomaly**, select the **Alerts** subtab, and then select **Create Alert Trigger**.
1. Enter a name for the alert in the **Alert Name** box.
-1. Select the **Authorization system**, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. Select the **Authorization System**, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
1. Select one of the following conditions: - **Identity Performed High Number of Tasks**: The identity performs higher than their usual volume of tasks. For example, an identity typically performs 25 tasks per day, and now it is performing 100 tasks per day.
Statistical anomalies can detect outliers in an identity's behavior if recent ac
- **Identity Performed Tasks with Multiple Unusual Patterns**: The identity performs tasks with several unusual patterns, as established by its baseline in the observance period. 1. Select **Next**.
-1. On the **Authorization systems** tab, select the appropriate systems, or, to select all systems, select **All**.
+1. On the **Authorization Systems** tab, select the appropriate systems, or, to select all systems, select **All**.
The screen defaults to the **List** view, but you can switch to the **Folder** view using the menu and then select the applicable folder instead of selecting systems individually.
Statistical anomalies can detect outliers in an identity's behavior if recent ac
- The **Controller** column displays if the controller is enabled or disabled.
-1. On the **Configuration** tab, to update the **Time Interval**, from the **Time range** dropdown, select **90 Days**, **60 Days**, or **30 Days**, and then select **Save**.
+1. On the **Configuration** tab, to update the **Time Interval**, from the **Time Range** dropdown, select **90 Days**, **60 Days**, or **30 Days**, and then select **Save**.
## View statistical anomaly triggers 1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
-1. Select **Statistical anomaly**, and then select the **Alert triggers** subtab.
+1. Select **Statistical Anomaly**, and then select the **Alert Triggers** subtab.
- The **Alert triggers** subtab displays the following information:
+ The **Alert Triggers** subtab displays the following information:
- **Alert**: Displays the name of the alert.
- - **Anomaly alert rule**: Displays the name of the rule select when creating the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
- **# of users subscribed**: Displays the number of users subscribed to the alert.
- - **Created by**: Displays the email address of the user who created the alert.
- - **Last modified by**: Displays the email address of the user who last modified the alert.
- - **Last modified on**: Displays the date and time the trigger was last modified.
+ - **Created By**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
- **Subscription**: Subscribes you to receive alert emails. Toggle the button to **On** or **Off**. 1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
Statistical anomalies can detect outliers in an identity's behavior if recent ac
- **Rename**: Enter the new name of the query, and then select **Save**. - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users. - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
- - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
- **Delete**: Delete the alert. If the **Subscription** is **Off**, the following options are available:
active-directory Cloudknox Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-create-custom-report.md
This article describes how to create, view, and share a custom report in CloudKn
## Create a custom report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
1. Select **New Custom Report**. 1. In the **Report Name** box, enter a name for your report. 1. From the **Report Based on** list: 1. To view which authorization systems the report applies to, hover over each report name. 1. To view a description of a report, select the report. 1. Select a report you want to use as the base for your custom report, and then select **Next**.
-1. In the **MyReport** box, select the **Authorization system** you want: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+1. In the **MyReport** box, select the **Authorization System** you want: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
1. To add specific accounts, select the **List** subtab, and then select **All** or the account names. 1. To add specific folders, select the **Folders** subtab, and then select **All** or the folder names.
The report name appears in the **Reports** table.
## View a custom report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
The **Custom Reports** tab displays the following information in the **Reports** table:
The report name appears in the **Reports** table.
## Share a custom report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
1. In the **Reports** table, select a report and then select the ellipses (**...**) icon.
-1. In the **Report settings** box, select **Share with**.
+1. In the **Report Settings** box, select **Share with**.
1. In the **Search Email to add** box, enter the names of other CloudKnox users. You can only share reports with other CloudKnox users.
The report name appears in the **Reports** table.
## Search for a custom report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
1. On the **Custom Reports** tab, select **Search**. 1. In the **Search** box, enter the name of the report you want.
The report name appears in the **Reports** table.
## Modify a saved or scheduled custom report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
1. Hover over the report name on the **Custom Reports** tab. - To rename the report, select **Edit** (the pencil icon), and enter a new name.
The report name appears in the **Reports** table.
- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](cloudknox-product-reports.md). - For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md). - For information about how to generate and view a system report, see [Generate and view a system report](cloudknox-report-view-system-report.md).-- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-view-system-report.md
This article describes how to generate and view a system report in CloudKnox Per
## Generate a system report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
The **Systems Reports** subtab displays the following options in the **Reports** table: - **Report Name**: The name of the report.
This article describes how to generate and view a system report in CloudKnox Per
Or, from the ellipses **(...)** menu, select **Download**.
- The following message displays: **Successfully started to generate on demand report.**
+ The following message displays: **Successfully Started To Generate On Demand Report.**
> [!NOTE] > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.
active-directory Cloudknox Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-troubleshoot.md
This section addresses troubleshooting issues with CloudKnox Permissions Management (
### The individual files are generated according to the authorization system (subscription/account/project). -- Select the **Collate** option in the **Custom report** screen in the CloudKnox **Reports** tab.
+- Select the **Collate** option in the **Custom Report** screen in the CloudKnox **Reports** tab.
## Data collection in AWS
active-directory Cloudknox Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-audit-trail.md
This article provides an overview of the components of the **Audit** dashboard.
1. The following options display at the top of the **Audit** dashboard: - A tab for each existing query. Select the tab to see details about the query.
- - **New query**: Select the tab to create a new query.
- - **New tab (+)**: Select the tab to add a **New query** tab.
- - **Saved queries**: Select to view a list of saved queries.
+ - **New Query**: Select the tab to create a new query.
+ - **New tab (+)**: Select the tab to add a **New Query** tab.
+ - **Saved Queries**: Select to view a list of saved queries.
-1. To return to the main page, select **Back to Audit**.
+1. To return to the main page, select **Back to Audit Trail**.
## Use a query to view information
This article provides an overview of the components of the **Audit** dashboard.
1. In CloudKnox, select the **Audit** tab. 1. The **New query** tab displays the following options:
- - **Authorization systems type**: A list of your authorization systems: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+ - **Authorization Systems Type**: A list of your authorization systems: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), Google Cloud Platform (**GCP**), or Platform (**Platform**).
- - **Authorization system**: A **List** of accounts and **Folders** in the authorization system.
+ - **Authorization System**: A **List** of accounts and **Folders** in the authorization system.
- To display a **List** of accounts and **Folders** in the authorization system, select the down arrow, and then select **Apply**.
-1. To add an **Audit condition**, select **Conditions** (the eye icon), select the conditions you want to add, and then select **Close**.
+1. To add an **Audit Trail Condition**, select **Conditions** (the eye icon), select the conditions you want to add, and then select **Close**.
1. To edit existing parameters, select **Edit** (the pencil icon).
This article provides an overview of the components of the **Audit** dashboard.
1. To save your query under a different name, select **Save As** (the ellipses **(...)** icon).
-1. To discard your work and start creating a query again, select **Reset query**.
+1. To discard your work and start creating a query again, select **Reset Query**.
1. To delete a query, select the **X** to the right of the query tab.
This article provides an overview of the components of the **Audit** dashboard.
- For information on how to filter and view user activity, see [Filter and query user activity](cloudknox-product-audit-trail.md). - For information on how to create a query, see [Create a custom query](cloudknox-howto-create-custom-queries.md).-- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-autopilot.md
The **Autopilot** dashboard in CloudKnox Permissions Management (CloudKnox) prov
1. In the CloudKnox home page, select the **Autopilot** tab. 1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select the authorization system types you want: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
-1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want.
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want.
1. Select **Apply**.
- The following information displays in the **Autopilot rules** table:
+ The following information displays in the **Autopilot Rules** table:
- **Rule Name**: The name of the rule. - **State**: The status of the rule: idle (not being used) or active (being used).
The **Autopilot** dashboard in CloudKnox Permissions Management (CloudKnox) prov
The following options are available:
- - **View rule**: Select to view details of the rule.
- - **Delete rule**: Select to delete the rule. Only the user who created the selected rule can delete the rule.
- - **Generate recommendations**: Creates recommendations for each user and the authorization system. Only the user who created the selected rule can create recommendations.
- - **View recommendations**: Displays the recommendations for each user and authorization system.
- - **Notification settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to be notified.
+ - **View Rule**: Select to view details of the rule.
+ - **Delete Rule**: Select to delete the rule. Only the user who created the selected rule can delete the rule.
+ - **Generate Recommendations**: Creates recommendations for each user and the authorization system. Only the user who created the selected rule can create recommendations.
+ - **View Recommendations**: Displays the recommendations for each user and authorization system.
+ - **Notification Settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to be notified.
You can also select:
You can also select:
- For information about creating rules, see [Create a rule](cloudknox-howto-create-rule.md). - For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](cloudknox-howto-recommendations-rule.md).-- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
active-directory Cloudknox Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-triggers.md
This article describes how to use the **Activity triggers** dashboard in CloudKn
The **Activity triggers** dashboard has four tabs: - **Activity**
- - **Rule-based anomaly**
- - **Statistical anomaly**
- - **Permission analytics**
+ - **Rule-Based Anomaly**
+ - **Statistical Anomaly**
+ - **Permission Analytics**
Each tab has two subtabs: - **Alerts**
- - **Alert triggers**
+ - **Alert Triggers**
## View information about alerts
-The **Alerts** subtab in the **Activity**, **Rule-based anomaly**, **Statistical anomaly**, and **Permission analytics** tabs display the following information:
+The **Alerts** subtab in the **Activity**, **Rule-Based Anomaly**, **Statistical Anomaly**, and **Permission Analytics** tabs displays the following information:
- **Alert Name**: Select **All** alert names or specific ones.-- **Date**: Select **Last 24 hours**, **Last 2 Days**, **Last week**, or **Custom range.**
+- **Date**: Select **Last 24 hours**, **Last 2 Days**, **Last Week**, or **Custom Range.**
- - If you select **Custom range**, also enter **From** and **To** duration settings.
+ - If you select **Custom Range**, also enter **From** and **To** duration settings.
- **Apply**: Select this option to activate your settings.-- **Reset filter**: Select this option to discard your settings.
+- **Reset Filter**: Select this option to discard your settings.
- **Reload**: Select this option to refresh the displayed information. - **Create Activity Trigger**: Select this option to [create a new alert trigger](cloudknox-howto-create-alert-trigger.md). - The **Alerts** table displays a list of alerts with the following information: - **Alerts**: The name of the alert. - **# of users subscribed**: The number of users who have subscribed to the alert.
- - **Created by**: The name of the user who created the alert.
+ - **Created By**: The name of the user who created the alert.
- **Modified By**: The name of the user who modified the alert.
-The **Rule-based anomaly** tab and the **Statistical anomaly** tab both have one more option:
+The **Rule-Based Anomaly** tab and the **Statistical Anomaly** tab both have one more option:
- **Columns**: Select the columns you want to display: **Task**, **Resource**, and **Identity**. - To return to the system default settings, select **Reset to default**. ## View information about alert triggers
-The **Alert triggers** subtab in the **Activity**, **Rule-based anomaly**, **Statistical anomaly**, and **Permission analytics** tab displays the following information:
+The **Alert Triggers** subtab in the **Activity**, **Rule-Based Anomaly**, **Statistical Anomaly**, and **Permission Analytics** tabs displays the following information:
- **Status**: Select the alert status you want to display: **All**, **Activated**, or **Deactivated**. - **Apply**: Select this option to activate your settings. -- **Reset filter**: Select this option to discard your settings.
+- **Reset Filter**: Select this option to discard your settings.
- **Reload**: Select **Reload** to refresh the displayed information. - **Create Activity Trigger**: Select this option to [create a new alert trigger](cloudknox-howto-create-alert-trigger.md). - The **Triggers** table displays a list of triggers with the following information: - **Alerts**: The name of the alert. - **# of users subscribed**: The number of users who have subscribed to the alert.
- - **Created by**: The name of the user who created the alert.
+ - **Created By**: The name of the user who created the alert.
- **Modified By**: The name of the user who modified the alert.
active-directory Cloudknox Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-user-management.md
This article describes how to use the CloudKnox Permissions Management (CloudKno
- In the upper right of the CloudKnox home page, select **User** (your initials), and then select **User management.**
- The **User management** dashboard has two tabs:
+ The **User Management** dashboard has two tabs:
- **Users**: Displays information about registered users. - **Groups**: Displays information about groups.
This article describes how to use the CloudKnox Permissions Management (CloudKno
Use the **Users** tab to display the following information about users: -- **User name** and **Email address**: The user's name and email address.-- **Joined on**: The date the user registered on the system.-- **Recent activity**: The date the user last used their permissions to access the system.-- The ellipses **(...)** menu: Select the ellipses, and then select **View Permissions** to open the **View user permission** box.
+- **Name** and **Email Address**: The user's name and email address.
+- **Joined On**: The date the user registered on the system.
+- **Recent Activity**: The date the user last used their permissions to access the system.
+- The ellipses **(...)** menu: Select the ellipses, and then select **View Permissions** to open the **View User Permission** box.
- To view details about the user's permissions, select one of the following options:
- - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
- - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
- **Custom** provides **View**, **Control**, and **Approve** permissions for the authorization system types you select. You can also select the following options:
You can also select the following options:
Use the **Groups** tab to display the following information about groups: -- **Group name**: Displays the registered user's name and email address.
+- **Name**: Displays the registered user's name and email address.
- **Permissions**:
- - The **Authorization systems** and the type of permissions the user has been granted: **Admin for all authorization system types**, **Admin for selected authorization system types**, or **Custom**.
+ - The **Authorization Systems** and the type of permissions the user has been granted: **Admin for all Authorization System Types**, **Admin for selected Authorization System Types**, or **Custom**.
- Information about the **Viewer**, **Controller**, **Approver**, and **Requestor**.-- **Modified by**: The email address of the user who modified the group.-- **Modified on**: The date the user last modified the group.
+- **Modified By**: The email address of the user who modified the group.
+- **Modified On**: The date the user last modified the group.
- The ellipses **(...)** menu: Select the ellipses to:
- - **View permissions**: Select this option to view details about the group's permissions, and then select one of the following options:
- - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
- - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **View Permissions**: Select this option to view details about the group's permissions, and then select one of the following options:
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
- **Custom** provides **View**, **Control**, and **Approve** permissions for specific authorization system types that you select.
- - **Edit permissions**: Select this option to modify the group's permissions.
+ - **Edit Permissions**: Select this option to modify the group's permissions.
- **Delete**: Select this option to delete the group's permissions.
- The **Delete permission** box asks you to confirm that you want to delete the group.
+ The **Delete Permission** box asks you to confirm that you want to delete the group.
- Select **Delete** if you want to delete the group, or **Cancel** to discard your changes.
You can also select the following options:
- **Reload**: Select this option to refresh the information displayed in the **User** table. - **Search**: Enter a name or email address to search for a specific user. - **Filters**: Select the authorization systems and accounts you want to display. -- **Create permission**: Create a group and set up its permissions. For more information, see [Create group-based permissions](cloudknox-howto-create-group-based-permissions.md)
+- **Create Permission**: Create a group and set up its permissions. For more information, see [Create group-based permissions](cloudknox-howto-create-group-based-permissions.md)
You can also select the following options:
- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](cloudknox-ui-tasks.md). - For information about how to view personal and organization information, see [View personal and organization information](cloudknox-product-account-settings.md).-- For information about how to select group-based permissions settings, see [Select group-based permissions settings](cloudknox-howto-create-group-based-permissions.md).
+- For information about how to select group-based permissions settings, see [Select group-based permissions settings](cloudknox-howto-create-group-based-permissions.md).
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Previously updated : 01/31/2021 Last updated : 03/01/2022
By default, the Azure Active Directory (Azure AD) Connect provisioning agent ins
- In step #7 above, instead of clicking **Open file**, go to Start > Run and navigate to the **AADConnectProvisioningAgentSetup.exe** file. In the run box, after the executable, enter **ENVIRONMENTNAME=AzureUSGovernment** and click **OK**. [![Screenshot showing US govt cloud install](media/how-to-install/new-install-12.png)](media/how-to-install/new-install-12.png#lightbox)</br>
+## Password hash synchronization and FIPS with cloud sync
+If your server has been locked down according to the Federal Information Processing Standard (FIPS), then MD5 is disabled.
+
+**To enable MD5 for password hash synchronization, perform the following steps:**
+
+1. Go to %programfiles%\Microsoft Azure AD Connect Provisioning Agent.
+2. Open AADConnectProvisioningAgent.exe.config.
+3. Go to the configuration/runtime node at the top of the file.
+4. Add the following node: `<enforceFIPSPolicy enabled="false"/>`
+5. Save your changes.
+
+For reference, this snippet is what it should look like:
+
+```
+ <configuration>
+ <runtime>
+ <enforceFIPSPolicy enabled="false"/>
+ </runtime>
+ </configuration>
+```
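
A minimal PowerShell sketch that automates steps 1 through 5 (assumptions: the default install path from step 1, an elevated session, and that the file already contains the configuration/runtime node described in step 3):

```
$path = "$env:ProgramFiles\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config"

# Load the agent's config file as XML (writing back requires an elevated session)
[xml]$config = Get-Content -Path $path

# Create <enforceFIPSPolicy enabled="false"/> and append it under configuration/runtime
$node = $config.CreateElement("enforceFIPSPolicy")
$node.SetAttribute("enabled", "false")
$config.SelectSingleNode("/configuration/runtime").AppendChild($node) | Out-Null

# Write the change back to disk
$config.Save($path)
```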
+
+For information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://blogs.technet.microsoft.com/enterprisemobility/2014/06/28/aad-password-sync-encryption-and-fips-compliance/).
+
## Next steps
active-directory Tutorial V2 Windows Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md
This section shows how to use the Microsoft Authentication Library to get a toke
using Microsoft.Graph;
using System.Diagnostics;
using System.Threading.Tasks;
+ using System.Net.Http.Headers;
```
1. Replace your `MainPage` class with the following code:
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 02/01/2022 Last updated : 03/01/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## February 2022
+
+### New articles
+
+- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](mobile-app-quickstart-portal-android.md)
+- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](mobile-app-quickstart-portal-ios.md)
+
+### Updated articles
+
+- [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)
+
## January 2022

### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Microsoft identity platform developer glossary](developer-glossary.md)
- [Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow](quickstart-v2-javascript-auth-code-angular.md)
- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
-
-## November 2021
-
-### Updated articles
-
-- [How to migrate a Node.js app from ADAL to MSAL](msal-node-migration.md)
-- [Migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md)
-- [Microsoft identity platform access tokens](access-tokens.md)
-- [Microsoft identity web authentication library](microsoft-identity-web.md)
-- [Protected web API: App registration](scenario-protected-web-api-app-registration.md)
-- [Providing your own HttpClient and proxy using MSAL.NET](msal-net-provide-httpclient.md)
-- [Single sign-on with MSAL.js](msal-js-sso.md)
-- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
-- [What's new for authentication?](reference-breaking-changes.md)
-
active-directory Enterprise State Roaming Windows Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-windows-settings-reference.md
Previously updated : 02/25/2022 Last updated : 03/01/2022
List of settings that can be configured to sync in recent Windows versions. Thes
| Date, Time, and Region: region format (locale) | sync |
| Language: language profile | sync |
| Language: list of keyboards | sync |
+| Mouse: Primary Mouse Button | sync |
+| Passwords: Web Credentials | sync |
+| Pen: Pen Handedness | sync |
+| Touchpad: Scrolling Direction | sync |
| Wi-Fi: Wi-Fi profiles (only WPA) | sync |

## Browser settings
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 02/15/2022 Last updated : 03/01/2022
The AADLoginForWindows extension must install successfully in order for the VM t
| Command to run | Expected output |
| --- | --- |
- | `curl -H @{"Metadata"="true"} "http://169.254.169.254/metadata/instance?api-version=2017-08-01"` | Correct information about the Azure VM |
- | `curl -H @{"Metadata"="true"} "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid Tenant ID associated with the Azure Subscription |
- | `curl -H @{"Metadata"="true"} "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"` | Valid access token issued by Azure Active Directory for the managed identity that is assigned to this VM |
+ | `curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"` | Correct information about the Azure VM |
+ | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid Tenant ID associated with the Azure Subscription |
+ | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"` | Valid access token issued by Azure Active Directory for the managed identity that is assigned to this VM |
> [!NOTE]
> The access token can be decoded using a tool like [calebb.net](http://calebb.net/). Verify the `oid` in the access token matches the managed identity assigned to the VM.
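
Alternatively, a minimal PowerShell sketch run on the VM itself can fetch a token from the Instance Metadata Service and print its `oid` claim (an illustration that assumes the standard three-segment JWT layout; it isn't an official verification tool):

```
$uri = "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"
$resp = Invoke-RestMethod -Headers @{ Metadata = "true" } -Uri $uri

# A JWT is three Base64Url segments separated by dots; the claims live in segment two
$payload = $resp.access_token.Split('.')[1].Replace('-', '+').Replace('_', '/')

# Restore the Base64 padding that Base64Url encoding strips
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }

$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
$claims.oid   # should match the managed identity assigned to the VM
```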
This exit code translates to `DSREG_E_MSI_TENANTID_UNAVAILABLE` because the exte
- RDP to the VM as a local administrator and verify the endpoint returns a valid Tenant ID by running this command from an elevated PowerShell window on the VM:
- - `curl -H @{"Metadata"="true"} http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
+ - `curl -H Metadata:true http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
1. The VM admin attempts to install the AADLoginForWindows extension, but a system-assigned managed identity hasn't been enabled on the VM first. Navigate to the Identity blade of the VM. From the System assigned tab, verify Status is toggled to On.
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension
1. Verify the required endpoints are accessible from the VM using PowerShell:
- - `curl https://login.microsoftonline.com// -D`
- - `curl https://login.microsoftonline.com/<TenantID>// -D`
- - `curl https://enterpriseregistration.windows.net// -D`
- - `curl https://device.login.microsoftonline.com// -D`
- - `curl https://pas.windows.net// -D`
+ - `curl https://login.microsoftonline.com/ -D -`
+ - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
+ - `curl https://enterpriseregistration.windows.net/ -D -`
+ - `curl https://device.login.microsoftonline.com/ -D -`
+ - `curl https://pas.windows.net/ -D -`
> [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.<br/>`enterpriseregistration.windows.net` and `pas.windows.net` should return 404 Not Found, which is expected behavior.
+ > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.<br/> An attempt to connect to `enterpriseregistration.windows.net` may return 404 Not Found, which is expected behavior.<br/> An attempt to connect to `pas.windows.net` may prompt for PIN credentials (you don't need to enter the PIN) or may return 404 Not Found. Either response is sufficient to verify that the URL is reachable.
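
To test all of these endpoints in one pass, here's a small PowerShell sketch (illustrative only; `Invoke-WebRequest` raises an error for 404 responses, so the catch block still reports those hosts as reachable):

```
$endpoints = @(
    "https://login.microsoftonline.com/",
    "https://enterpriseregistration.windows.net/",
    "https://device.login.microsoftonline.com/",
    "https://pas.windows.net/"
)

foreach ($url in $endpoints) {
    try {
        $r = Invoke-WebRequest -Uri $url -Method Head -UseBasicParsing
        "{0} -> {1}" -f $url, $r.StatusCode
    }
    catch {
        # A 404 lands here, but it still proves the host resolved and answered
        "{0} -> {1}" -f $url, $_.Exception.Message
    }
}
```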
1. If any of the commands fails with "Could not resolve host `<URL>`", try running this command to determine the DNS server that is being used by the VM.
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Previously updated : 02/11/2022 Last updated : 03/01/2022
Microsoft is making security defaults available to everyone. The goal is to ensu
### Who's it for?

-- If you're an organization that wants to increase your security posture but you don't know how or where to start, security defaults are for you.
-- If you're an organization using the free tier of Azure Active Directory licensing, security defaults are for you.
+- Organizations who want to increase their security posture, but don't know how or where to start.
+- Organizations using the free tier of Azure Active Directory licensing.
### Who should use Conditional Access?

-- If you're an organization currently using Conditional Access policies to bring signals together, to make decisions, and enforce organizational policies, security defaults are probably not right for you.
+- If you're an organization currently using Conditional Access policies, security defaults are probably not right for you.
- If you're an organization with Azure Active Directory Premium licenses, security defaults are probably not right for you.
- If your organization has complex security requirements, you should consider Conditional Access.
Using Azure Resource Manager to manage your services is a highly privileged acti
It's important to verify the identity of users who want to access Azure Resource Manager and update configurations. You verify their identity by requiring more authentication before you allow access.
-After you enable security defaults in your tenant, any user who's accessing the Azure portal, Azure PowerShell, or the Azure CLI will need to complete more authentication. This policy applies to all users who are accessing Azure Resource Manager, whether they're an administrator or a user.
+After you enable security defaults in your tenant, any user accessing the following services must complete multi-factor authentication:
+
+- Azure portal
+- Azure PowerShell
+- Azure CLI
+
+This policy applies to all users who are accessing Azure Resource Manager services, whether they're an administrator or a user.
> [!NOTE]
> Pre-2017 Exchange Online tenants have modern authentication disabled by default. In order to avoid the possibility of a login loop while authenticating through these tenants, you must [enable modern authentication](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online).
Emergency access accounts are:
- Aren't used on a daily basis
- Are protected with a long complex password
-The credentials for these emergency access accounts should be stored offline in a secure location such as a fireproof safe. Only authorized individuals should have access to these credentials.
-
-For more detailed information about emergency access accounts, see the article [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
+The credentials for these emergency access accounts should be stored offline in a secure location such as a fireproof safe. Only authorized individuals should have access to these credentials.
To create an emergency access account:
To create an emergency access account:
1. Under **Usage location**, select the appropriate location.
1. Select **Create**.
+You may choose to [disable password expiration](../authentication/concept-sspr-policy.md#set-a-password-to-never-expire) for these accounts using Azure AD PowerShell.
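
For example, a minimal sketch using the Azure AD PowerShell module (the UPN shown is a placeholder for your emergency access account):

```
# Sign in to your tenant first
Connect-AzureAD

# Flag the account's password to never expire (replace the placeholder UPN)
Set-AzureADUser -ObjectId "emergency@contoso.com" -PasswordPolicies "DisablePasswordExpiration"
```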
+
+For more detailed information about emergency access accounts, see the article [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
+
### Authentication methods

These free security defaults allow registration and use of Azure AD Multi-Factor Authentication **using only notifications through the Microsoft Authenticator app**. Conditional Access allows the use of any authentication method the administrator chooses to enable.
If your organization is a previous user of per-user based Azure AD Multi-Factor
### Conditional Access
-You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. If you have a license that provides Conditional Access but don't have any Conditional Access policies enabled in your environment, you're welcome to use security defaults until you enable Conditional Access policies. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
![Warning message that you can have security defaults or Conditional Access not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png)
-Here are step-by-step guides on how you can use Conditional Access to configure a set of policies, which form a good starting point for protecting your identities:
+Here are step-by-step guides on using Conditional Access to configure a set of policies that form a good starting point for protecting your identities:
- [Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
- [Require MFA for Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+## August 2021
+
+### New major version of AADConnect available
+
+**Type:** Fixed
+**Service category:** AD Connect
+**Product capability:** Identity Lifecycle Management
+
+We've released a new major version of Azure Active Directory Connect. This version contains several updates of foundational components to the latest versions and is recommended for all customers using Azure AD Connect. [Learn more](../hybrid/whatis-azure-ad-connect-v2.md).
+
++
+### Public Preview - Azure AD single Sign on and device-based Conditional Access support in Firefox on Windows 10
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** SSO
+
+
+We now offer native single sign-on (SSO) and device-based Conditional Access support in the Firefox browser on Windows 10 and Windows Server 2019. Support is available in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
+
++
+### Public preview - beta MS Graph APIs for Azure AD access reviews returns list of contacted reviewer names
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+We've released beta MS Graph API for Azure AD access reviews. The API has methods to return a list of contacted reviewer names in addition to the reviewer type. [Learn more](/graph/api/resources/accessreviewinstance).
+
++
+### General Availability - "Register or join devices" user action in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multi-factor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multi-factor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+++
+### General Availability - customers can scope reviews of privileged roles to eligible or permanent assignments
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Administrators can now create access reviews of only permanent or eligible assignments to privileged Azure AD or Azure resource roles. [Learn more](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md).
+
+
+
+### General availability - assign roles to Azure Active Directory (AD) groups
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+
+Assigning roles to Azure AD groups is now generally available. This feature can simplify the management of role assignments in Azure AD for Global Administrators and Privileged Role Administrators. [Learn more](../roles/groups-concept.md).
+
++
+### New Federated Apps available in Azure AD Application gallery - Aug 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In August 2021, we have added the following 46 new applications in our App gallery with Federation support:
+
+[Siriux Customer Dashboard](https://portal.siriux.tech/login), [STRUXI](https://struxi.app/), [Autodesk Construction Cloud - Meetings](https://acc.autodesk.com/), [Eccentex AppBase for Azure](../saas-apps/eccentex-appbase-for-azure-tutorial.md), [Bookado](https://adminportal.bookado.io/), [FilingRamp](https://app.filingramp.com/login), [BenQ IAM](../saas-apps/benq-iam-tutorial.md), [Rhombus Systems](../saas-apps/rhombus-systems-tutorial.md), [CorporateExperience](../saas-apps/corporateexperience-tutorial.md), [TutorOcean](../saas-apps/tutorocean-tutorial.md), [Bookado Device](https://adminportal.bookado.io/), [HiFives-AD-SSO](https://app.hifives.in/login/azure), [Darzin](https://au.darzin.com/), [Simply Stakeholders](https://au.simplystakeholders.com/), [KACTUS HCM - Smart People](https://kactusspc.digitalware.co/), [Five9 UC Adapter for Microsoft Teams V2](https://uc.five9.net/?vendor=msteams), [Automation Center](https://automationcenter.cognizantgoc.com/portal/boot/signon), [Cirrus Identity Bridge for Azure AD](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md), [ShiftWizard SAML](../saas-apps/shiftwizard-saml-tutorial.md), [Safesend Returns](https://www.safesendwebsites.com/), [Brushup](../saas-apps/brushup-tutorial.md), [directprint.io Cloud Print Administration](../saas-apps/directprint-io-cloud-print-administration-tutorial.md), [plain-x](https://app.plain-x.com/#/login),[X-point Cloud](../saas-apps/x-point-cloud-tutorial.md), [SmartHub INFER](../saas-apps/smarthub-infer-tutorial.md), [Fresh Relevance](../saas-apps/fresh-relevance-tutorial.md), [FluentPro G.A. Suite](https://gas.fluentpro.com/Account/SSOLogin?provider=Microsoft), [Clockwork Recruiting](../saas-apps/clockwork-recruiting-tutorial.md), [WalkMe SAML2.0](../saas-apps/walkme-saml-tutorial.md), [Sideways 6](https://app.sideways6.com/account/login?ReturnUrl=/), [Kronos Workforce Dimensions](../saas-apps/kronos-workforce-dimensions-tutorial.md), [SysTrack Cloud Edition](https://cloud.lakesidesoftware.com/Cloud/Account/Login), [mailworx Dynamics CRM Connector](https://www.mailworx.info/), [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../saas-apps/palo-alto-networks-cloud-identity-enginecloud-authentication-service-tutorial.md), [Peripass](https://accounts.peripass.app/v1/sso/challenge), [JobDiva](https://www.jobssos.com/index_azad.jsp?SSO=AZURE&ID=1), [Sanebox For Office365](https://sanebox.com/login), [Tulip](../saas-apps/tulip-tutorial.md), [HP Wolf Security](https://bec-pocda37b439.bromium-online.com/gui/), [Genesys Engage cloud Email](https://login.microsoftonline.com/common/oauth2/authorize?prompt=consent&accessType=offline&state=07e035a7-6fb0-4411-afd9-efa46c9602f9&resource=https://graph.microsoft.com/&response_type=code&redirect_uri=https://iwd.api01-westus2.dev.genazure.com/iwd/v3/emails/oauth2/microsoft/callback&client_id=36cd21ab-862f-47c8-abb6-79facad09dda), [Meta Wiki](https://meta.dunkel.eu/), [Palo Alto Networks Cloud Identity Engine Directory Sync](https://directory-sync.us.paloaltonetworks.com/directory?instance=L2qoLVONpBHgdJp1M5K9S08Z7NBXlpi54pW1y3DDu2gQqdwKbyUGA11EgeaDfZ1dGwn397S8eP7EwQW3uyE4XL), [Valarea](https://www.valarea.com/en/download), [LanSchool Air](../saas-apps/lanschool-air-tutorial.md), [Catalyst](https://www.catalyst.org/sso-login/), [Webcargo](../saas-apps/webcargo-tutorial.md)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
+
+For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### New provisioning connectors in the Azure AD Application Gallery - August 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Chatwork](../saas-apps/chatwork-provisioning-tutorial.md)
+- [Freshservice](../saas-apps/freshservice-provisioning-tutorial.md)
+- [InviteDesk](../saas-apps/invitedesk-provisioning-tutorial.md)
+- [Maptician](../saas-apps/maptician-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see Automate user provisioning to SaaS applications with Azure AD.
+
++
+### Multifactor fraud report – new audit event
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+
+To help administrators understand that their users are blocked for multi-factor authentication as a result of a fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud reports. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
+++
+### Improved Low-Risk Detections
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+To improve the quality of low risk alerts that Identity Protection issues, we've modified the algorithm to issue fewer low risk Risky Sign-Ins. Organizations may see a significant reduction in low-risk sign-ins in their environment. [Learn more](../identity-protection/concept-identity-protection-risks.md).
+
++
+### Non-interactive risky sign-ins
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+Identity Protection now emits risky sign-ins on non-interactive sign-ins. Admins can find these risky sign-ins using the **sign-in type** filter in the risky sign-ins report. [Learn more](../identity-protection/howto-identity-protection-investigate-risk.md).
+
++
+### Change from User Administrator to Identity Governance Administrator in Entitlement Management
+
+**Type:** Changed feature
+**Service category:** Roles
+**Product capability:** Identity Governance
+
+The permissions assignments to manage access packages and other resources in Entitlement Management are moving from the User Administrator role to the Identity Governance administrator role.
+
+Users that have been assigned the User administrator role can no longer create catalogs or manage access packages in a catalog they don't own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, they will need a new assignment. You should instead assign these users the Identity Governance administrator role. [Learn more](../governance/entitlement-management-delegate.md)
+++
+### Windows Azure Active Directory connector is deprecated
+
+**Type:** Deprecated
+**Service category:** Microsoft Identity Manager
+**Product capability:** Identity Lifecycle Management
+
+The Windows Azure AD Connector for FIM is at feature freeze and deprecated. The solution of using FIM and the Azure AD Connector has been replaced. Existing deployments should migrate to [Azure AD Connect](../hybrid/whatis-hybrid-identity.md), Azure AD Connect Sync, or the [Microsoft Graph Connector](/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph), as the internal interfaces used by the Azure AD Connector for FIM are being removed from Azure AD. [Learn more](/microsoft-identity-manager/microsoft-identity-manager-2016-deprecated-features).
+++
+### Retirement of older Azure AD Connect versions
+
+**Type:** Deprecated
+**Service category:** AD Connect
+**Product capability:** User Management
+
+Starting August 31, 2022, all V1 versions of Azure AD Connect will be retired. If you haven't already done so, you need to update your server to Azure AD Connect V2.0. You need to make sure you're running a recent version of Azure AD Connect to receive an optimal support experience.
+
+If you run a retired version of Azure AD Connect, it may unexpectedly stop working. You may also not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. Also, if you require support, we can't provide you with the level of service your organization needs.
+
+See [Azure Active Directory Connect V2.0](../hybrid/whatis-azure-ad-connect-v2.md) to learn what has changed in V2.0 and how this change impacts you.
+++
+### Retirement of support for installing MIM on Windows Server 2008 R2 or SQL Server 2008 R2
+
+**Type:** Deprecated
+**Service category:** Microsoft Identity Manager
+**Product capability:** Identity Lifecycle Management
+
+Deploying MIM Sync, Service, Portal or CM on Windows Server 2008 R2, or using SQL Server 2008 R2 as the underlying database, is deprecated as these platforms are no longer in mainstream support. Installing MIM Sync and other components on Windows Server 2016 or later, and with SQL Server 2016 or later, is recommended.
+
+Deploying MIM for Privileged Access Management with a Windows Server 2012 R2 domain controller in the PRIV forest is deprecated. Use Windows Server 2016 or later Active Directory, with Windows Server 2016 functional level, for your PRIV forest domain. The Windows Server 2012 R2 functional level is still permitted for a CORP forest's domain. [Learn more](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms).
+++
## July 2021

### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
-[1776632](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1776632&triage=true&fullScreen=false&_a=edit)
### General Availability - France digital accessibility requirement
This change provides users who are signing into Azure Active Directory on iOS, A
-[1424495](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1424495&triage=true&fullScreen=false&_a=edit)
### General Availability - Downloadable access review history report
With Azure Active Directory (Azure AD) Access Reviews, you can create a download
-[1309010](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1309010&triage=true&fullScreen=false&_a=edit)
### Public Preview of Identity Protection for Workload Identities
Azure AD Identity Protection is extending its core capabilities of detecting, in
-[1213729](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1213729&triage=true&fullScreen=false&_a=edit)
### Public Preview - Cross-tenant access settings for B2B collaboration
Cross-tenant access settings enable you to control how users in your organizatio
-[1424498](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1424498&triage=true&fullScreen=false&_a=edit)
### Public preview - Create Azure AD access reviews with multiple stages of reviewers
Use multi-stage reviews to create Azure AD access reviews in sequential stages,
-[1775818](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1775818&triage=true&fullScreen=false&_a=edit)
-
### New Federated Apps available in Azure AD Application gallery - February 2022

**Type:** New feature
For listing your application in the Azure AD app gallery, please read the detail
-[1242804](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1242804&triage=true&fullScreen=false&_a=edit)
-
### Two new MDA detections in Identity Protection

**Type:** New feature
Identity Protection has added two new detections from Microsoft Defender for Clo
-[1780796](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1780796&triage=true&fullScreen=false&_a=edit)
-
### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022

**Type:** New feature
Identity Protection has added two new detections from Microsoft Defender for Clo
You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-[BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
-[GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
-[Gong](../saas-apps/gong-provisioning-tutorial.md)
-[LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
-[ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
+- [BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
+- [GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
+- [Gong](../saas-apps/gong-provisioning-tutorial.md)
+- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
+- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
-[1686037](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1686037&triage=true&fullScreen=false&_a=edit)
-
### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements

**Type:** Changed feature
The text and design on the Conditional Access blocking screen shown to users whe
-## August 2021
-
-### New major version of AADConnect available
-
-**Type:** Fixed
-**Service category:** AD Connect
-**Product capability:** Identity Lifecycle Management
-
-We've released a new major version of Azure Active Directory Connect. This version contains several updates of foundational components to the latest versions and is recommended for all customers using Azure AD Connect. [Learn more](../hybrid/whatis-azure-ad-connect-v2.md).
-
--
-### Public Preview - Azure AD single Sign on and device-based Conditional Access support in Firefox on Windows 10
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** SSO
-
-
-We now support native single sign-on (SSO) support and device-based Conditional Access to the Firefox browser on Windows 10 and Windows Server 2019. Support is available in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
-
--
-### Public preview - beta MS Graph APIs for Azure AD access reviews returns list of contacted reviewer names
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-
-We've released beta MS Graph API for Azure AD access reviews. The API has methods to return a list of contacted reviewer names in addition to the reviewer type. [Learn more](/graph/api/resources/accessreviewinstance).
-
--
-### General Availability - "Register or join devices" user action in Conditional Access
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-
-The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multi-factor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multi-factor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
---
-### General Availability - customers can scope reviews of privileged roles to eligible or permanent assignments
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Administrators can now create access reviews of only permanent or eligible assignments to privileged Azure AD or Azure resource roles. [Learn more](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md).
-
-
-
-### General availability - assign roles to Azure Active Directory (AD) groups
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-
-Assigning roles to Azure AD groups is now generally available. This feature can simplify the management of role assignments in Azure AD for Global Administrators and Privileged Role Administrators. [Learn more](../roles/groups-concept.md).
-
--
-### New Federated Apps available in Azure AD Application gallery - Aug 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In August 2021, we have added following 46 new applications in our App gallery with Federation support:
-
-[Siriux Customer Dashboard](https://portal.siriux.tech/login), [STRUXI](https://struxi.app/), [Autodesk Construction Cloud - Meetings](https://acc.autodesk.com/), [Eccentex AppBase for Azure](../saas-apps/eccentex-appbase-for-azure-tutorial.md), [Bookado](https://adminportal.bookado.io/), [FilingRamp](https://app.filingramp.com/login), [BenQ IAM](../saas-apps/benq-iam-tutorial.md), [Rhombus Systems](../saas-apps/rhombus-systems-tutorial.md), [CorporateExperience](../saas-apps/corporateexperience-tutorial.md), [TutorOcean](../saas-apps/tutorocean-tutorial.md), [Bookado Device](https://adminportal.bookado.io/), [HiFives-AD-SSO](https://app.hifives.in/login/azure), [Darzin](https://au.darzin.com/), [Simply Stakeholders](https://au.simplystakeholders.com/), [KACTUS HCM - Smart People](https://kactusspc.digitalware.co/), [Five9 UC Adapter for Microsoft Teams V2](https://uc.five9.net/?vendor=msteams), [Automation Center](https://automationcenter.cognizantgoc.com/portal/boot/signon), [Cirrus Identity Bridge for Azure AD](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md), [ShiftWizard SAML](../saas-apps/shiftwizard-saml-tutorial.md), [Safesend Returns](https://www.safesendwebsites.com/), [Brushup](../saas-apps/brushup-tutorial.md), [directprint.io Cloud Print Administration](../saas-apps/directprint-io-cloud-print-administration-tutorial.md), [plain-x](https://app.plain-x.com/#/login),[X-point Cloud](../saas-apps/x-point-cloud-tutorial.md), [SmartHub INFER](../saas-apps/smarthub-infer-tutorial.md), [Fresh Relevance](../saas-apps/fresh-relevance-tutorial.md), [FluentPro G.A. Suite](https://gas.fluentpro.com/Account/SSOLogin?provider=Microsoft), [Clockwork Recruiting](../saas-apps/clockwork-recruiting-tutorial.md), [WalkMe SAML2.0](../saas-apps/walkme-saml-tutorial.md), [Sideways 6](https://app.sideways6.com/account/login?ReturnUrl=/), [Kronos Workforce Dimensions](../saas-apps/kronos-workforce-dimensions-tutorial.md), [SysTrack Cloud Edition](https://cloud.lakesidesoftware.com/Cloud/Account/Login), [mailworx Dynamics CRM Connector](https://www.mailworx.info/), [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../saas-apps/palo-alto-networks-cloud-identity-enginecloud-authentication-service-tutorial.md), [Peripass](https://accounts.peripass.app/v1/sso/challenge), [JobDiva](https://www.jobssos.com/index_azad.jsp?SSO=AZURE&ID=1), [Sanebox For Office365](https://sanebox.com/login), [Tulip](../saas-apps/tulip-tutorial.md), [HP Wolf Security](https://bec-pocda37b439.bromium-online.com/gui/), [Genesys Engage cloud Email](https://login.microsoftonline.com/common/oauth2/authorize?prompt=consent&accessType=offline&state=07e035a7-6fb0-4411-afd9-efa46c9602f9&resource=https://graph.microsoft.com/&response_type=code&redirect_uri=https://iwd.api01-westus2.dev.genazure.com/iwd/v3/emails/oauth2/microsoft/callback&client_id=36cd21ab-862f-47c8-abb6-79facad09dda), [Meta Wiki](https://meta.dunkel.eu/), [Palo Alto Networks Cloud Identity Engine Directory Sync](https://directory-sync.us.paloaltonetworks.com/directory?instance=L2qoLVONpBHgdJp1M5K9S08Z7NBXlpi54pW1y3DDu2gQqdwKbyUGA11EgeaDfZ1dGwn397S8eP7EwQW3uyE4XL), [Valarea](https://www.valarea.com/en/download), [LanSchool Air](../saas-apps/lanschool-air-tutorial.md), [Catalyst](https://www.catalyst.org/sso-login/), [Webcargo](../saas-apps/webcargo-tutorial.md)
-
-You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
-
-For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
---
-### New provisioning connectors in the Azure AD Application Gallery - August 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Chatwork](../saas-apps/chatwork-provisioning-tutorial.md)
-- [Freshservice](../saas-apps/freshservice-provisioning-tutorial.md)
-- [InviteDesk](../saas-apps/invitedesk-provisioning-tutorial.md)
-- [Maptician](../saas-apps/maptician-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see Automate user provisioning to SaaS applications with Azure AD.
-
--
-### Multifactor fraud report – new audit event
-
-**Type:** Changed feature
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-
-To help administrators understand that their users are blocked for multi-factor authentication as a result of fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud report. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
---
-### Improved Low-Risk Detections
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-To improve the quality of low risk alerts that Identity Protection issues, we've modified the algorithm to issue fewer low risk Risky Sign-Ins. Organizations may see a significant reduction in low risk sign-in in their environment. [Learn more](../identity-protection/concept-identity-protection-risks.md).
-
--
-### Non-interactive risky sign-ins
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-Identity Protection now emits risky sign-ins on non-interactive sign-ins. Admins can find these risky sign-ins using the **sign-in type** filter in the risky sign-ins report. [Learn more](../identity-protection/howto-identity-protection-investigate-risk.md).
-
--
-### Change from User Administrator to Identity Governance Administrator in Entitlement Management
-
-**Type:** Changed feature
-**Service category:** Roles
-**Product capability:** Identity Governance
-
-The permissions assignments to manage access packages and other resources in Entitlement Management are moving from the User Administrator role to the Identity Governance administrator role.
-
-Users that have been assigned the User administrator role can no longer create catalogs or manage access packages in a catalog they don't own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, they will need a new assignment. You should instead assign these users the Identity Governance administrator role. [Learn more](../governance/entitlement-management-delegate.md)
---
-### Windows Azure Active Directory connector is deprecated
-
-**Type:** Deprecated
-**Service category:** Microsoft Identity Manager
-**Product capability:** Identity Lifecycle Management
-
-The Windows Azure AD Connector for FIM is at feature freeze and deprecated. The solution of using FIM and the Azure AD Connector has been replaced. Existing deployments should migrate to [Azure AD Connect](../hybrid/whatis-hybrid-identity.md), Azure AD Connect Sync, or the [Microsoft Graph Connector](/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph), as the internal interfaces used by the Azure AD Connector for FIM are being removed from Azure AD. [Learn more](/microsoft-identity-manager/microsoft-identity-manager-2016-deprecated-features).
---
-### Retirement of older Azure AD Connect versions
-
-**Type:** Deprecated
-**Service category:** AD Connect
-**Product capability:** User Management
-
-Starting August 31 2022, all V1 versions of Azure AD Connect will be retired. If you haven't already done so, you need to update your server to Azure AD Connect V2.0. You need to make sure you're running a recent version of Azure AD Connect to receive an optimal support experience.
-
-If you run a retired version of Azure AD Connect it may unexpectedly stop working. You may also not have the latest security fixes, performance improvements, troubleshooting, and diagnostic tools and service enhancements. Also, if you require support we can't provide you with the level of service your organization needs.
-
-See [Azure Active Directory Connect V2.0](../hybrid/whatis-azure-ad-connect-v2.md), what has changed in V2.0 and how this change impacts you.
---
-### Retirement of support for installing MIM on Windows Server 2008 R2 or SQL Server 2008 R2
-
-**Type:** Deprecated
-**Service category:** Microsoft Identity Manager
-**Product capability:** Identity Lifecycle Management
-
-Deploying MIM Sync, Service, Portal or CM on Windows Server 2008 R2, or using SQL Server 2008 R2 as the underlying database, is deprecated as these platforms are no longer in mainstream support. Installing MIM Sync and other components on Windows Server 2016 or later, and with SQL Server 2016 or later, is recommended.
-
-Deploying MIM for Privileged Access Management with a Windows Server 2012 R2 domain controller in the PRIV forest is deprecated. Use Windows Server 2016 or later Active Directory, with Windows Server 2016 functional level, for your PRIV forest domain. The Windows Server 2012 R2 functional level is still permitted for a CORP forest's domain. [Learn more](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms).
--
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
1. Use the **At end of review, send notification to** option to send notifications to other users or groups with completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, choose **Select User(s) or Group(s)** and add another user or group for which you want to receive the status of completion.
-1. In the **Enable review decision helpers** section, choose whether you want your reviewer to receive recommendations during the review process. When enabled, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial.
+1. In the **Enable review decision helpers** section, choose whether you want your reviewer to receive recommendations during the review process. When enabled, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial. This 30-day interval applies regardless of whether the sign-ins were interactive. The user's last sign-in date is also displayed along with the recommendation.
> [!NOTE]
- > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
+ > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
![Screenshot that shows the Enable reviewer decision helpers option.](./media/create-access-review/helpers.png)
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
These risks can be calculated in real-time or calculated offline using Microsoft
| Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. |
| Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. |
| Azure AD threat intelligence | Offline | This risk detection type indicates sign-in activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
+| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection profiles your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information. |
+
### Other risk detections
Microsoft finds leaked credentials in various places, including:
- Law enforcement agencies.
- Other groups at Microsoft doing dark web research.
-#### Why are not I seeing any leaked credentials?
+#### Why am I not seeing any leaked credentials?
Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs is not done.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-This scenario looks at the classic legacy application using HTTP authorization headers to control access to protected content.
+This scenario looks at the classic legacy application using **HTTP authorization headers** to manage access to protected content.
Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
-The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance of a published application and Azure AD as the SAML IdP.
1. Sign-in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
2. From the left navigation pane, select the **Azure Active Directory** service
The Easy Button client must also be registered in Azure AD, before it is allowed
4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
5. Specify who can use the application > **Accounts in this organizational directory only**
6. Select **Register** to complete the initial app registration
-7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
* Application.Read.All
* Application.ReadWrite.All
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Think of the **Azure Service Account Details** section as representing the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
Some of these are global settings, so they can be re-used for publishing more applications, further reducing deployment time and effort.
Some of these are global settings so can be re-used for publishing more applicat
### Service Provider
-The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is usually the FQDN that will be used for the applications external URL
+1. Enter **Host**. This is the public FQDN of the application being secured
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
The optional **Security Settings** specify whether Azure AD should encrypt issue
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
-
-The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP, but weΓÇÖll use the generic SHA template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**.
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default while testing.
![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)

>[!NOTE]
-> APM session variables defined within curly brackets are CASE sensitive. If you enter EmployeeID when the Azure AD attribute name is being defined as employeeid, it will cause an attribute mapping failure.
+>APM session variables defined within curly brackets are CASE sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
### Session Management
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-
-What isn't covered there however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the client are terminated after a user has logged out.
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-When the Easy Button wizard deploys a SAML application to Azure AD, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the MyApps portal also terminate the session between the BIG-IP and a client.
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
-During deployment, the SAML applications federation metadata is also imported, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs also terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs-out.
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
-Consider a scenario where the BIG-IP web portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your applications sign out button, so that it can redirect your client to the Azure AD SAML sign-out endpoint. The SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But if the BIG-IP webtop portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it redirects your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the apps sign-out call, and upon detecting the request have it trigger SLO. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and, upon detecting the request, trigger SLO; a minimal iRule sketch follows. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
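As a rough sketch of that listener approach, and assuming a placeholder sign-out path of `/app/logout` plus a placeholder tenant ID, an iRule could tear down the APM session the moment the application's own sign-out request is seen, then optionally send the browser on to the Azure AD sign-out endpoint:

```tcl
# Sketch only: "/app/logout" and <tenant-id> are placeholders, not values from this guide
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/app/logout" } {
        # End the APM session so SSO can't silently re-establish access
        ACCESS::session remove
        # Optionally also end the Azure AD session via the tenant's SAML sign-out endpoint
        HTTP::respond 302 Location "https://login.microsoftonline.com/<tenant-id>/saml2"
    }
}
```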
## Summary
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-For this scenario, we have an application using **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**, to gate access to protected content.
+This scenario looks at the classic legacy application using **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**, to gate access to protected content.
Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
-The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for a published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
The Easy Button client must also be registered in Azure AD, before it is allowed
6. Select **Register** to complete the initial app registration
-7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
* Application.Read.All
* Application.ReadWrite.All
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
Before you select **Next**, confirm the BIG-IP can successfully connect to your
### Service Provider
-The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is usually the FQDN that will be used for the applications external URL
+1. Enter **Host**. This is the public FQDN of the application being secured
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
The optional **Security Settings** specify whether Azure AD should encrypt issue
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
-
-The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP, but you can use the generic SHA template by selecting **F5 BIG-IP APM Azure AD Integration > Add.**
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**.
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-kerberos-easy-button/azure-config-add-app.png)
The Easy Button wizard provides a set of pre-defined application templates for O
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. **Add** a user or group that you can use later for testing, otherwise all access will be denied![Graphical user interface, text, application, email
+7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-kerberos-easy-button/azure-configuration-add-user-groups.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default while testing
![Screenshot for Virtual server](./media/f5-big-ip-kerberos-easy-button/virtual-server.png)
Enable **Kerberos** and **Show Advanced Setting** to enter the following:
![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
### Session Management
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. For more details, consult [F5 documentation](https://support.f5.com/csp/article/K18390492).
-
-However, this documentation does not cover the Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the client are terminated after a user has logged out.
-
-When the Easy Button wizard deploys a SAML application to Azure AD, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the MyApps portal also terminates the session between the BIG-IP and a client.
-
-During deployment, the SAML applications federation metadata is also imported, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs also terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs-out.
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
-Consider a scenario where the BIG-IP web portal isn't used, and the user has no way of instructing the APM to sign out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO.
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
-For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your applications sign out button, so that it can redirect your client to the Azure AD SAML sign-out endpoint. The SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints.**
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But if the BIG-IP webtop portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it redirects your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the apps sign-out call, and upon detecting the request have it trigger SLO. For more information on using BIG-IP iRules to achieve this scenario, refer to F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and, upon detecting the request, trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
For increased security, organizations using this pattern could also consider blo
### Azure AD B2B guest access
-SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Azure AD B2B guest access is also possible by having guest identities flowed down from your Azure AD tenant to the directory that your application. It is necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the backend application.
+[Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md) is supported for this scenario by having guest identities flowed down from your Azure AD tenant to the directory the application uses for authorization. Without a local representation of the guest object in AD, the BIG-IP would fail to receive a Kerberos ticket for KCD SSO to the backend application.
## Advanced deployment
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-This scenario looks at the classic legacy application using HTTP authorization headers to control access to protected content.
+This scenario looks at the classic legacy application using **HTTP authorization headers** sourced from LDAP directory attributes to manage access to protected content.
-Being legacy, the application lacks any form of modern protocols to support a direct integration with Azure AD. Modernizing the app is also costly, requires careful planning, and introduces risk of potential downtime.
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
-One option would be to consider [Azure AD Application Proxy](../app-proxy/application-proxy.md), to gate remote access to the application.
-
-Another approach is to use an F5 BIG-IP Application Delivery Controller (ADC), as it too provides the protocol transitioning required to bridge legacy applications to the modern ID control plane.
-
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for both remote and local access.
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
## Scenario architecture
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
-The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for a published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrative rights
The Easy Button client must also be registered in Azure AD, before it is allowed
6. Select **Register** to complete the initial app registration
-7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
* Application.Read.All
* Application.ReadWrite.All
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
### Service Provider
-The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through secure hybrid access.
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is usually the FQDN that will be used for the applications external URL
+1. Enter **Host**. This is the public FQDN of the application being secured
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
The optional **Security Settings** specify whether Azure AD should encrypt issue
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
-
-The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP, but we'll use the generic secure hybrid access template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**.
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
The Easy Button wizard provides a set of pre-defined application templates for O
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default while testing
![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
Enabling SSO allows users to access BIG-IP published services without having to
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)
>[!NOTE]
->APM session variables defined within curly brackets are CASE sensitive. If you enter EventRoles when the Azure AD attribute name is being defined as eventroles, it will cause an attribute mapping failure.
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
### Session Management
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-
-What isn't covered there however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the client are terminated after a user has logged out.
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-When the Easy Button wizard deploys a SAML application to Azure AD, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the MyApps portal also terminate the session between the BIG-IP and a client.
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
-During deployment, the SAML applications federation metadata is also imported, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs also terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs-out.
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
-Consider a scenario where the BIG-IP web portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your applications sign out button, so that it can redirect your client to the Azure AD SAML sign-out endpoint. The SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But if the BIG-IP webtop portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it redirects your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the apps sign-out call, and upon detecting the request have it trigger SLO. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and, upon detecting the request, trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
-A BIG-IP must also be registered as a client in Azure AD, before it is allowed to establish a trust in between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for a published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrative rights
A BIG-IP must also be registered as a client in Azure AD, before it is allowed t
6. Select **Register** to complete the initial app registration
-7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
* Application.Read.All
* Application.ReadWrite.All
A BIG-IP must also be registered as a client in Azure AD, before it is allowed t
## Configure Easy Button
-Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates up a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
### Service Provider
-The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is usually the FQDN that will be used for the applications external URL
+1. Enter **Host**. This is the public FQDN of the application being secured
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
The **Service Provider** settings define the SAML SP properties for the APM inst
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. In this example, select **Oracle E-Business Suite > Add**. This adds the template for the Oracle E-business Suite
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **Oracle E-Business Suite > Add**.
![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-oracle/azure-configuration-add-big-ip-application.png)
This section defines all properties that you would normally use to manually conf
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-6. **User and User Groups** are used to authorize access to the application. They are dynamically added from the tenant. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default while testing
![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorizati
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle/sso-and-http-headers.png)
>[!NOTE]
->APM session variables defined within curly brackets are CASE sensitive. If you enter OrclGUID when the Azure AD attribute name is being defined as orclguid, it will cause an attribute mapping failure.
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
### Session Management
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-What isn't covered however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
-During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign outs terminate the session between a client and Azure AD.
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
+
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But if the BIG-IP webtop portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it redirects your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and, upon detecting the request, trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
-A BIG-IP must also be registered as a client in Azure AD, before it is allowed to establish a trust in between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for a published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrative rights
A BIG-IP must also be registered as a client in Azure AD, before it is allowed t
6. Select **Register** to complete the initial app registration
-7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
* Application.Read.All
* Application.ReadWrite.All
A BIG-IP must also be registered as a client in Azure AD, before it is allowed t
## Configure Easy Button
-Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** template.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
### Service Provider
-The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
-1. Enter **Host**. This is usually the FQDN that will be used for the applications external URL
+1. Enter **Host**. This is the public FQDN of the application being secured
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
The **Service Provider** settings define the SAML SP properties for the APM inst
### Azure Active Directory
-This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP as well as generic SHA template for any other apps. In this example, select **JD Edwards Protected by F5 BIG-IP > Add**. This adds the template for the Oracle JD Edwards.
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **JD Edwards Protected by F5 BIG-IP > Add**.
![ Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-big-ip-application.png)
This section defines all properties that you would normally use to manually conf
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
-6. **User and User Groups** are used to authorize access to the application. They are dynamically added from the tenant. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
-4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default while testing
![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorizati
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-oracle-jde/sso-and-http-headers.png)
>[!NOTE]
->APM session variables defined within curly brackets are CASE sensitive. If you enter OrclGUID when the Azure AD attribute name is being defined as orclguid, it will cause an attribute mapping failure.
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
### Session Management
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
-The BIG-IPs session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
-What isn't covered however is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way IdP initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
-During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM the SAML logout endpoint for Azure AD. This helps SP initiated sign outs terminate the session between a client and Azure AD.
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But if the BIG-IP webtop portal isn't used, the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it redirects your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and, upon detecting the request, trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](./f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
active-directory F5 Big Ip Oracle Peoplesoft Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle PeopleSoft
+description: Learn to implement SHA with header-based SSO to Oracle PeopleSoft using F5 BIG-IP Easy Button guided configuration.
+Last updated: 02/26/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft
+
+In this article, learn to secure Oracle PeopleSoft (PeopleSoft) using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure AD provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+For this scenario, we have a **PeopleSoft application using HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+> [!NOTE]
+> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](/azure/active-directory/app-proxy/application-proxy).
+
+## Scenario architecture
+
+The secure hybrid access solution for this scenario is made up of several components:
+
+**PeopleSoft Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the PeopleSoft service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-oracle-peoplesoft/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
+| 6| Application authorizes request and returns payload |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing PeopleSoft environment
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
+
+With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application, and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with an account that has Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
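+
+If you'd rather script the registration, the following Azure CLI sketch covers roughly the same steps. It's illustrative only: the display name is an example, and each `<permission-id>` placeholder must be replaced with the Microsoft Graph role ID for one of the permissions listed above.
+
+```azurecli-interactive
+# Illustrative sketch of the registration above (names and IDs are examples).
+az ad app create --display-name "F5 BIG-IP Easy Button"
+
+# Generate a client secret for the returned appId and note it down.
+az ad app credential reset --id <app-id> --append
+
+# Add one Microsoft Graph application permission; repeat per permission.
+# 00000003-0000-0000-c000-000000000000 is the well-known Microsoft Graph app ID.
+az ad app permission add --id <app-id> \
+  --api 00000003-0000-0000-c000-000000000000 \
+  --api-permissions <permission-id>=Role
+
+# Grant tenant-wide admin consent for the added permissions.
+az ad app permission admin-consent --id <app-id>
+```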
+
+## Configure Easy Button
+
+Initiate the APM Guided Configuration to launch the Easy Button template.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a new application config and SSO object.
+
+Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings that can be re-used for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant ID**, **Client ID**, and **Client Secret** you noted down from your registered application
+
+4. Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-oracle-peoplesoft/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-oracle-jde/service-provider-settings.png)
+
+   Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that the content of tokens can't be intercepted, and personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps.
+
+For this scenario, select **Oracle PeopleSoft > Add**
+
+![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-oracle-peoplesoft/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter a **Display Name** for the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on the MyApps portal
+
+2. In **Sign On URL (optional)**, enter the public FQDN of the PeopleSoft application being secured
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-oracle-peoplesoft/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificateΓÇÖs password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+ #### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims. The Easy Button template will display the pre-defined employee ID claim required by PeopleSoft.
+
+![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-oracle-peoplesoft/user-attributes-claims.png)
+
+You can include additional Azure AD attributes, if necessary, but this sample PeopleSoft application only requires the pre-defined attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+ #### Conditional Access Policy
+
+Conditional Access policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+ The selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing (a hosts-file sketch follows this list).
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites, or leave the default if testing.
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
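+
+For testing without public DNS, a hosts-file override on the client is enough to resolve the application's FQDN to the virtual server address. A minimal sketch, assuming an example FQDN and IP:
+
+```bash
+# Test-only sketch: the FQDN and IP below are assumptions, not real values.
+# Linux/macOS: append to /etc/hosts. On Windows, edit
+# C:\Windows\System32\drivers\etc\hosts instead.
+echo "192.168.30.30  peoplesoft.contoso.com" | sudo tee -a /etc/hosts
+```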
+
+ ### Pool Properties
+
+The **Application Pool** tab details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing node or specify an IP and port for the servers hosting the PeopleSoft application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-easy-button-oracle-peoplesoft/application-pool.png)
+
+ #### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the PeopleSoft application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** PS_SSO_UID
+* **Header Value:** %{session.sso.token.last.username}
+
+![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-oracle-peoplesoft/sso-and-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
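+
+To sanity-check what the application will receive, you can send the same header directly to a backend pool server. A hedged sketch; the server name, port, and username are assumptions:
+
+```bash
+# Illustrative only: simulate the header the BIG-IP APM injects
+# (server name, port, and user ID are assumptions).
+curl -v -H "PS_SSO_UID: user1@contoso.com" http://peoplesoft-server:8000/
+```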
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
+
+If the BIG-IP webtop portal is used to access published applications, then a sign-out from there would be processed by the APM to also call the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used; then the user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If making a change to the app is a no-go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. To achieve this, refer to [PeopleSoft Single Logout](#peoplesoft-single-logout) in the next section.
++
+### Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications. Your application should then be published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
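+
+As a quick check from the command line, you can confirm the application now exists in your tenant. A sketch, assuming the display name you chose earlier:
+
+```azurecli-interactive
+# Hedged check: list the service principal the Easy Button created
+# (the display name below is an assumption; use the name you configured).
+az ad sp list --display-name "PeopleSoft" \
+  --query "[].{name:displayName, appId:appId}" -o table
+```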
+
+## Configure PeopleSoft
+
+Oracle Access Manager provides identity and access management across PeopleSoft applications. See [Integrating PeopleSoft with Oracle Access Manager](https://docs.oracle.com/cd/E12530_01/oam.1014/e10356/people.htm) for more info.
+
+### Configure Oracle Access Manager SSO
+
+For this scenario, you will configure Oracle Access Manager to accept SSO from the BIG-IP.
+
+1. Sign into the PeopleSoft console with admin credentials
+
+ ![Screenshot for PeopleSoft console](./media/f5-big-ip-easy-button-oracle-peoplesoft/peoplesoft-console.png)
+
+2. Navigate to **PeopleTools > Security > User Profiles > User Profiles** and create a new user profile
+
+3. Enter **User ID** as **OAMPSFT**
+
+4. Assign **User Role** as *Peoplesoft User* and select **Save**
+
+ ![Screenshot for User Profiles](./media/f5-big-ip-easy-button-oracle-peoplesoft/user-profiles.png)
+
+5. Navigate to **People Tools** > **Web Profile** to select the web profile being used
+
+6. Select the **Security** tab and in the **Public Users** section check **Allow Public Access**
+
+7. Enter **User ID** as *OAMPSFT* along with the account's **Password**
+
+ ![Screenshot for Web Profile configuration](./media/f5-big-ip-easy-button-oracle-peoplesoft/web-profiles.png)
+
+8. Leave the PeopleSoft console and launch the **PeopleTools Application Designer**
+
+9. Right-click the **LDAPAUTH** field and select **View PeopleCode**
+
+ ![Screenshot for Application Designer](./media/f5-big-ip-easy-button-oracle-peoplesoft/application-designer.png)
+
+10. When the **LDAPAUTH** code window opens, locate the **OAMSSO_AUTHENTICATION** function
+
+11. Replace the value assigned to the default **&defaultUserId** with **OAMPSFT**, the user ID that was defined in the web profile
+
+ ![Screenshot for OAMSSO_AUTHENTICATION function](./media/f5-big-ip-easy-button-oracle-peoplesoft/oamsso-authentication-function.png)
+
+12. Save the record and navigate to **PeopleTools > Security > Security Objects > Signon PeopleCode**
+
+13. Enable the **OAMSSO_AUTHENTICATION** function
+
+   ![Screenshot for enabling Signon PeopleCode](./media/f5-big-ip-easy-button-oracle-peoplesoft/enabling-sign-on-people-code.png)
+
+### PeopleSoft Single Logout
+
+PeopleSoft SLO is initiated when you sign out of the Microsoft MyApps portal, which in turn calls the BIG-IP SLO endpoint.
+
+**BIG-IP needs instructions to perform SLO on behalf of the application**. One way is to modify the application's sign-out function to call the BIG-IP SLO endpoint, but this isn't possible with PeopleSoft. Instead, have the BIG-IP listen for sign-out requests to PeopleSoft and trigger SLO when one is detected.
+
+To add SLO support for all PeopleSoft users:
+
+1. Obtain the correct logout URL for the PeopleSoft portal
+
+2. Open the portal through a web browser with debug tools enabled. Find the element with the **PT_LOGOUT_MENU** id and save the URL path with the query parameters. In this example, we have: `/psp/ps/?cmd=logout`
+
+ ![Screenshot for PeopleSoft logout URL](./media/f5-big-ip-easy-button-oracle-peoplesoft/peoplesoft-logout-url.png)
+
+Next, create a BIG-IP iRule for redirecting users to the SAML SP logout endpoint: `/my.logout.php3`
+
+1. Navigate to **Local Traffic > iRules List > Create** and provide a name for your rule
+
+2. Enter the following command lines and then select **Finished**
+
+   ```
+   when HTTP_REQUEST {
+       switch -glob -- [HTTP::uri] {
+           "/psp/ps/?cmd=logout" {
+               HTTP::redirect "/my.logout.php3"
+           }
+       }
+   }
+   ```
+
+To assign this iRule to the BIG-IP Virtual Server
+
+1. Navigate to **Access > Guided Configuration**
+
+2. Select the configuration link for your PeopleSoft application
+
+ ![Screenshot for link for your PeopleSoft application](./media/f5-big-ip-easy-button-oracle-peoplesoft/link-peoplesoft-application.png)
+
+3. From the top navigation bar, select **Virtual Server** and enable **Advanced Settings**
+
+ ![Screenshot for Enable Advanced settings](./media/f5-big-ip-easy-button-oracle-peoplesoft/enable-advanced-settings.png)
+
+4. Scroll down to the bottom and add the iRule you just created
+
+ ![Screenshot for PeopleSoft irule](./media/f5-big-ip-easy-button-oracle-peoplesoft/peoplesoft-irule.png)
+
+5. Select **Save and Next** and continue to deploy your new settings.
+
+For more details, refer to the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+
+### Default to PeopleSoft landing page
+
+While it's best to have the application redirect users to its landing page, you can also create a similar iRule to achieve this on the BIG-IP. In this scenario, redirect all user requests from the root ("/") to the external PeopleSoft portal, which is usually located at `/psc/ps/EXTERNAL/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL`.
+
+1. Navigate to **Local Traffic > iRule**, select **iRule_PeopleSoft** and add these command lines:
+
+   ```
+   when HTTP_REQUEST {
+       switch -glob -- [HTTP::uri] {
+           "/" {
+               HTTP::redirect "/psc/ps/EXTERNAL/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL"
+           }
+           "/psp/ps/?cmd=logout" {
+               HTTP::redirect "/my.logout.php3"
+           }
+       }
+   }
+   ```
+
+2. Assign the iRule to the BIG-IP Virtual Server as done in the steps above
+
+## Next steps
+
+From a browser, connect to the **PeopleSoft** application's external URL or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
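+
+For Azure-hosted backends, one hedged way to enforce that strict path is a network security group rule that only admits application traffic from the BIG-IP's address. A sketch; the resource names, source IP, and port are assumptions:
+
+```azurecli-interactive
+# Sketch only: allow app traffic solely from the BIG-IP self/floating IP
+# (resource group, NSG name, source IP, and port are assumptions).
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myPeopleSoftNsg \
+  --name AllowOnlyBigIp \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes 192.168.30.30 \
+  --destination-port-ranges 8000
+```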
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives you the option to disable the **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access a SHA protected application can be due to any number of factors. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data. If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help understand if the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In this case, go to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from session variables
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
The need for access to privileged Azure resource and Azure AD roles by employees
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders screenshot.":::
-1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based the user's access information.
+1. Set **Show recommendations** to **Enable** to show reviewers the system recommendations based on the user's access information. Recommendations use a 30-day interval: users who have signed in within the past 30 days are recommended continued access, while users who have not are recommended denial of access. Sign-ins are counted regardless of whether they were interactive. The user's last sign-in is also displayed along with the recommendation.
1. Set **Require reason on approval** to **Enable** to require the reviewer to supply a reason for approval.
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
na Previously updated : 02/28/2022 Last updated : 03/01/2022
Azure AD recommendations:
- Is the Azure AD specific implementation of Azure Advisor. - Supports you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state.
-## What is a recommendation object?
+## Recommendation object
Azure AD tracks the status of a recommendation in a related object. This object includes attributes that are used to characterize the recommendation and a body to store the actionable guidance.
To manage your Azure AD recommendations:
- ### Update the status of a resource To update the status of a resource, you have to right click a resource to bring up the edit menu.
+## Who can access it?
+
+The Azure AD recommendations feature supports all editions of Azure AD. In other words, there is no specific subscription or license required to use this feature.
+
+To view your recommendations, you need one of the following roles:
+
+- Global reader
+
+- Security reader
+
+- Reports reader
++
+To manage your recommendations, you need one of the following roles:
+
+- Global admin
+
+- Security admin
+
+- Security operator
+
+- Cloud app admin
+
+- App admin
++++
+## What you should know
+
+- On the recommendations page, you might not see all supported recommendations. This is because Azure AD only displays the recommendations that apply to your tenant.
+
+- Some recommendations have a list of impacted resources associated. This list of resources gives you more context on how the recommendation applies to you and/or which resources you need to address.
+
+**Right now:**
+
+- You can update the status of a recommendation with read-only roles (Global Reader, Security Reader, Reports Reader). This is a known issue that will be fixed.
+
+- The only action recorded in the audit log is completing recommendations.
+
+- Audit logs do not capture actions taken by reader roles.
++ ## Next steps
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Previously updated : 02/10/2022 Last updated : 03/01/2022
Role-assignable groups have the following restrictions:
- You can only set the `isAssignableToRole` property or the **Azure AD roles can be assigned to the group** option for new groups. - The `isAssignableToRole` property is **immutable**. Once a group is created with this property set, it can't be changed. - You can't make an existing group a role-assignable group.-- A maximum of 400 role-assignable groups can be created in a single Azure AD organization (tenant).
+- A maximum of 500 role-assignable groups can be created in a single Azure AD organization (tenant).
## How are role-assignable groups protected?
active-directory Bonusly Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bonusly-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* A user account in Bonusly with Admin permissions > [!NOTE]
-> The Azure AD provisioning integration relies on the [Bonusly Rest API](https://konghq.com/solutions/gateway/), which is available to Bonusly developers.
+> The Azure AD provisioning integration relies on the [Bonusly REST API](https://konghq.com/solutions/gateway/), which is available to Bonusly developers.
## Adding Bonusly from the gallery
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you have:
* A user account in Tableau Online with admin permissions. > [!NOTE]
-> The Azure AD provisioning integration relies on the [Tableau Online Rest API](https://onlinehelp.tableau.com/current/api/rest_api/en-us/help.htm). This API is available to Tableau Online developers.
+> The Azure AD provisioning integration relies on the [Tableau Online REST API](https://onlinehelp.tableau.com/current/api/rest_api/en-us/help.htm). This API is available to Tableau Online developers.
## Add Tableau Online from the Azure Marketplace Before you configure Tableau Online for automatic user provisioning with Azure AD, add Tableau Online from the Azure Marketplace to your list of managed SaaS applications.
advisor Advisor Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-operational-excellence-recommendations.md
Azure Advisor detects that too many of your host pools have validation environme
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow. With traffic analytics, you can view top talkers across Azure and non-Azure deployments, investigate open ports, protocols, and malicious flows in your environment, and optimize your network deployment for performance. You can process flow logs at 10-minute and 60-minute intervals, giving you faster analytics on your traffic. It's a good practice to enable Traffic Analytics for your Azure resources.

## Increase vCPU limits for your deployments for Pay-As-You-Go Subscription (Preview)
-This experience has been created to provide an easy way to increase the quota to help you with your growing needs and avoid any deployment issues due to quota limitations. We have enabled a ΓÇ£Quick FixΓÇ¥ option for limited subscriptions for providing an easy one-click option to increase the quota for the vCPUs from 10 to 20. This simplified approach calls the [Quota Rest API](https://techcommunity.microsoft.com/t5/azure-governance-and-management/using-the-new-quota-rest-api/ba-p/2183670) on behalf of the user to increase the quota.
+This experience has been created to provide an easy way to increase the quota to help you with your growing needs and avoid any deployment issues due to quota limitations. We have enabled a ΓÇ£Quick FixΓÇ¥ option for limited subscriptions for providing an easy one-click option to increase the quota for the vCPUs from 10 to 20. This simplified approach calls the [Quota REST API](https://techcommunity.microsoft.com/t5/azure-governance-and-management/using-the-new-quota-rest-api/ba-p/2183670) on behalf of the user to increase the quota.
## Next steps
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
+
+ Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
+description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
++ Last updated : 02/25/2022+++
+#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
++
+# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
+
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+
+This article shows you how to create a connection to an AKS node.
+
+## Before you begin
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+
+This article also assumes you have an SSH key. You can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTYgen to create the key pair, save it in the OpenSSH format rather than the default PuTTY private key format (.ppk file).
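+
+If you don't have a key pair yet, the following is a minimal sketch; the output file name is an example:
+
+```bash
+# Create an OpenSSH key pair (the file name is an example)
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/aks_node_key
+```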
+
+You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Create an interactive shell connection to a Linux node
+
+To create an interactive shell connection to a Linux node, use `kubectl debug` to run a privileged container on your node. To list your nodes, use `kubectl get nodes`:
+
+```output
+$ kubectl get nodes -o wide
+
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
+```
+
+Use `kubectl debug` to run a container image on the node to connect to it.
+
+```azurecli-interactive
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+```
+
+This command starts a privileged container on your node and connects to it.
+
+```output
+$ kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+Creating debugging pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx with container debugger on node aks-nodepool1-12345678-vmss000000.
+If you don't see a command prompt, try pressing enter.
+root@aks-nodepool1-12345678-vmss000000:/#
+```
+
+This privileged container gives access to the node.
+
+> [!NOTE]
+> You can interact with the node session by running `chroot /host` from the privileged container.
+
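+For example, a typical node-level inspection from inside the debug session might look like the following sketch, which assumes a systemd-based Linux node image:
+
+```bash
+# Switch into the node's root filesystem from the privileged container
+chroot /host
+# Inspect node services, for example recent kubelet log lines
+journalctl -u kubelet -o cat | tail -n 50
+```
+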
+### Remove Linux node access
+
+When done, `exit` the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Create the SSH connection to a Windows node
+
+At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
+
+To connect to another node in the cluster, use `kubectl debug`. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
+
+To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
+
+Open a new terminal window and use `kubectl get pods` to get the name of the pod started by `kubectl debug`.
+
+```output
+$ kubectl get pods
+
+NAME READY STATUS RESTARTS AGE
+node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 1/1 Running 0 21s
+```
+
+In the above example, *node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
+
+Using `kubectl port-forward`, you can open a connection to the deployed pod:
+
+```
+$ kubectl port-forward node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 2022:22
+Forwarding from 127.0.0.1:2022 -> 22
+Forwarding from [::1]:2022 -> 22
+```
+
+The above example begins forwarding network traffic from port 2022 on your development computer to port 22 on the deployed pod. When using `kubectl port-forward` to open a connection and forward network traffic, the connection remains open until you stop the `kubectl port-forward` command.
+
+Open a new terminal and use `kubectl get nodes` to show the internal IP address of the Windows Server node:
+
+```output
+$ kubectl get nodes -o wide
+
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
+```
+
+In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
+
+Create an SSH connection to the Windows Server node using the internal IP address. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You are then provided with the command prompt of your Windows Server node:
+
+```output
+$ ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
+
+The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
+ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
+Are you sure you want to continue connecting (yes/no)? yes
+
+[...]
+
+Microsoft Windows [Version 10.0.17763.1935]
+(c) 2018 Microsoft Corporation. All rights reserved.
+
+azureuser@aksnpwin000000 C:\Users\azureuser>
+```
+
+The above example connects to port 22 on the Windows Server node through port 2022 on your development computer.
+
+> [!NOTE]
+> If you prefer to use password authentication, use `-o PreferredAuthentications=password`. For example:
+>
+> ```console
+> ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' -o PreferredAuthentications=password azureuser@10.240.0.67
+> ```
+
+### Remove SSH access
+
+When done, `exit` the SSH session, stop any port forwarding, and then `exit` the interactive container session. After the interactive container session closes, delete the pod used for SSH access with `kubectl delete pod`.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Next steps
+
+If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
++
+<!-- INTERNAL LINKS -->
+[view-kubelet-logs]: kubelet-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-windows-rdp]: rdp.md
+[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
+[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
+[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
Alternative remediations are investigated by AKS engineers if auto-repair is uns
If AKS finds multiple unhealthy nodes during a health check, each node is repaired individually before another repair begins.
+## Node Autodrain
+[Scheduled Events][scheduled-events] can occur on the underlying virtual machines (VMs) in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node autodrain to attempt a cordon and drain of the affected node, which allows for a graceful reschedule of any affected workloads on that node (see the sketch after the table below).
++
+The following table shows the node events, and the actions they cause for AKS node autodrain.
+
+| Event | Description | Action |
+| | | |
+| Freeze | The VM is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files. | No action |
+| Reboot | The VM is scheduled for reboot. The VM's non-persistent memory is lost. | No action |
+| Redeploy | The VM is scheduled to move to another node. The VM's ephemeral disks are lost. | Cordon and drain |
+| Preempt | The spot VM is being deleted. The VM's ephemeral disks are lost. | Cordon and drain |
+| Terminate | The VM is scheduled to be deleted.| Cordon and drain |
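+
+To watch autodrain behavior as it happens, one hedged approach is to stream node-scoped events from the cluster:
+
+```bash
+# Sketch: watch node-scoped events; cordon and drain activity shows up here
+kubectl get events --field-selector involvedObject.kind=Node --watch
+```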
+++

## Limitations

In many cases, AKS can determine if a node is unhealthy and attempt to repair the issue, but there are also cases where AKS either can't repair the issue or can't detect that an issue exists. For example, AKS can't detect issues if a node's status isn't being reported due to an error in network configuration, or if a node has failed to initially register as a healthy node.
In many cases, AKS can determine if a node is unhealthy and attempt to repair th
Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.

<!-- LINKS - External -->

<!-- LINKS - Internal -->
[availability-zones]: ./availability-zones.md
[vm-updates]: ../virtual-machines/maintenance-and-updates.md
+[scheduled-events]: ../virtual-machines/linux/scheduled-events.md
+[spot-node-pools]: spot-node-pool.md
aks Open Service Mesh Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-azure-monitor.md
osm metrics enable --namespace bookwarehouse
## Apply ConfigMap
-Create the following ConfigMap in `kube-system`, which will tell AzMon what namespaces should be monitored. For instance, for the bookbuyer / bookstore demo, the ConfigMap would look as follows:
+Create the following ConfigMap in `kube-system`, which will tell Azure Monitor what namespaces should be monitored. For instance, for the bookbuyer / bookstore demo, the ConfigMap would look as follows:
```yaml kind: ConfigMap
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
+
+ Title: Resize node pools in Azure Kubernetes Service (AKS)
+description: Learn how to resize node pools for a cluster in Azure Kubernetes Service (AKS) by cordoning and draining.
++ Last updated : 02/24/2022
+#Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads.
++
+# Resize node pools in Azure Kubernetes Service (AKS)
+
+Due to an increasing number of deployments or to run a larger workload, you may want to change the virtual machine scale set plan or resize AKS instances. However, as per [support policies for AKS][aks-support-policies]:
+
+> AKS agent nodes appear in the Azure portal as regular Azure IaaS resources. But these virtual machines are deployed into a custom Azure resource group (usually prefixed with MC_*). You cannot do any direct customizations to these nodes using the IaaS APIs or resources. Any custom changes that are not done via the AKS API will not persist through an upgrade, scale, update or reboot.
+
+This lack of persistence also applies to the resize operation; thus, resizing AKS instances in this manner isn't supported. In this how-to guide, you'll learn the recommended method to address this scenario.
+
+> [!IMPORTANT]
+> This method is specific to virtual machine scale set-based AKS clusters. When using virtual machine availability sets, you are limited to only one node pool per cluster.
+
+## Example resources
+
+Suppose you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this task, you'll need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we'll call this new node pool `mynodepool`.
++
+```bash
+kubectl get nodes
+
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-31721111-vmss000000 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000001 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000002 Ready agent 10d v1.21.9
+```
+
+```bash
+kubectl get pods -o wide -A
+
+NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+default sampleapp2-74b4b974ff-676sz 1/1 Running 0 93m 10.244.1.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+default sampleapp2-76b6c4c59b-pfgbh 1/1 Running 0 94m 10.244.1.5 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system azure-ip-masq-agent-4n66k 1/1 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system azure-ip-masq-agent-9p4c8 1/1 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system azure-ip-masq-agent-nb7mx 1/1 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system coredns-845757d86-dtvvs 1/1 Running 0 10d 10.244.0.2 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system coredns-845757d86-x27pp 1/1 Running 0 10d 10.244.2.3 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system coredns-autoscaler-5f85dc856b-nfrmh 1/1 Running 0 10d 10.244.2.4 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system csi-azuredisk-node-9nfzt 3/3 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system csi-azuredisk-node-bblsb 3/3 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system csi-azuredisk-node-tjhj4 3/3 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system csi-azurefile-node-9pcr8 3/3 Running 0 3d10h 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system csi-azurefile-node-bh2pc 3/3 Running 0 3d10h 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system csi-azurefile-node-h75gq 3/3 Running 0 3d10h 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system konnectivity-agent-6cd55c69cf-ngdlb 1/1 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system konnectivity-agent-6cd55c69cf-rvvqt 1/1 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system kube-proxy-4wzx7 1/1 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system kube-proxy-g5tvr 1/1 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system kube-proxy-mrv54 1/1 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system metrics-server-774f99dbf4-h52hn 1/1 Running 1 3d10h 10.244.1.3 aks-nodepool1-31721111-vmss000002 <none> <none>
+```
+
+## Create a new node pool with the desired SKU
+
+Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU:
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 3 \
+ --node-vm-size Standard_DS3_v2 \
+ --mode System \
+ --no-wait
+```
+
+> [!NOTE]
+> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
+
+When resizing, be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, see the [az aks nodepool add][az-aks-nodepool-add] reference page.
+
+After a few minutes, the new node pool has been created:
++
+```bash
+kubectl get nodes
+
+NAME STATUS ROLES AGE VERSION
+aks-mynodepool-20823458-vmss000000 Ready agent 23m v1.21.9
+aks-mynodepool-20823458-vmss000001 Ready agent 23m v1.21.9
+aks-mynodepool-20823458-vmss000002 Ready agent 23m v1.21.9
+aks-nodepool1-31721111-vmss000000 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000001 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000002 Ready agent 10d v1.21.9
+```
+
+## Cordon the existing nodes
+
+Cordoning marks specified nodes as unschedulable and prevents any more pods from being added to the nodes.
+
+First, obtain the names of the nodes you'd like to cordon with `kubectl get nodes`. Your output should look similar to the following:
+
+```bash
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-31721111-vmss000000 Ready agent 7d21h v1.21.9
+aks-nodepool1-31721111-vmss000001 Ready agent 7d21h v1.21.9
+aks-nodepool1-31721111-vmss000002 Ready agent 7d21h v1.21.9
+```
+
+Next, using `kubectl cordon <node-names>`, specify the desired nodes in a space-separated list:
+
+```bash
+kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002
+```
+
+```bash
+node/aks-nodepool1-31721111-vmss000000 cordoned
+node/aks-nodepool1-31721111-vmss000001 cordoned
+node/aks-nodepool1-31721111-vmss000002 cordoned
+```
+
+## Drain the existing nodes
+
+> [!IMPORTANT]
+> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow for at least 1 pod replica to be moved at a time, otherwise the drain/evict operation will fail. To check this, you can run `kubectl get pdb -A` and make sure `ALLOWED DISRUPTIONS` is at least 1 or higher.
+
+Draining nodes will cause pods running on them to be evicted and recreated on the other, schedulable nodes.
+
+To drain nodes, use `kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data`, again using a space-separated list of node names:
+
+> [!IMPORTANT]
+> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. For more information, see the [documentation on emptydir][empty-dir].
+
+```bash
+kubectl drain aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 --ignore-daemonsets --delete-emptydir-data
+```
+
+After the drain operation finishes, all pods other than those controlled by daemon sets are running on the new node pool:
+
+```bash
+kubectl get pods -o wide -A
+
+NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+default sampleapp2-74b4b974ff-676sz 1/1 Running 0 15m 10.244.4.5 aks-mynodepool-20823458-vmss000002 <none> <none>
+default sampleapp2-76b6c4c59b-rhmzq 1/1 Running 0 16m 10.244.4.3 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system azure-ip-masq-agent-4n66k 1/1 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system azure-ip-masq-agent-9p4c8 1/1 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system azure-ip-masq-agent-nb7mx 1/1 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system azure-ip-masq-agent-sxn96 1/1 Running 0 49m 10.240.0.9 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system azure-ip-masq-agent-tsq98 1/1 Running 0 49m 10.240.0.8 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system azure-ip-masq-agent-xzrdl 1/1 Running 0 49m 10.240.0.7 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system coredns-845757d86-d2pkc 1/1 Running 0 17m 10.244.3.2 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system coredns-845757d86-f8g9s 1/1 Running 0 17m 10.244.5.2 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system coredns-autoscaler-5f85dc856b-f8xh2 1/1 Running 0 17m 10.244.4.2 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system csi-azuredisk-node-7md2w 3/3 Running 0 49m 10.240.0.7 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system csi-azuredisk-node-9nfzt 3/3 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system csi-azuredisk-node-bblsb 3/3 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system csi-azuredisk-node-lcmtz 3/3 Running 0 49m 10.240.0.9 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system csi-azuredisk-node-mmncr 3/3 Running 0 49m 10.240.0.8 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system csi-azuredisk-node-tjhj4 3/3 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system csi-azurefile-node-29w6z 3/3 Running 0 49m 10.240.0.9 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system csi-azurefile-node-4nrx7 3/3 Running 0 49m 10.240.0.7 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system csi-azurefile-node-9pcr8 3/3 Running 0 3d11h 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system csi-azurefile-node-bh2pc 3/3 Running 0 3d11h 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system csi-azurefile-node-gqqnv 3/3 Running 0 49m 10.240.0.8 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system csi-azurefile-node-h75gq 3/3 Running 0 3d11h 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system konnectivity-agent-6cd55c69cf-2bbp5 1/1 Running 0 17m 10.240.0.7 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system konnectivity-agent-6cd55c69cf-7xzxj 1/1 Running 0 16m 10.240.0.8 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system kube-proxy-4wzx7 1/1 Running 0 10d 10.240.0.4 aks-nodepool1-31721111-vmss000000 <none> <none>
+kube-system kube-proxy-7h8r5 1/1 Running 0 49m 10.240.0.7 aks-mynodepool-20823458-vmss000000 <none> <none>
+kube-system kube-proxy-g5tvr 1/1 Running 0 10d 10.240.0.6 aks-nodepool1-31721111-vmss000002 <none> <none>
+kube-system kube-proxy-mrv54 1/1 Running 0 10d 10.240.0.5 aks-nodepool1-31721111-vmss000001 <none> <none>
+kube-system kube-proxy-nqmnj 1/1 Running 0 49m 10.240.0.9 aks-mynodepool-20823458-vmss000002 <none> <none>
+kube-system kube-proxy-zn77s 1/1 Running 0 49m 10.240.0.8 aks-mynodepool-20823458-vmss000001 <none> <none>
+kube-system metrics-server-774f99dbf4-2x6x8 1/1 Running 0 16m 10.244.4.4 aks-mynodepool-20823458-vmss000002 <none> <none>
+```
+
+### Troubleshooting
+
+You may see an error like the following:
+> Error when evicting pods/[podname] -n [namespace] (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
+
+By default, your cluster has AKS-managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, there are two `coredns` pods running and one of them is being recreated and is unavailable, the pod disruption budget prevents the other from being evicted. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
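+
+To check the budgets in your cluster and confirm that at least one disruption is currently allowed, you can run `kubectl get pdb -A`. The output below is illustrative:
+
+```bash
+kubectl get pdb -A
+
+NAMESPACE     NAME          MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
+kube-system   coredns-pdb   1               N/A               1                     10d
+```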
+
+> [!TIP]
+> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see:
+> * [Plan for availability using a pod disruption budget][pod-disruption-budget]
+> * [Specifying a Disruption Budget for your Application][specify-disruption-budget]
+> * [Disruptions][disruptions]
+
+## Remove the existing node pool
+
+To delete the existing node pool, use the Azure portal or the [az aks nodepool delete][az-aks-nodepool-delete] command:
+
+```bash
+az aks nodepool delete \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name nodepool1
+```
+
+After completion, the AKS cluster has a single, new node pool with the new, desired SKU size, and all the applications and pods are running properly:
++
+```bash
+kubectl get nodes
+
+NAME STATUS ROLES AGE VERSION
+aks-mynodepool-20823458-vmss000000 Ready agent 63m v1.21.9
+aks-mynodepool-20823458-vmss000001 Ready agent 63m v1.21.9
+aks-mynodepool-20823458-vmss000002 Ready agent 63m v1.21.9
+```
+
+## Next steps
+
+After resizing a node pool by cordoning and draining, learn more about [using multiple node pools][use-multiple-node-pools].
+
+<!-- LINKS -->
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-delete]: /cli/azure/aks#az_aks_delete
+[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete
+[aks-support-policies]: support-policies.md#user-customization-of-agent-nodes
+[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
+[pod-disruption-budget]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
+[empty-dir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
+[specify-disruption-budget]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
+[disruptions]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
+[use-multiple-node-pools]: use-multiple-node-pools.md
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
You can also decide to use both availability zones and fault domains.
Not all host SKUs are available in all regions and availability zones. You can list host availability and any offer restrictions before you start provisioning dedicated hosts. ```azurecli-interactive
-az vm list-skus -l eastus2 -r hostGroups/hosts -o table
+az vm list-skus -l eastus -r hostGroups/hosts -o table
``` ## Create a Host Group
az vm host group create \
-g myDHResourceGroup \ -z 1 \ --platform-fault-domain-count 1 \
+--automatic-placement true
``` ## Create a Dedicated Host
az vm host create \
--host-group myHostGroup \ --name myHost \ --sku DSv3-Type1 \
+--platform-fault-domain 0 \
-g myDHResourceGroup ```
az role assignment create --assignee <id> --role "Contributor" --scope <Resource
Create an AKS cluster, and add the Host Group you just configured. ```azurecli-interactive
-az aks create -g MyResourceGroup -n MyManagedCluster --location westus2 --kubernetes-version 1.20.13 --nodepool-name agentpool1 --node-count 1 --host-group-id <id> --node-vm-size Standard_D2s_v3 --enable-managed-identity --assign-identity <id>
+az aks create -g MyResourceGroup -n MyManagedCluster --location eastus --kubernetes-version 1.20.13 --nodepool-name agentpool1 --node-count 1 --host-group-id <id> --node-vm-size Standard_D2s_v3 --enable-managed-identity --assign-identity <id>
``` ## Add a Dedicated Host Node Pool to an existing AKS cluster
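The command for this step isn't shown above; adding a node pool that targets the host group might look like the following sketch, reusing the placeholder values from the cluster creation command (the node pool name is an assumption):

```azurecli-interactive
az aks nodepool add \
    --resource-group MyResourceGroup \
    --cluster-name MyManagedCluster \
    --name agentpool2 \
    --node-count 1 \
    --host-group-id <id> \
    --node-vm-size Standard_D2s_v3
```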
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
It takes a few minutes for the scale operation to complete.
AKS offers a separate feature to automatically scale node pools with a feature called the [cluster autoscaler](cluster-autoscaler.md). This feature can be enabled per node pool with unique minimum and maximum scale counts per node pool. Learn how to [use the cluster autoscaler per node pool](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
-## Resize a node pool
-
-To increase of number of deployments or run a larger workload, you may want to change the virtual machine scale set plan or resize AKS instances. However, you should not do any direct customizations to these nodes using the IaaS APIs or resources, as any custom changes that are not done via the AKS API will not persist through an upgrade, scale, update or reboot. This means resizing your AKS instances in this manner is not supported.
-
-The recommended method to resize a node pool to the desired SKU size is as follows:
-
-* Create a new node pool with the new SKU size
-* Cordon and drain the nodes in the old node pool in order to move workloads to the new nodes
-* Remove the old node pool.
-
-> [!IMPORTANT]
-> This method is specific to virtual machine scale set-based AKS clusters. When using virtual machine availability sets, you are limited to only one node pool per cluster.
-
-### Create a new node pool with the desired SKU
-
-The following command creates a new node pool with 2 nodes using the `Standard_DS3_v2` VM SKU:
-
-> [!NOTE]
-> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 2 \
- --node-vm-size Standard_DS3_v2 \
- --mode System \
- --no-wait
-```
-
-Be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, please see the [az aks nodepool add][az-aks-nodepool-add] reference page.
-
-### Cordon the existing nodes
-
-Cordoning marks specified nodes as unschedulable and prevents any additional pods from being added to the nodes.
-
-First, obtain the names of the nodes you'd like to cordon with `kubectl get nodes`. Your output should look similar to the following:
-
-```bash
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-31721111-vmss000000 Ready agent 7d21h v1.21.9
-aks-nodepool1-31721111-vmss000001 Ready agent 7d21h v1.21.9
-aks-nodepool1-31721111-vmss000002 Ready agent 7d21h v1.21.9
-```
-
-Next, using `kubectl cordon <node-names>`, specify the desired nodes in a space-separated list:
-
-```bash
-kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002
-```
-
-If succesful, your output should look similar to the following:
-
-```bash
-node/aks-nodepool1-31721111-vmss000000 cordoned
-node/aks-nodepool1-31721111-vmss000001 cordoned
-node/aks-nodepool1-31721111-vmss000002 cordoned
-```
-
-### Drain the existing nodes
-
-> [!IMPORTANT]
-> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow for at least 1 pod replica to be moved at a time, otherwise the drain/evict operation will fail. To check this, you can run `kubectl get pdb -A` and make sure `ALLOWED DISRUPTIONS` is at least 1 or higher.
-
-Draining nodes will cause pods running on them to be evicted and recreated on the other, schedulable nodes.
-
-To drain nodes, use `kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data`, again using a space-separated list of node names:
-
-> [!IMPORTANT]
-> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. Please see the [documentation on emptydir][empty-dir] for more information.
-
-```bash
-kubectl drain aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 --ignore-daemonsets --delete-emptydir-data
-```
-
-> [!TIP]
-> By default, your cluster has AKS_managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, there are two `coredns` pods running, while one of them is getting recreated and is unavailable, the other is unable to be affected due to the pod disruption budget. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
->
-> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see [plan for availability using a pod disruption budget][pod-disruption-budget].
-
-After the drain operation finishes, verify pods are running on the new nodepool:
-
-```bash
-kubectl get pods -o wide -A
-```
-
-### Remove the existing node pool
-
-To delete the existing node pool, see the section on [Deleting a node pool](#delete-a-node-pool).
-
-After completion, the final result is the AKS cluster having a single, new node pool with the new, desired SKU size and all the applications and pods properly running.
- ## Delete a node pool
-If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks node pool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynoodepool* created in the previous steps:
+If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks node pool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps:
> [!CAUTION] > There are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications are unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster.
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[kubernetes-labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ [kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set [capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set
-[empty-dir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
<!-- INTERNAL LINKS --> [aks-windows]: windows-container-cli.md
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[node-image-upgrade]: node-image-upgrade.md [fips]: /azure/compliance/offerings/offering-fips-140-2 [use-tags]: use-tags.md
-[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
-[pod-disruption-budget]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
```console $ helm search repo azure-apim-gateway NAME CHART VERSION APP VERSION DESCRIPTION
- azure-apim-gateway/azure-api-management-gateway 0.3.0 1.1.2 A Helm chart to deploy an Azure API Management ...
+ azure-apim-gateway/azure-api-management-gateway 1.0.0 2.0.0 A Helm chart to deploy an Azure API Management ...
``` ## Deploy the self-hosted gateway to Kubernetes
This article provides the steps for deploying self-hosted gateway component of A
```console helm install azure-api-management-gateway \
- --set gateway.endpoint='<your configuration url>' \
- --set gateway.authKey='<your token>' \
+ --set gateway.configuration.uri='<your configuration url>' \
+ --set gateway.auth.key='<your token>' \
azure-apim-gateway/azure-api-management-gateway ```
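For reference, the full install command with the updated parameter names reads as follows (placeholder values as above):

```console
helm install azure-api-management-gateway \
    --set gateway.configuration.uri='<your configuration url>' \
    --set gateway.auth.key='<your token>' \
    azure-apim-gateway/azure-api-management-gateway
```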
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
Currently, depending on how you delete an API Management instance, the instance
Recovery and other operations on a soft-deleted instance are enabled through [REST API](/rest/api/apimanagement/current-ga/api-management-service/restore) version `2020-06-01-preview` or later, or the Azure SDK for .NET, Go, or Python. > [!TIP]
-> Refer to [Azure REST API Reference](/rest/api/azure/) for tips and tools for calling Azure REST APIs.
+> Refer to [Azure REST API Reference](/rest/api/azure/) for tips and tools for calling Azure REST APIs and [API Management REST](/rest/api/apimanagement/) for additional information specific to API Management.
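+
+As an illustration, listing the soft-deleted instances in a subscription is a single GET against the management endpoint. The following curl sketch assumes you supply your own subscription ID and a valid bearer token:
+
+```bash
+# List soft-deleted API Management instances in a subscription (sketch)
+curl -H "Authorization: Bearer $AZURE_TOKEN" \
+  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.ApiManagement/deletedservices?api-version=2020-06-01-preview"
+```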
| Operation | Description | API Management namespace | Minimum API version | |--|--|--|--|
app-service Configure Authentication Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-github.md
+
+ Title: Configure GitHub authentication
+description: Learn how to configure GitHub authentication as an identity provider for your App Service or Azure Functions app.
+ Last updated : 03/01/2022++
+# Configure your App Service or Azure Functions app to use GitHub login
++
+This article shows how to configure Azure App Service or Azure Functions to use GitHub as an authentication provider.
+
+To complete the procedure in this article, you need a GitHub account. To create a new GitHub account, go to [GitHub].
+
+## <a name="register"> </a>Register your application with GitHub
+
+1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your GitHub app.
+1. Follow the instructions for [creating an OAuth app on GitHub](https://docs.github.com/developers/apps/building-oauth-apps/creating-an-oauth-app). In the **Authorization callback URL** section, enter the HTTPS URL of your app and append the path `/.auth/login/github/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/github/callback`.
+1. On the application page, make note of the **Client ID**, which you will need later.
+1. Under **Client Secrets**, select **Generate a new client secret**.
+1. Make note of the client secret value, which you will need later.
+
+ > [!IMPORTANT]
+ > The client secret is an important security credential. Do not share this secret with anyone or distribute it with your app.
+
+## <a name="secrets"> </a>Add GitHub information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Click **Add identity provider**.
+1. Select **GitHub** in the identity provider dropdown. Paste in the `Client ID` and `Client secret` values that you obtained previously.
+
+ The secret will be stored as a slot-sticky [application setting](./configure-common.md#configure-app-settings) named `GITHUB_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault. A CLI sketch for setting this value appears at the end of this section.
+
+1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
+
+ These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
+
+1. Click **Add**.
+
+You're now ready to use GitHub for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
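+
+If you script your deployments, you can set the same secret from the Azure CLI. This is a sketch; the app and resource group names are placeholders:
+
+```bash
+# Store the GitHub client secret as a slot-sticky app setting
+az webapp config appsettings set \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --slot-settings GITHUB_PROVIDER_AUTHENTICATION_SECRET=<client-secret>
+```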
+
+<!-- URLs. -->
+
+[GitHub]:https://github.com/
+[Azure portal]: https://portal.azure.com/
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
http://<back-end-app-name>.azurewebsites.net
http://<front-end-app-name>.azurewebsites.net ``` > [!NOTE] > If your app restarts, you may have noticed that new data has been erased. This behavior is by design because the sample ASP.NET Core app uses an in-memory database.
In this step, you point the front-end app's server code to access the back-end A
1. Navigate to `http://<back-end-app-name>.azurewebsites.net` to see the items added from the front-end app. Also, add a few items, such as `from back end 1` and `from back end 2`, then refresh the front-end app to see if it reflects the changes.
- :::image type="content" source="./media/tutorial-auth-aad/remote-api-call-run.png" alt-text="Screenshot of an Azure App Service Rest API Sample in a browser window, which shows a To do list app with items added from the front-end app.":::
+ :::image type="content" source="./media/tutorial-auth-aad/remote-api-call-run.png" alt-text="Screenshot of an Azure App Service REST API Sample in a browser window, which shows a To do list app with items added from the front-end app.":::
## Configure auth
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
Title: 'Tutorial: Build and run a custom image in Azure App Service' description: A step-by-step guide to build a custom Linux or Windows image, push the image to Azure Container Registry, and then deploy that image to Azure App Service. Learn how to migrate custom software to App Service in a custom container. Previously updated : 08/04/2021 Last updated : 02/10/2022 keywords: azure app service, web app, linux, windows, docker, container-+ zone_pivot_groups: app-service-containers-windows-linux
zone_pivot_groups: app-service-containers-windows-linux
::: zone pivot="container-windows"
-[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from administrative access, software installations, changes to the global assembly cache, and so on (see [Operating system functionality on Azure App Service](operating-system-functionality.md)). However, using a custom Windows container in App Service lets you make OS changes that your app needs, so it's easy to migrate on-premises app that requires custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml), and then run it in App Service.
+[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from:
+- Administrative access.
+- Software installations.
+- Changes to the global assembly cache.
-![Shows the web app running in a Windows container.](media/tutorial-custom-container/app-running.png)
+For more information, see [Operating system functionality on Azure App Service](operating-system-functionality.md).
+
+You can deploy a custom-configured Windows image from Visual Studio to make the OS changes your app needs, so it's easy to migrate an on-premises app that requires custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml), and then run it in App Service.
+ ## Prerequisites
To complete this tutorial:
- <a href="https://hub.docker.com/" target="_blank">Sign up for a Docker Hub account</a> - <a href="https://docs.docker.com/docker-for-windows/install/" target="_blank">Install Docker for Windows</a>. - <a href="/virtualization/windowscontainers/quick-start/quick-start-windows-10" target="_blank">Switch Docker to run Windows containers</a>.-- <a href="https://www.visualstudio.com/downloads/" target="_blank">Install Visual Studio 2019</a> with the **ASP.NET and web development** and **Azure development** workloads. If you've installed Visual Studio 2019 already:
+- <a href="https://www.visualstudio.com/downloads/" target="_blank">Install Visual Studio 2022</a> with the **ASP.NET and web development** and **Azure development** workloads. If you've installed Visual Studio 2022 already:
- Install the latest updates in Visual Studio by clicking **Help** > **Check for Updates**. - Add the workloads in Visual Studio by clicking **Tools** > **Get Tools and Features**.
To complete this tutorial:
In this step, you set up the local .NET project. - [Download the sample project](https://github.com/Azure-Samples/custom-font-win-container/archive/master.zip).-- Extract (unzip) the *custom-font-win-container.zip* file.
+- Extract (unzip) the *custom-font-win-container-master.zip* file.
-The sample project contains a simple ASP.NET application that uses a custom font that is installed into the Windows font library. It's not necessary to install fonts, but it's an example of an app that is integrated with the underlying OS. To migrate such an app to App Service, you either rearchitect your code to remove the integration, or migrate it as-is in a custom Windows container.
+The sample project contains a simple ASP.NET application that uses a custom font that is installed into the Windows font library. It's not necessary to install fonts. However, the sample is an example of an app that is integrated with the underlying OS. To migrate such an app to App Service, you either rearchitect your code to remove the integration, or migrate it as-is in a custom Windows container.
### Install the font
This font is publicly available from [Google Fonts](https://fonts.google.com/spe
### Run the app
-Open the *custom-font-win-container/CustomFontSample.sln* file in Visual Studio.
+Open the *custom-font-win-container-master/CustomFontSample.sln* file in Visual Studio.
Type `Ctrl+F5` to run the app without debugging. The app is displayed in your default browser. :::image type="content" source="media/tutorial-custom-container/local-app-in-browser.png" alt-text="Screenshot showing the app displayed in the default browser.":::
-Because it uses an installed font, the app can't run in the App Service sandbox. However, you can deploy it using a Windows container instead, because you can install the font in the Windows container.
+Because the app uses an installed font, it can't run in the App Service sandbox. However, you can deploy it using a Windows container instead, because you can install the font in the Windows container.
### Configure Windows container
In Solution Explorer, right-click the **CustomFontSample** project and select **
Select **Docker Compose** > **OK**.
-Your project is now set up to run in a Windows container. A _Dockerfile_ is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
+Your project is now set up to run in a Windows container. A _Dockerfile_ is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
From the Solution Explorer, open **Dockerfile**.
RUN ${source:-obj/Docker/publish/InstallFont.ps1}
You can find _InstallFont.ps1_ in the **CustomFontSample** project. It's a simple script that installs the font. You can find a more complex version of the script in the [Script Center](https://gallery.technet.microsoft.com/scriptcenter/fb742f92-e594-4d0c-8b79-27564c575133). > [!NOTE]
-> To test the Windows container locally, make sure that Docker is started on your local machine.
+> To test the Windows container locally, ensure that Docker is started on your local machine.
> ## Publish to Azure Container Registry
In the publish wizard, select **Container Registry** > **Create New Azure Contai
In the **Create a new Azure Container Registry** dialog, select **Add an account**, and sign in to your Azure subscription. If you're already signed in, select the account containing the desired subscription from the dropdown.
-![Sign in to Azure](./media/tutorial-custom-container/add-an-account.png)
### Configure the registry
-Configure the new container registry based on the suggested values in the following table. When finished, click **Create**.
+Configure the new container registry based on the suggested values in the following table. When finished, select **Create**.
| Setting | Suggested value | For more information | | -- | | -| |**DNS Prefix**| Keep the generated registry name, or change it to another unique name. | |
-|**Resource Group**| Click **New**, type **myResourceGroup**, and click **OK**. | |
+|**Resource Group**| Select **New**, type **myResourceGroup**, and select **OK**. | |
|**SKU**| Basic | [Pricing tiers](https://azure.microsoft.com/pricing/details/container-registry/)| |**Registry Location**| West Europe | |
-![Configure Azure container registry](./media/tutorial-custom-container/configure-registry.png)
A terminal window is opened and displays the image deployment progress. Wait for the deployment to complete.
From the left menu, select **Create a resource** > **Web** > **Web App for Conta
### Configure app basics
-In the **Basics** tab, configure the settings according to the following table, then click **Next: Docker**.
+In the **Basics** tab, configure the settings according to the following table, then select **Next: Docker**.
| Setting | Suggested value | For more information | | -- | | -| |**Subscription**| Make sure the correct subscription is selected. | |
-|**Resource Group**| Select **Create new**, type **myResourceGroup**, and click **OK**. | |
+|**Resource Group**| Select **Create new**, type **myResourceGroup**, and select **OK**. | |
|**Name**| Type a unique name. | The URL of the web app is `https://<app-name>.azurewebsites.net`, where `<app-name>` is your app name. | |**Publish**| Docker container | | |**Operating System**| Windows | | |**Region**| West Europe | |
-|**Windows Plan**| Select **Create new**, type **myAppServicePlan**, and click **OK**. | |
+|**Windows Plan**| Select **Create new**, type **myAppServicePlan**, and select **OK**. | |
Your **Basics** tab should look like this:
-![Shows the Basics tab used to configure the web app.](media/tutorial-custom-container/configure-app-basics.png)
### Configure Windows container
In the **Docker** tab, configure your custom Windows container as shown in the f
### Complete app creation
-Click **Create** and wait for Azure to create the required resources.
+Select **Create** and wait for Azure to create the required resources.
## Browse to the web app When the Azure operation is complete, a notification box is displayed.
-![Shows that the Azure operation is complete.](media/tutorial-custom-container/portal-create-finished.png)
-1. Click **Go to resource**.
+1. Select **Go to resource**.
-2. In the app page, click the link under **URL**.
+2. In the app page, select the link under **URL**.
A new browser page is opened to the following page:
-![Shows the new browser page for the web app.](media/tutorial-custom-container/app-starting.png)
Wait a few minutes and try again, until you get the homepage with the beautiful font you expect:
-![Shows the homepage with the font you configured.](media/tutorial-custom-container/app-running.png)
**Congratulations!** You've migrated an ASP.NET application to Azure App Service in a Windows container. ## See container start-up logs
-It may take some time for the Windows container to load. To see the progress, navigate to the following URL by replacing *\<app-name>* with the name of your app.
+It might take some time for the Windows container to load. To see the progress, go to the following URL by replacing *\<app-name>* with the name of your app.
``` https://<app-name>.scm.azurewebsites.net/api/logstream ```
The streamed logs look like this:
::: zone pivot="container-linux"
-Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command, ['az webapp list-runtimes --linux'](/cli/azure/webapp#az_webapp_list_runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.
+Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command, ['az webapp list-runtimes --linux'](/cli/azure/webapp#az_webapp_list_runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Enable CI/CD from Azure Container Registry to App Service > * Connect to the container using SSH
-Completing this tutorial incurs a small charge in your Azure account for the container registry and can incur additional costs for hosting the container for longer than a month.
+Completing this tutorial incurs a small charge in your Azure account for the container registry and can incur more costs for hosting the container for longer than a month.
## Set up your initial environment
-This tutorial requires version 2.0.80 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+This tutorial requires version 2.0.80 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
- Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
- - Install [Docker](https://docs.docker.com/get-started/#setup), which you use to build Docker images. Installing Docker may require a computer restart.
+ - Install [Docker](https://docs.docker.com/get-started/#setup), which you use to build Docker images. Installing Docker might require a computer restart.
-After installing Docker, open a terminal window and verify that docker is installed:
+After installing Docker, open a terminal window and verify that Docker is installed:
```bash docker --version
Clone the sample repository:
git clone https://github.com/Azure-Samples/docker-django-webapp-linux.git --config core.autocrlf=input ```
-Be sure to include the `--config core.autocrlf=input` argument to guarantee proper line endings in files that are used inside the Linux container:
+Ensure that you include the `--config core.autocrlf=input` argument to guarantee proper line endings in files that are used inside the Linux container:
-Then go into that folder:
+Then, navigate to the folder:
```terminal cd docker-django-webapp-linux
Instead of using git clone, you can visit [https://github.com/Azure-Samples/dock
Unpack the ZIP file into a folder named *docker-django-webapp-linux*.
-Then open a terminal window in that *docker-django-webapp-linux* folder.
+Then, open a terminal window in the *docker-django-webapp-linux* folder.
## (Optional) Examine the Docker file
ENTRYPOINT ["init.sh"]
This [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command specifies the port with the `-p` argument followed by the name of the image. `-it` lets you stop it with `Ctrl+C`. > [!TIP]
- > If you are running on Windows and see the error, *standard_init_linux.go:211: exec user process caused "no such file or directory"*, the *init.sh* file contains CR-LF line endings instead of the expected LF endings. This error happens if you used git to clone the sample repository but omitted the `--config core.autocrlf=input` parameter. In this case, clone the repository again with the `--config`` argument. You might also see the error if you edited *init.sh* and saved it with CRLF endings. In this case, save the file again with LF endings only.
+ > If you're running on Windows and see the error, *standard_init_linux.go:211: exec user process caused "no such file or directory"*, the *init.sh* file contains CR-LF line endings instead of the expected LF endings. This error happens if you used git to clone the sample repository but omitted the `--config core.autocrlf=input` parameter. In this case, clone the repository again with the `--config` argument. You might also see the error if you edited *init.sh* and saved it with CRLF endings. In this case, save the file again with LF endings only.
1. Browse to `http://localhost:8000` to verify the web app and container are functioning correctly.
- ![Test web app locally](./media/app-service-linux-using-custom-docker-image/app-service-linux-browse-local.png)
+ :::image type="content" source="./media/app-service-linux-using-custom-docker-image/app-service-linux-browse-local.png" alt-text="Test web app locally.":::
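+
+For reference, the `docker run` command described above might look like the following sketch. The image name matches the one tagged later in this tutorial, and port 8000 comes from the test URL:
+
+```bash
+docker run -it -p 8000:8000 appsvc-tutorial-custom-image
+```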
## Create a resource group
-In this section and those that follow, you provision resources in Azure to which you push the image and then deploy a container to Azure App Service. You start by creating a resource group in which to collect all these resources.
+In this section and the following sections, you prepare resources in Azure to which you push the image and then deploy a container to Azure App Service. Start by creating a resource group in which to collect all these resources.
Run the [az group create](/cli/azure/group#az_group_create) command to create a resource group:
In this section, you push the image to Azure Container Registry from which App S
az acr create --name <registry-name> --resource-group myResourceGroup --sku Basic --admin-enabled true ```
- Replace `<registry-name>` with a suitable name for your registry. The name must contain only letters and numbers and must be unique across all of Azure.
+ Replace `<registry-name>` with a suitable name for your registry. The name must contain only letters and numbers, and it must be unique across all of Azure.
1. Run the [`az acr show`](/cli/azure/acr#az_acr_show) command to retrieve credentials for the registry:
In this section, you push the image to Azure Container Registry from which App S
You use the same registry name in all the remaining steps of this section.
-1. Once the login succeeds, tag your local Docker image for the registry:
+1. When the login succeeds, tag your local Docker image for the registry:
```bash docker tag appsvc-tutorial-custom-image <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
To deploy a container to Azure App Service, you first create a web app on App Se
``` Replace the following values:
- - `<principal-id>` with the service principal ID from the `az webapp identity assign` command
- - `<registry-name>` with the name of your container registry
- - `<subscription-id>` with the subscription ID retrieved from the `az account show` command
+ - `<principal-id>` with the service principal ID from the `az webapp identity assign` command.
+ - `<registry-name>` with the name of your container registry.
+ - `<subscription-id>` with the subscription ID retrieved from the `az account show` command.
For more information about these permissions, see [What is Azure role-based access control](../role-based-access-control/overview.md).
You can complete these steps once the image is pushed to the container registry
Replace `<app-name>` with the name of your web app and replace `<registry-name>` in two places with the name of your registry. - When using a registry other than Docker Hub (as this example shows), `--docker-registry-server-url` must be formatted as `https://` followed by the fully qualified domain name of the registry.
- - The message, "No credential was provided to access Azure Container Registry. Trying to look up..." tells you that Azure is using the app's managed identity to authenticate with the container registry rather than asking for a username and password.
- - If you encounter the error, "AttributeError: 'NoneType' object has no attribute 'reserved'", make sure your `<app-name>` is correct.
+ - The message, "No credential was provided to access Azure Container Registry. Trying to look up..." indicates that Azure is using the app's managed identity to authenticate with the container registry rather than asking for a username and password.
+ - If you encounter the error, "AttributeError: 'NoneType' object has no attribute 'reserved'", ensure that your `<app-name>` is correct.
> [!TIP] > You can retrieve the web app's container settings at any time with the command `az webapp config container show --name <app-name> --resource-group myResourceGroup`. The image is specified in the property `DOCKER_CUSTOM_IMAGE_NAME`. When the web app is deployed through Azure DevOps or Azure Resource Manager templates, the image can also appear in a property named `LinuxFxVersion`. Both properties serve the same purpose. If both are present in the web app's configuration, `LinuxFxVersion` takes precedence.
-1. Once the `az webapp config container set` command completes, the web app should be running in the container on App Service.
+1. When the `az webapp config container set` command completes, the web app should be running in the container on App Service.
- To test the app, browse to `https://<app-name>.azurewebsites.net`, replacing `<app-name>` with the name of your web app. On first access, it may take some time for the app to respond because App Service must pull the entire image from the registry. If the browser times out, just refresh the page. Once the initial image is pulled, subsequent tests will run much faster.
+ To test the app, browse to `https://<app-name>.azurewebsites.net`, replacing `<app-name>` with the name of your web app. On first access, it might take some time for the app to respond because App Service must pull the entire image from the registry. If the browser times out, just refresh the page. Once the initial image is pulled, subsequent tests will run much faster.
- ![Successful test of the web app on Azure](./media/app-service-linux-using-custom-docker-image/app-service-linux-browse-azure.png)
+ :::image type="content" source="./media/app-service-linux-using-custom-docker-image/app-service-linux-browse-azure.png" alt-text="Successful test of the web app on Azure.":::
## Access diagnostic logs
-While you're waiting for App Service to pull in the image, it's helpful to see exactly what App Service is doing by streaming the container logs to your terminal.
+While you're waiting for App Service to pull in the image, it's helpful to see exactly what App Service is doing by streaming the container logs to your terminal.
1. Turn on container logging:
While you're waiting for App Service to pull in the image, it's helpful to see e
## Configure continuous deployment
-Your App Service app now can pull the container image securely from your private container registry. However, it doesn't know when that image is updated in your registry. Each time you push the updated image to the registry, you must manually trigger an image pull by restarting the App Service app. In this step, you enable CI/CD, so that App Service gets notified of a new image and triggers a pull automatically.
+Your App Service app can now pull the container image securely from your private container registry. However, it doesn't know when that image is updated in your registry. Each time you push the updated image to the registry, you must manually trigger an image pull by restarting the App Service app. In this step, you enable CI/CD, so that App Service gets notified of a new image and triggers a pull automatically.
1. Enable CI/CD in App Service.
Your App Service app now can pull the container image securely from your private
> [!TIP] > To see all information about all webhook events, remove the `--query` parameter. >
- > If you're streaming the container log, you should see the message after the webhook ping: `Starting container for site`, because the webhook triggers the app to restart. Since you haven't made anything updates to the image, there's nothing new for App Service to pull.
+ > If you're streaming the container log, you should see the message after the webhook ping: `Starting container for site`, because the webhook triggers the app to restart. Since you haven't made any updates to the image, there's nothing new for App Service to pull.
## Modify the app code and redeploy
In this section, you make a change to the web app code, rebuild the image, and t
docker push <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest ```
-1. Once the image push is complete, the webhook notifies App Service about the push, and App Service tries to pull in the updated image. Wait a few minutes, and then verify that the update has been deployed by browsing to `https://<app-name>.azurewebsites.net`.
+1. When the image push is complete, the webhook notifies App Service about the push, and App Service tries to pull in the updated image. Wait a few minutes, and then verify that the update has been deployed by browsing to `https://<app-name>.azurewebsites.net`.
## Connect to the container using SSH
-SSH enables secure communication between a container and a client. To enable SSH connection to your container, your custom image must be configured for it. Once the container is running, you can open an SSH connection.
+SSH enables secure communication between a container and a client. To enable SSH connection to your container, your custom image must be configured for it. When the container is running, you can open an SSH connection.
### Configure the container for SSH
-The sample app used in this tutorial already has the necessary configuration in the *Dockerfile*, which installs the SSH server and also sets the login credentials. This section is informational only. To connect to the container, skip to the next section
+The sample app used in this tutorial already has the necessary configuration in the *Dockerfile*, which installs the SSH server and also sets the login credentials. This section is informational only. To connect to the container, skip to the next section.
```Dockerfile ENV SSH_PASSWD "root:Docker!"
service ssh start
1. Browse to `https://<app-name>.scm.azurewebsites.net/webssh/host` and sign in with your Azure account. Replace `<app-name>` with the name of your web app.
-1. Once signed in, you're redirected to an informational page for the web app. Select **SSH** at the top of the page to open the shell and use commands.
+1. When you sign in, you're redirected to an informational page for the web app. Select **SSH** at the top of the page to open the shell and use commands.
For example, you can examine the processes running within it using the `top` command. ## Clean up resources
-The resources you created in this article may incur ongoing costs. to clean up the resources, you need only delete the resource group that contains them:
+The resources you created in this article might incur ongoing costs. To clean up the resources, you only need to delete the resource group that contains them:
```azurecli az group delete --name myResourceGroup
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
In your Python code, you access these settings as environment variables with sta
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
+> [!NOTE]
+> If you want to try an alternative approach to connect your app to the Postgres database in Azure, see the [Service Connector version](../service-connector/tutorial-django-webapp-postgres-cli.md) of this tutorial. Service Connector is a new Azure service that is currently in public preview. [Section 4.2](../service-connector/tutorial-django-webapp-postgres-cli.md#42-configure-environment-variables-to-connect-the-database) of that tutorial introduces a simplified process for creating the connection.
+ ### 4.3 Run Django database migrations Django database migrations ensure that the schema in the PostgreSQL on Azure database matches what is described in your code.
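+
+The migration itself is the standard Django management command, typically run in an SSH session on the app. A sketch:
+
+```bash
+python manage.py migrate
+```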
application-gateway Application Gateway Autoscaling Zone Redundant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-autoscaling-zone-redundant.md
Previously updated : 01/18/2022 Last updated : 03/01/2022
Application Gateway and WAF can be configured to scale in two modes: -- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale up or down based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.
+- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale out or in based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.
- **Manual** - You can also choose Manual mode where the gateway won't autoscale. In this mode, if there's more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances. ## Autoscaling and High Availability
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
Azure provides the most extensive global footprint of any cloud provider and is
| Central US | North Europe | | Japan East | | East US | Norway East | | Korea Central | | East US 2 | UK South | | Southeast Asia |
-| South Central US | West Europe | | East Asia |
-| US Gov Virginia | Sweden Central| | |
+| South Central US | West Europe | | East Asia |
+| US Gov Virginia | Sweden Central| | China North 3 |
| West US 2 | | | | | West US 3 | | | |
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Azure provides the most extensive global footprint of any cloud provider and is
| East US | Norway East | | Korea Central | | East US 2 | UK South | | Southeast Asia | | South Central US | West Europe | | East Asia |
-| US Gov Virginia | Sweden Central | | |
+| US Gov Virginia | Sweden Central | | China North 3 |
| West US 2 | | | | | West US 3 | | | |
availability-zones Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/cross-region-replication-azure.md
description: Learn about Cross-region replication in Azure.
Previously updated : 12/10/2021 Last updated : 3/01/2022
Regions are paired for cross-region replication based on proximity and other fac
| Canada |Canada Central |Canada East | | China |China North |China East| | China |China North 2 |China East 2|
+| China |China North 3 |China East 3|
| Europe |North Europe (Ireland) |West Europe (Netherlands) | | France |France Central|France South\*| | Germany |Germany West Central |Germany North\* |
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
Before acquiring an Azure AD token, you must identify what user you want to auth
### Audience
-Request the Azure AD token with a proper audience. For Azure App Configuration, use one of the following audiences. The audience can also be referred to as the *resource* that the token is being requested for.
+Request the Azure AD token with a proper audience. For Azure App Configuration, use the following audience. The audience can also be referred to as the *resource* that the token is being requested for.
-- {configurationStoreName}.azconfig.io
-- *.azconfig.io
-> [!IMPORTANT]
-> When the audience requested is `{configurationStoreName}.azconfig.io`, it must exactly match the `Host` request header (case sensitive) used to send the request.
+`https://azconfig.io`
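+
+For example, with the OAuth 2.0 client credentials flow, a token request for this audience might look like the following sketch (tenant ID, client ID, and client secret are placeholders):
+
+```bash
+curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
+  --data-urlencode "grant_type=client_credentials" \
+  --data-urlencode "client_id=<client-id>" \
+  --data-urlencode "client_secret=<client-secret>" \
+  --data-urlencode "scope=https://azconfig.io/.default"
+```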
### Azure AD authority
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
Title: Create a SQL managed instance using Kubernetes tools
-description: Create a SQL managed instance using Kubernetes tools
+ Title: Create a SQL Managed Instance using Kubernetes tools
+description: Deploy Azure Arc-enabled SQL Managed Instance using Kubernetes tools.
Previously updated : 07/30/2021 Last updated : 02/28/2022
-# Create Azure SQL managed instance using Kubernetes tools
+# Create Azure Arc-enabled SQL Managed Instance using Kubernetes tools
+This article demonstrates how to deploy Azure SQL Managed Instance for Azure Arc with Kubernetes tools.
## Prerequisites You should have already created a [data controller](plan-azure-arc-data-services.md).
-To create a SQL managed instance using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create a SQL managed instance using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) ## Overview
-To create a SQL managed instance, you need to create a Kubernetes secret to store your system administrator login and password securely and a SQL managed instance custom resource based on the SqlManagedInstance custom resource definition.
+To create a SQL Managed Instance, you need to:
+1. Create a Kubernetes secret to store your system administrator login and password securely
+1. Create a SQL Managed Instance custom resource based on the `SqlManagedInstance` custom resource definition
+
+Define both of these items in a yaml file.
## Create a yaml file
-You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files.
+Use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. Use a text editor such as [VS Code](https://code.visualstudio.com/download) that supports syntax highlighting and linting for yaml files.
-This is an example yaml file:
+> [!NOTE]
+> Beginning with the February 2022 release, a `ReadWriteMany` (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
+> If no storage class is specified for backups, the default storage class in Kubernetes is used. If the default is not RWX capable, the SQL Managed Instance installation may not succeed.
-```yaml
-apiVersion: v1
-data:
- password: <your base64 encoded password>
- username: <your base64 encoded username>
-kind: Secret
-metadata:
- name: sql1-login-secret
-type: Opaque
-
-apiVersion: sql.arcdata.microsoft.com/v1
-kind: SqlManagedInstance
-metadata:
- name: sql1
- annotations:
- exampleannotation1: exampleannotationvalue1
- exampleannotation2: exampleannotationvalue2
- labels:
- examplelabel1: examplelabelvalue1
- examplelabel2: examplelabelvalue2
-spec:
- security:
- adminLoginSecret: sql1-login-secret
- scheduling:
- default:
- resources:
- limits:
- cpu: "2"
- memory: 4Gi
- requests:
- cpu: "1"
- memory: 2Gi
-
- primary:
- type: LoadBalancer
- storage:
- backups:
- volumes:
- - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
- size: 5Gi
- data:
- volumes:
- - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
- size: 5Gi
- datalogs:
- volumes:
- - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
- size: 5Gi
- logs:
- volumes:
- - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
- size: 5Gi
-```
+### Example yaml file
+
+See the following example of a yaml file:
+ ### Customizing the login and password
-A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode a system administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template.
+A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password. You will need to base64 encode a system administrator login and password and place them in the placeholder location at `data.password` and `data.username`. Do not include the `<` and `>` symbols provided in the template.
> [!NOTE]
-> For optimum security, using the value 'sa' is not allowed for the login .
+> For optimum security, using the value `sa` is not allowed for the login.
> Follow the [password complexity policy](/sql/relational-databases/security/password-policy#password-complexity). You can use an online tool to base64 encode your desired username and password, or you can use built-in CLI tools depending on your platform.
`echo -n '<your string to encode here>' | base64`
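For example, a minimal sketch; the login `arcadmin` and the password placeholder are illustrative, not recommended values:

```console
# Encode a hypothetical login; the output becomes the value for data.username
echo -n 'arcadmin' | base64    # YXJjYWRtaW4=
# Encode your password the same way for data.password
echo -n '<your complex password>' | base64
# Alternatively, kubectl can create the secret and handle the encoding for you:
# kubectl create secret generic sql1-login-secret --from-literal=username='arcadmin' --from-literal=password='<your complex password>'
```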
### Customizing the name
-The template has a value of 'sql1' for the name attribute. You can change this but it must be characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to 'sql2', you must change the name of the secret from 'sql1-login-secret' to 'sql2-login-secret'
+The template has a value of `sql1` for the name attribute. You can change this value, but it must use characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to `sql2`, you must change the name of the secret from `sql1-login-secret` to `sql2-login-secret`.
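As a sketch (the `sql2` rename is hypothetical), you can update both values together and verify nothing was missed:

```console
# Rename the instance and its login secret in one pass
sed -i 's/sql1/sql2/g' sqlmi.yaml
# Verify no references to the old name remain
grep -n 'sql1' sqlmi.yaml
```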
### Customizing the resource requirements
-You can change the resource requirements - the RAM and core limits and requests - as needed.
+You can change the resource requirements - the RAM and core limits and requests - as needed.
> [!NOTE]
> You can learn more about [Kubernetes resource governance](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).
Requirements for resource limits and requests:
- The cores limit value is **required** for billing purposes.
- The rest of the resource requests and limits are optional.
- The cores limit and request must be a positive integer value, if specified.
-- The minimum of 1 cores is required for the cores request, if specified.
-- The memory value format follows the Kubernetes notation.
-- A minimum of 2Gi is required for memory request, if specified.
-- As a general guideline, you should have 4GB of RAM for each 1 core for production use cases.
+- The minimum of 1 core is required for the cores request, if specified.
+- The memory value format follows the Kubernetes notation.
+- A minimum of 2 Gi is required for memory request, if specified.
+- As a general guideline, you should have 4 GB of RAM for each 1 core for production use cases.
### Customizing service type
-The service type can be changed to NodePort if desired. A random port number will be assigned.
+The service type can be changed to NodePort if desired. A random port number will be assigned.
### Customizing storage
-You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available you can run the command `kubectl get storageclass` to view them. The template has a default value of 'default'. This means that there is a storage class _named_ 'default' not that there is a storage class that _is_ the default. You can also optionally change the size of your storage. You can read more about [storage configuration](./storage-configuration.md).
+You can customize the storage classes for storage to match your environment. If you are not sure which storage classes are available, run the command `kubectl get storageclass` to view them.
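For example (the classes shown are illustrative; your cluster will differ):

```console
kubectl get storageclass
# NAME                PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE
# default (default)   kubernetes.io/azure-disk   Delete          Immediate
# managed-premium     kubernetes.io/azure-disk   Delete          Immediate
```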
+
+The template has a default value of `default`.
+
+For example:
+
+```yml
+storage:
+ data:
+ volumes:
+ - className: default
+```
+
+This example means that there is a storage class named `default` - not that there is a storage class that is the default. You can also optionally change the size of your storage. For more information, see [storage configuration](./storage-configuration.md).
## Creating the SQL managed instance
`kubectl create -n <your target namespace> -f <path to your yaml file>`
Creating the SQL managed instance will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:

> [!NOTE]
-> The example commands below assume that you created a SQL managed instance named 'sql1' and Kubernetes namespace with the name 'arc'. If you used a different namespace/SQL managed instance name, you can replace 'arc' and 'sqlmi' with your names.
+> The example commands below assume that you created a SQL managed instance named `sql1` and a Kubernetes namespace with the name `arc`. If you used a different namespace or SQL managed instance name, replace `arc` and `sql1` with your names.
```console
kubectl get sqlmi/sql1 --namespace arc
kubectl get pods --namespace arc
```
-You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
+You can also check on the creation status of any particular pod by running `kubectl describe pod ...`. This command is especially useful for troubleshooting any issues. For example:
```console
kubectl describe pod/<pod name> --namespace arc
#kubectl describe pod/sql1-0 --namespace arc
```
-## Troubleshooting creation problems
+## Troubleshoot deployment problems
-If you encounter any troubles with creation, please see the [troubleshooting guide](troubleshoot-guide.md).
+If you encounter any trouble with the deployment, see the [troubleshooting guide](troubleshoot-guide.md).
## Next steps
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query objectId -o tsv)
```
-1. Authorize the AAD entity with appropriate permissions:
+1. Authorize the entity with appropriate permissions:
- If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:
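A minimal sketch of such a binding, using the object ID captured above (the binding name is a placeholder):

```console
# Grant the Azure AD entity cluster-admin on the cluster
kubectl create clusterrolebinding arc-user-binding --clusterrole cluster-admin --user="$AAD_ENTITY_OBJECT_ID"
```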
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Title: Azure Arc overview description: Learn about what Azure Arc is and how it helps customers enable management and governance of their hybrid resources with other Azure services and features. Previously updated : 05/25/2021 Last updated : 03/01/2022 # Azure Arc overview
-Today, companies struggle to control and govern increasingly complex environments. These environments extend across data centers, multiple clouds, and edge. Each environment and cloud possesses its own set of disjointed management tools that you need to learn and operate.
+Today, companies struggle to control and govern increasingly complex environments that extend across data centers, multiple clouds, and edge. Each environment and cloud possesses its own set of management tools, and new DevOps and ITOps operational models can be hard to implement across resources.
-In parallel, new DevOps and ITOps operational models are hard to implement, as existing tools fail to provide support for new cloud native patterns.
+Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform.
-Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform. Azure Arc enables you to:
-* Manage your entire environment, with a single pane of glass, by projecting your existing non-Azure, on-premises, or other-cloud resources into Azure Resource Manager.
-* Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
-* Use familiar Azure services and management capabilities, regardless of where they live.
-* Continue using traditional ITOps, while introducing DevOps practices to support new cloud native patterns in your environment.
-* Configure Custom Locations as an abstraction layer on top of Azure Arc-enabled Kubernetes cluster, cluster connect, and cluster extensions.
+Azure Arc provides a centralized, unified way to:
+
+* Manage your entire environment together by projecting your existing non-Azure and on-premises resources into Azure Resource Manager.
+* Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
+* Use familiar Azure services and management capabilities, regardless of where they live.
+* Continue using traditional ITOps while introducing DevOps practices to support new cloud native patterns in your environment.
+* Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions.
:::image type="content" source="./media/overview/azure-arc-control-plane.png" alt-text="Azure Arc management control plane diagram" border="false":::
-Today, Azure Arc allows you to manage the following resource types hosted outside of Azure:
+Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:
-* Servers - both physical and virtual machines running Windows or Linux.
-* Kubernetes clusters - supporting multiple Kubernetes distributions.
-* Azure data services - Azure SQL Managed Instance and PostgreSQL Hyperscale services.
-* SQL Server - enroll instances from any location with [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview).
+* [Servers](servers/overview.md): Manage Windows and Linux physical servers and virtual machines hosted outside of Azure.
+* [Kubernetes clusters](kubernetes/overview.md): Attach and configure Kubernetes clusters running anywhere, with multiple supported distributions.
+* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL Hyperscale (preview) services are currently available.
+* [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure.
-## What does Azure Arc deliver?
+## Key features and benefits
-Key features of Azure Arc include:
+Some of the key scenarios that Azure Arc supports are:
-* Implement consistent inventory, management, governance, and security for your servers across your environment.
+* Implement consistent inventory, management, governance, and security for servers across your environment.
* Configure [Azure VM extensions](./servers/manage-vm-extensions.md) to use Azure management services to monitor, secure, and update your servers.
Key features of Azure Arc include:
* Use GitOps to deploy configuration across one or more clusters from Git repositories.
-* Zero-touch compliance and configuration for your Kubernetes clusters using Azure Policy.
+* Zero-touch compliance and configuration for Kubernetes clusters using Azure Policy.
* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL Hyperscale, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
-* A unified experience viewing your Azure Arc-enabled resources whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
+* A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
-## How much does Azure Arc cost?
+## Pricing
-The following are pricing details for the features available today with Azure Arc.
+Below is pricing information for the features available today with Azure Arc.
### Azure Arc-enabled servers

The following Azure Arc control plane functionality is offered at no extra cost:
-* Resource organization through Azure management groups and tags.
-
-* Searching and indexing through Azure Resource Graph.
-
-* Access and security through Azure RBAC and subscriptions.
-
-* Environments and automation through templates and extensions.
-
-* Update management.
+* Resource organization through Azure management groups and tags
+* Searching and indexing through Azure Resource Graph
+* Access and security through Azure RBAC and subscriptions
+* Environments and automation through templates and extensions
+* Update management
Any Azure service that is used on Azure Arc-enabled servers, such as Microsoft Defender for Cloud or Azure Monitor, will be charged as per the pricing for that service. For more information, see the [Azure pricing page](https://azure.microsoft.com/pricing/).

### Azure Arc-enabled Kubernetes
-Any Azure service that is used on Azure Arc-enabled Kubernetes, such as Microsoft Defender for Cloud or Azure Monitor, will be charged as per the pricing for that service. For more information on pricing for configurations on top of Azure Arc-enabled Kubernetes, see [Azure pricing page](https://azure.microsoft.com/pricing/).
+Any Azure service that is used on Azure Arc-enabled Kubernetes, such as Microsoft Defender for Cloud or Azure Monitor, will be charged as per the pricing for that service.
+
+For more information on pricing for configurations on top of Azure Arc-enabled Kubernetes, see the [Azure pricing page](https://azure.microsoft.com/pricing/).
### Azure Arc-enabled data services
-For information, see [Azure pricing page](https://azure.microsoft.com/pricing/).
+For information, see the [Azure pricing page](https://azure.microsoft.com/pricing/).
## Next steps
-* To learn more about Azure Arc-enabled servers, see the following [overview](./servers/overview.md)
-
-* To learn more about Azure Arc-enabled Kubernetes, see the following [overview](./kubernetes/overview.md)
-
-* To learn more about Azure Arc-enabled data services, see the following [overview](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/)
-
-* To learn more about SQL Server on Azure Arc-enabled servers, see the following [overview](/sql/sql-server/azure-arc/overview)
-
-* Experience Azure Arc-enabled services from the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/)
+* Learn about [Azure Arc-enabled servers](./servers/overview.md).
+* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md).
+* Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/).
+* Learn about [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview).
+* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 02/23/2022 Last updated : 03/01/2022
The following versions of the Windows and Linux operating system are officially
> The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).

> [!NOTE]
-> While Azure Arc-enabled servers supports Amazon Linux, the following do not support this distribution:
+> While Azure Arc-enabled servers supports Amazon Linux, the following features are not supported on this distribution:
>
> * The Dependency agent used by Azure Monitor VM insights
> * Azure Automation Update Management
Service Tags:
URLs:
-| Agent resource | Description |
-|||
-|`azgn*.servicebus.windows.net`|Notification service for extensions|
-|`management.azure.com`|Azure Resource Manager|
-|`login.windows.net`|Azure Active Directory|
-|`login.microsoftonline.com`|Azure Active Directory|
-|`pas.windows.net`|Azure Active Directory|
-|`*.guestconfiguration.azure.com` |Extension and guest configuration services|
-|`*.his.arc.azure.com`|Metadata and hybrid identity services|
-|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|
-|`dc.services.visualstudio.com`|Agent telemetry|
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service|
+| Agent resource | Description | When required| Endpoint used with private link |
+|||--||
+|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
+|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
+|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
+|`login.windows.net`|Azure Active Directory|Always| Public |
+|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
+|`pas.windows.net`|Azure Active Directory|Always| Public |
+|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
+|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
+|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
+|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
+|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
+|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure service and the IP ranges it uses. The information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, use the **AzureCloud** service tag to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
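If you script firewall updates, one way to retrieve the current ranges for a service tag is the Azure CLI, as in this sketch (the region and JMESPath query are illustrative):

```console
# List the address prefixes behind the AzureCloud service tag for one region
az network list-service-tags --location eastus \
  --query "values[?name=='AzureCloud'].properties.addressPrefixes | [0]" -o json
```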
The Connected Machine agent for Windows can be installed by using one of the fol
* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
* From a PowerShell session using a scripted method.
+Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+
After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.

* The following installation folders are created during setup.
After installing the Connected Machine agent for Windows, the following system-w
The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
+Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+
After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.

* The following installation folders are created during setup.
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 08/27/2021 Last updated : 02/28/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues
- Bug fixes
+## Version 1.10 - August 2021
+
+### Fixed
+
+- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/policy/concepts/guest-configuration-policy-effects.md).
+- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours.
+
## Version 1.9 - July 2021

### New features
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 09/01/2021 Last updated : 02/28/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.15 - February 2022
+
+### New features
+
+- Network check improvements during onboarding:
+ - Added TLS 1.2 check
+  - Azure Arc network endpoints are now required; onboarding will abort if they are not accessible
+ - New `--skip-network-check` flag to override the new network check behavior
+- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.
+
+### Fixed
+
+- Improved reliability when disconnecting the agent from Azure
+- Extended the device login timeout to 5 minutes
+- Removed resource constraints for Azure Monitor Agent to support high throughput scenarios
+
## Version 1.14 - January 2022

### Fixed
This page is updated monthly, so revisit it regularly. If you're looking for ite
- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events.
- Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.
-## Version 1.10 - August 2021
-
-### Fixed
-
-- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/policy/concepts/guest-configuration-policy-effects.md).
-- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours.
-
## Next steps

- Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Tutorial Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md
before you begin.
## Create a policy assignment
-In this tutorial, you create a policy assignment and assign the _\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines_ policy definition.
+In this tutorial, you create a policy assignment and assign the _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition.
1. Launch the Azure Policy service in the Azure portal by clicking **All services**, then searching for and selecting **Policy**.
In this tutorial, you create a policy assignment and assign the _\[Preview]: Log
For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md).
-1. Search through the policy definitions list to find the _\[Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines_
- definition if you have enabled the Arc-enabled servers agent on a Windows-based machine. For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
-
- :::image type="content" source="./media/tutorial-assign-policy-portal/select-available-definition.png" alt-text="Find the correct policy definition" border="false":::
+1. Search through the policy definitions list to find the _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_
+ definition if you have enabled the Arc-enabled servers agent on a Windows-based machine. For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
1. The **Assignment name** is automatically populated with the policy name you selected, but you can
- change it. For this example, leave _\[Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines_ or _\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines_ depending on which one you selected. You can also add an optional **Description**. The description provides details about this policy assignment.
+ change it. For this example, leave _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_ or _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ depending on which one you selected. You can also add an optional **Description**. The description provides details about this policy assignment.
**Assigned by** will automatically fill based on who is logged in. This field is optional, so custom values can be entered.
environment.
## Identify non-compliant resources
-Select **Compliance** in the left side of the page. Then locate the **\[Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines** policy assignment you created.
+Select **Compliance** in the left side of the page. Then locate the **\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines** policy assignment you created.
:::image type="content" source="./media/tutorial-assign-policy-portal/policy-compliance.png" alt-text="Compliance details on the Policy Compliance page" border="false":::
condition triggers evaluation of the existence condition for the related resourc
To remove the assignment created, follow these steps:

1. Select **Compliance** (or **Assignments**) in the left side of the Azure Policy page and locate
- the **\[Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines** policy assignment you created.
+ the **\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines** or **\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines** policy assignment you created.
1. Right-click the policy assignment and select **Delete assignment**.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 02/17/2022 Last updated : 02/28/2022
To change a configuration property, run the following command:
`azcmagent config set <propertyName> <propertyValue>`
+If the property you're changing supports a list of values, you can use the `--add` and `--remove` flags to add or remove specific items without having to re-type the entire list.
+
+`azcmagent config set <propertyName> <propertyValue> --add`
+
+`azcmagent config set <propertyName> <propertyValue> --remove`
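For example, a sketch that assumes `proxy.bypass` (covered later in this article) accepts list values:

```console
# Add one service to the existing bypass list without retyping it
azcmagent config set proxy.bypass "ARM" --add
# Remove that service again
azcmagent config set proxy.bypass "ARM" --remove
```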
To clear a configuration property's value, run the following command:

`azcmagent config clear <propertyName>`

## Upgrading agent
-The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of machine agent and recommends that you upgrade to the latest version. It won't notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
+The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements.
+The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, and uninstalling the Azure Connected Machine Agent will not require you to restart your server.
The following table describes the methods supported to perform the agent upgrade.
azcmagent config clear proxy.url
You do not need to restart any services when reconfiguring the proxy settings with the `azcmagent config` command.
+### Proxy bypass for private endpoints
+
+Starting with agent version 1.15, you can also specify services that should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory and Azure Resource Manager traffic to go through your proxy server to public endpoints, but want Arc traffic to skip the proxy and communicate with a private IP address on your network.
+
+The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server.
+
+| Proxy bypass value | Affected endpoints |
+| | |
+| AAD | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` |
+| ARM | `management.azure.com` |
+| Arc | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
+
+To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Arc traffic, run the following command:
+
+```bash
+azcmagent config set proxy.url "http://ProxyServerFQDN:port"
+azcmagent config set proxy.bypass "Arc"
+```
+
+To provide a list of services, separate the service names by commas:
+
+```bash
+azcmagent config set proxy.bypass "ARM,Arc"
+```
+
+To clear the proxy bypass, run the following command:
+
+```bash
+azcmagent config clear proxy.bypass
+```
+
+You can view the effective proxy server and proxy bypass configuration by running `azcmagent show`.
### Windows environment variables

On Windows, the Azure Connected Machine agent will first check the `proxy.url` agent configuration property (starting with agent version 1.13), then the system-wide `HTTPS_PROXY` environment variable to determine which proxy server to use. If both are empty, no proxy server is used, even if the default Windows system-wide proxy setting is configured.
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 10/01/2021 Last updated : 02/28/2022 # Use Azure Private Link to securely connect networks to Azure Arc
When connecting a machine or server with Azure Arc-enabled servers for the first
1. Select **Next: Tags**.
-1. If you selected **Add multiple servers**, on the **Authentication** page, select the service principal created for Azure Arc-enabled servers from the drop down list. If you have not created a service principal for Azure Arc-enabled servers, first review [how to create a service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to familiarize yourself with permissions required and the steps to create one. Select **Next: Tags** to continue.
+1. If you selected **Add multiple servers**, on the **Authentication** page, select the service principal created for Azure Arc-enabled servers from the drop-down list. If you have not created a service principal for Azure Arc-enabled servers, first review [how to create a service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to familiarize yourself with permissions required and the steps to create one. Select **Next: Tags** to continue.
1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards.
When connecting a machine or server with Azure Arc-enabled servers for the first
After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you may need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
-The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and installed with your local package manager.
+The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager.
The script will return status messages letting you know if onboarding was successful after it completes.
-> [!NOTE]
-> If youΓÇÖre deploying the Connected Machine agent on a Linux server, there may be a five minute delay during the network connectivity check followed by an error saying that `you do not have access to login.windows.net`, even if your firewall is configured correctly. This is a known issue and will be fixed in a future agent release. Onboarding should still succeed if your firewall is configured correctly.
+> [!TIP]
+> Network traffic from the Azure Connected Machine agent to Azure Active Directory and Azure Resource Manager will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You may also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server.
### Configure an existing Azure Arc-enabled server
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
description: Learn how to develop code for Azure Cache for Redis.
Previously updated : 11/3/2021 Last updated : 02/25/2022
Azure Cache for Redis requires TLS encrypted communications by default. TLS vers
If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/2021-06-01/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
+### Azure TLS Certificate Change
+
+Microsoft is updating Azure services to use TLS server certificates from a different set of Certificate Authorities (CAs). This change is rolled out in phases from August 13, 2020 to October 26, 2020 (estimated). Azure is making this change because [the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951). The problem was reported on July 1, 2020 and applies to multiple popular Public Key Infrastructure (PKI) providers worldwide. Most TLS certificates used by Azure services today come from the *Baltimore CyberTrust Root* PKI. The Azure Cache for Redis service will continue to be chained to the Baltimore CyberTrust Root. Its TLS server certificates, however, will be issued by new Intermediate Certificate Authorities (ICAs) starting on October 12, 2020.
+
+> [!NOTE]
+> This change is limited to services in public [Azure regions](https://azure.microsoft.com/global-infrastructure/geographies/). It excludes sovereign (e.g., China) or government clouds.
+>
+>
+
+#### Does this change affect me?
+
+We expect that most Azure Cache for Redis customers aren't affected by the change. Your application may be impacted if it explicitly specifies a list of acceptable certificates, a practice known as "certificate pinning". If it's pinned to an intermediate or leaf certificate instead of the Baltimore CyberTrust Root, you should **take immediate actions** to change the certificate configuration.
+
+The following table provides information about the certificates that are being rolled. Depending on which certificate your application uses, you might need to update it to prevent loss of connectivity to your Azure Cache for Redis instance.
+
+| CA Type | Current | Post Rolling (Oct 12, 2020) | Action |
+| -- | -- | -- | -- |
+| Root | Thumbprint: d4de20d05e66fc53fe1a50882c78db2852cae474<br><br> Expiration: Monday, May 12, 2025, 4:59:00 PM<br><br> Subject Name:<br> CN = Baltimore CyberTrust Root<br> OU = CyberTrust<br> O = Baltimore<br> C = IE | Not changing | None |
+| Intermediates | Thumbprints:<br> CN = Microsoft IT TLS CA 1<br> Thumbprint: 417e225037fbfaa4f95761d5ae729e1aea7e3a42<br><br> CN = Microsoft IT TLS CA 2<br> Thumbprint: 54d9d20239080c32316ed9ff980a48988f4adf2d<br><br> CN = Microsoft IT TLS CA 4<br> Thumbprint: 8a38755d0996823fe8fa3116a277ce446eac4e99<br><br> CN = Microsoft IT TLS CA 5<br> Thumbprint: Ad898ac73df333eb60ac1f5fc6c4b2219ddb79b7<br><br> Expiration: Friday, May 20, 2024 5:52:38 AM<br><br> Subject Name:<br> OU = Microsoft IT<br> O = Microsoft Corporation<br> L = Redmond<br> S = Washington<br> C = US<br> | Thumbprints:<br> CN = Microsoft RSA TLS CA 01<br> Thumbprint: 703d7a8f0ebf55aaa59f98eaf4a206004eb2516a<br><br> CN = Microsoft RSA TLS CA 02<br> Thumbprint: b0c2d2d13cdd56cdaa6ab6e2c04440be4a429c75<br><br> Expiration: Tuesday, October 8, 2024 12:00:00 AM;<br><br> Subject Name:<br> O = Microsoft Corporation<br> C = US<br> | Required |
+
+#### What actions should I take?
+
+If your application uses the operating system certificate store or pins the Baltimore root among others, no action is needed.
+
+If your application pins any intermediate or leaf TLS certificate, we recommend you pin the following roots:
+
+| Certificate | Thumbprint |
+| -- | -- |
+| [Baltimore Root CA](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | d4de20d05e66fc53fe1a50882c78db2852cae474 |
+| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 73a5e64a3bff8316ff0edccc618a906e4eae4d74 |
+| [Digicert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | df3c24f9bfd666761b268073fe06d1cc8d4f82a4 |
+
+> [!TIP]
+> Both the intermediate and leaf certificates are expected to change frequently. We recommend that you not take a dependency on them. Instead, pin your application to a root certificate, since it rolls less frequently.
+>
+>
+
+To continue to pin intermediate certificates, add the following to the pinned intermediate certificates list, which includes a few more to minimize future changes:
+
+| Common name of the CA | Thumbprint |
+| -- | -- |
+| [Microsoft RSA TLS CA 01](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2001.crt) | 703d7a8f0ebf55aaa59f98eaf4a206004eb2516a |
+| [Microsoft RSA TLS CA 02](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2002.crt) | b0c2d2d13cdd56cdaa6ab6e2c04440be4a429c75 |
+| [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 2f2877c5d778c31e0f29c7e371df5471bd673173 |
+| [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | e7eea674ca718e3befd90858e09f8372ad0ae2aa |
+| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 6c3af02e7f269aa73afd0eff2a88a4a1f04ed1e5 |
+| [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 30e01761ab97e59a06b41ef20af6f2de7ef4f7b0 |
+
+If your application validates certificates in code, you need to modify it to recognize the properties (for example, the issuers and thumbprints) of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
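To check which chain your cache currently presents, a sketch like the following can help (the cache name is a placeholder):

```console
# Print the issuer and SHA-1 fingerprint of the certificate served on port 6380
openssl s_client -connect contosocache.redis.cache.windows.net:6380 \
  -servername contosocache.redis.cache.windows.net </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -fingerprint -sha1
```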
+ ## Client library-specific guidance - [StackExchange.Redis (.NET)](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis)
azure-cache-for-redis Cache Best Practices Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-memory-management.md
## Eviction policy
-Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Cache for Redis is `volatile-lru`, which means that only keys that have a TTL value set are eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, then you may want to consider the `allkeys-lru` policy.
+Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Cache for Redis is `volatile-lru`, which means that only keys that have a TTL value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, then the system won't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, then you may want to consider the `allkeys-lru` policy.
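For example, a sketch with `redis-cli` (the key name and TTL are illustrative):

```console
# Under volatile-lru, only keys with a TTL are candidates for eviction
redis-cli SET session:42 "cached data"
redis-cli EXPIRE session:42 3600   # the key is now eligible for eviction
redis-cli TTL session:42           # remaining seconds, for example 3599
```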
## Keys expiration
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
Previously updated : 3/31/2021 Last updated : 02/28/2022 # Azure Cache for Redis with Azure Private Link
In this section, you'll create a new Azure Cache for Redis instance with a priva
10. Select the **Review + create** tab or select the **Review + create** button.
-11. Verify that all the information is correct and select **Create** to provision the virtual network.
+11. Verify that all the information is correct and select **Create** to create the virtual network.
### Create an Azure Cache for Redis instance with a private endpoint
It takes a while for the cache to create. You can monitor progress on the Azure
In this section, you'll add a private endpoint to an existing Azure Cache for Redis instance.
-### Create a virtual network for you existing cache
+### Create a virtual network for your existing cache
To create a virtual network, follow these steps.
To create a virtual network, follow these steps.
1. Select the **Review + create** tab or select the **Review + create** button.
-1. Verify that all the information is correct and select **Create** to provision the virtual network.
+1. Verify that all the information is correct and select **Create** to create the virtual network.
### Create a private endpoint
az network private-endpoint delete --name MyPrivateEndpoint --resource-group MyR
## FAQ
+- [How do I connect to my cache with private endpoint?](#how-do-i-connect-to-my-cache-with-private-endpoint)
- [Why can't I connect to a private endpoint?](#why-cant-i-connect-to-a-private-endpoint) - [What features aren't supported with private endpoints?](#what-features-arent-supported-with-private-endpoints) - [How do I verify if my private endpoint is configured correctly?](#how-do-i-verify-if-my-private-endpoint-is-configured-correctly)
az network private-endpoint delete --name MyPrivateEndpoint --resource-group MyR
- [Are network security groups (NSG) enabled for private endpoints?](#are-network-security-groups-nsg-enabled-for-private-endpoints) - [My private endpoint instance isn't in my VNet, so how is it associated with my VNet?](#my-private-endpoint-instance-isnt-in-my-vnet-so-how-is-it-associated-with-my-vnet)
+### How do I connect to my cache with private endpoint?
+
+Your application should connect to `<cachename>.redis.cache.windows.net` on port `6380`. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in your configuration or connection string.
+
+A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint.
+
+For more information, see [Azure services DNS zone configuration](/azure/private-link/private-endpoint-dns).
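As a sketch, a TLS-capable client such as `redis-cli` version 6 or later would connect like this (the cache name and access key are placeholders):

```console
redis-cli -h contosocache.redis.cache.windows.net -p 6380 --tls -a '<access-key>' PING
```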
+ ### Why can't I connect to a private endpoint? - Private endpoints can't be used with your cache instance if your cache is already a VNet injected cache.
Trying to connect from the Azure portal console is an unsupported scenario where
### How do I verify if my private endpoint is configured correctly?
-You can run a command like `nslookup` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache. The private IP address is found by selecting your **Private endpoint** from your resources. On the resource menu on the left, select **DNS configuration**. In the working pane on the right, you see the IP address for the **Network Interface**.
Go to **Overview** on the Resource menu in the portal. You see the **Host name** for your cache in the working pane. Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache.
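A sketch of a healthy result (the cache name and private IP are illustrative):

```console
nslookup contosocache.redis.cache.windows.net
# Name:    contosocache.privatelink.redis.cache.windows.net
# Address: 10.0.0.5
```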
:::image type="content" source="media/cache-private-link/cache-private-ip-address.png" alt-text="In the Azure portal, private endpoint D N S settings.":::
Refer to our [migration guide](cache-vnet-migration.md) for different approaches
### How can I have multiple endpoints in different virtual networks?
-To have multiple private endpoints in different virtual networks, the private DNS zone must be manually configured to the multiple virtual networks _before_ creating the private endpoint. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+To have multiple private endpoints in different virtual networks, the private DNS zone must be manually configured for the multiple virtual networks *before* creating the private endpoint. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
### What happens if I delete all the private endpoints on my cache?
It's only linked to your VNet. Because it's not in your VNet, NSG rules don't ne
## Next steps

- To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md).
-- To compare various network isolation options for your cache instance, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
+- To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 02/02/2022 Last updated : 02/25/2022
Last updated 02/02/2022
## February 2022
+### TLS Certificate Change
+
+As of May 2022, Azure Cache for Redis rolls over to TLS certificates issued by the DigiCert Global Root G2 CA. The current Baltimore CyberTrust Root expires in May 2025, requiring this change.
+
+We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), which is known as *certificate pinning*.
+
+For more information, see this blog post, which contains instructions on [how to check whether your client application is affected](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-cache-for-redis-tls-upcoming-migration-to-digicert-global/ba-p/3171086). We recommend following the steps in the blog post to avoid losing cache connectivity.
### Active geo-replication for Azure Cache For Redis Enterprise GA

Active geo-replication for Azure Cache for Redis Enterprise is now generally available (GA).
Active geo-replication is a powerful tool that enables Azure Cache for Redis clu
### Support for managed identity in Azure Cache for Redis
-Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Azure Active Directory, and both system-assigned and user-assigned identities are supported. This further allows the service to establish trusted access to storage for uses including data persistence and importing/exporting cache data.
+Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Azure Active Directory, and both system-assigned and user-assigned identities are supported. Support for managed identity further allows the service to establish trusted access to storage for uses including data persistence and importing/exporting cache data.
For more information, see [Managed identity with Azure Cache for Redis (Preview)](cache-managed-identity.md).
Get started with Azure Cache for Redis 6.0, today, and select Redis 6.0 during c
### Diagnostics for connected clients
-Azure Cache for Redis now integrates with Azure diagnostic settings to log information on all client connections to your cache. Logging and then analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. This data could be used to identify the scope of a security breach and for security auditing purposes. Users can route these logs to a destination of their choice, such as a storage account or Event Hub.
+Azure Cache for Redis now integrates with Azure diagnostic settings to log information on all client connections to your cache. Logging and then analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. This data could be used to identify the scope of a security breach and for security auditing purposes. Users can route these logs to a destination of their choice, such as a storage account or Event Hubs.
For more information, see [Monitor Azure Cache for Redis data using diagnostic settings](cache-monitor-diagnostic-settings.md).
Active geo-replication public preview now supports:
### Azure TLS Certificate Change
-Microsoft is updating Azure services to use TLS server certificates from a different set of Certificate Authorities (CAs). This change is rolled out in phases from August 13, 2020 to October 26, 2020 (estimated). Azure is making this change because [the current CA certificates don't one of the CA/Browser Forum Baseline requirements](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951). The problem was reported on July 1, 2020 and applies to multiple popular Public Key Infrastructure (PKI) providers worldwide. Most TLS certificates used by Azure services today come from the *Baltimore CyberTrust Root* PKI. The Azure Cache for Redis service will continue to be chained to the Baltimore CyberTrust Root. Its TLS server certificates, however, will be issued by new Intermediate Certificate Authorities (ICAs) starting on October 12, 2020.
-
-> [!NOTE]
-> This change is limited to services in public [Azure regions](https://azure.microsoft.com/global-infrastructure/geographies/). It excludes sovereign (e.g., China) or government clouds.
->
->
-
-#### Does this change affect me?
-
-We expect that most Azure Cache for Redis customers aren't affected by the change. Your application may be impacted if it explicitly specifies a list of acceptable certificates, a practice known as "certificate pinning". If it's pinned to an intermediate or leaf certificate instead of the Baltimore CyberTrust Root, you should **take immediate actions** to change the certificate configuration.
-
-The following table provides information about the certificates that are being rolled. Depending on which certificate your application uses, you may need to update it to prevent loss of connectivity to your Azure Cache for Redis instance.
-
-| CA Type | Current | Post Rolling (Oct 12, 2020) | Action |
-| -- | -- | -- | -- |
-| Root | Thumbprint: d4de20d05e66fc53fe1a50882c78db2852cae474<br><br> Expiration: Monday, May 12, 2025, 4:59:00 PM<br><br> Subject Name:<br> CN = Baltimore CyberTrust Root<br> OU = CyberTrust<br> O = Baltimore<br> C = IE | Not changing | None |
-| Intermediates | Thumbprints:<br> CN = Microsoft IT TLS CA 1<br> Thumbprint: 417e225037fbfaa4f95761d5ae729e1aea7e3a42<br><br> CN = Microsoft IT TLS CA 2<br> Thumbprint: 54d9d20239080c32316ed9ff980a48988f4adf2d<br><br> CN = Microsoft IT TLS CA 4<br> Thumbprint: 8a38755d0996823fe8fa3116a277ce446eac4e99<br><br> CN = Microsoft IT TLS CA 5<br> Thumbprint: Ad898ac73df333eb60ac1f5fc6c4b2219ddb79b7<br><br> Expiration: Friday, May 20, 2024 5:52:38 AM<br><br> Subject Name:<br> OU = Microsoft IT<br> O = Microsoft Corporation<br> L = Redmond<br> S = Washington<br> C = US<br> | Thumbprints:<br> CN = Microsoft RSA TLS CA 01<br> Thumbprint: 703d7a8f0ebf55aaa59f98eaf4a206004eb2516a<br><br> CN = Microsoft RSA TLS CA 02<br> Thumbprint: b0c2d2d13cdd56cdaa6ab6e2c04440be4a429c75<br><br> Expiration: Tuesday, October 8, 2024 12:00:00 AM;<br><br> Subject Name:<br> O = Microsoft Corporation<br> C = US<br> | Required |
-
-#### What actions should I take?
-
-If your application uses the operating system certificate store or pins the Baltimore root among others, no action is needed.
-
-If your application pins any intermediate or leaf TLS certificate, we recommend you pin the following roots:
-
-| Certificate | Thumbprint |
-| -- | -- |
-| [Baltimore Root CA](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | d4de20d05e66fc53fe1a50882c78db2852cae474 |
-| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 73a5e64a3bff8316ff0edccc618a906e4eae4d74 |
-| [Digicert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | df3c24f9bfd666761b268073fe06d1cc8d4f82a4 |
-
-> [!TIP]
-> Both the intermediate and leaf certificates are expected to change frequently. We recommend not taking a dependency on them. Instead, pin your application to a root certificate, since it rolls less frequently.
->
->
-
-To continue to pin intermediate certificates, add the following to the pinned intermediate certificates list, which includes a few more to minimize future changes:
-
-| Common name of the CA | Thumbprint |
-| -- | -- |
-| [Microsoft RSA TLS CA 01](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2001.crt) | 703d7a8f0ebf55aaa59f98eaf4a206004eb2516a |
-| [Microsoft RSA TLS CA 02](https://www.microsoft.com/pki/mscorp/Microsoft%20RSA%20TLS%20CA%2002.crt) | b0c2d2d13cdd56cdaa6ab6e2c04440be4a429c75 |
-| [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 2f2877c5d778c31e0f29c7e371df5471bd673173 |
-| [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | e7eea674ca718e3befd90858e09f8372ad0ae2aa |
-| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 6c3af02e7f269aa73afd0eff2a88a4a1f04ed1e5 |
-| [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 30e01761ab97e59a06b41ef20af6f2de7ef4f7b0 |
+Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements. For full details, see [Azure TLS Certificate Changes](/azure/security/fundamentals/tls-certificate-changes).
-If your application validates certificates in code, you need to modify it to recognize the properties (for example, Issuer and Thumbprint) of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
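As an illustration, a custom validation callback that pins the recommended roots by thumbprint might look like the following minimal sketch. This is not the service's own validation logic; the class and member names are illustrative, and the callback matches the standard `RemoteCertificateValidationCallback` signature used by `SslStream` and similar .NET TLS APIs.

```csharp
using System;
using System.Linq;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

// A sketch of custom validation that pins root CAs by thumbprint.
// The thumbprints are the root certificates recommended in this article.
public static class PinnedRootValidator
{
    private static readonly string[] PinnedRootThumbprints =
    {
        "D4DE20D05E66FC53FE1A50882C78DB2852CAE474", // Baltimore CyberTrust Root
        "73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74", // Microsoft RSA Root CA 2017
        "DF3C24F9BFD666761B268073FE06D1CC8D4F82A4"  // DigiCert Global Root G2
    };

    public static bool ValidateServerCertificate(
        object sender,
        X509Certificate certificate,
        X509Chain chain,
        SslPolicyErrors sslPolicyErrors)
    {
        // Fail closed if the standard chain validation already failed.
        if (sslPolicyErrors != SslPolicyErrors.None)
        {
            return false;
        }

        // The last element of the built chain is the root certificate.
        X509Certificate2 root = chain.ChainElements[chain.ChainElements.Count - 1].Certificate;
        return PinnedRootThumbprints.Contains(root.Thumbprint, StringComparer.OrdinalIgnoreCase);
    }
}
```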
+For more information on the effect to Azure Cache for Redis, see [Azure TLS Certificate Change](cache-best-practices-development.md#azure-tls-certificate-change).
## Next steps
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
func durable get-history --id 0ab8c55a66644d68a3a8b220b12d209c
## Query all instances
-You can use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync) (.NET), [getStatusAll](/javascript/api/durable-functions/durableorchestrationclient#getstatusall--) (JavaScript), or `get_status_all` (Python) method to query the statuses of all orchestration instances in your [task hub](durable-functions-task-hubs.md). This method returns a list of objects that represent the orchestration instances matching the query parameters.
+You can use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync) (.NET), [getStatusAll](/javascript/api/durable-functions/durableorchestrationclient#durable-functions-durableorchestrationclient-getstatusall) (JavaScript), or `get_status_all` (Python) method to query the statuses of all orchestration instances in your [task hub](durable-functions-task-hubs.md). This method returns a list of objects that represent the orchestration instances matching the query parameters.
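For illustration, a minimal C# sketch of such a query follows. The timer trigger and function name are assumptions, and pagination via `ContinuationToken` is omitted for brevity:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class QueryAllInstances
{
    // An illustrative timer-triggered function that lists every orchestration
    // instance in the task hub and logs its identifier and runtime status.
    [FunctionName("QueryAllInstances")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient client,
        ILogger log)
    {
        var result = await client.ListInstancesAsync(
            new OrchestrationStatusQueryCondition(), CancellationToken.None);

        foreach (var instance in result.DurableOrchestrationState)
        {
            log.LogInformation($"{instance.InstanceId}: {instance.RuntimeStatus}");
        }
    }
}
```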
# [C#](#tab/csharp)
func durable get-instances
What if you don't really need all the information that a standard instance query can provide? For example, what if you're just looking for the orchestration creation time, or the orchestration runtime status? You can narrow your query by applying filters.
-Use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync#Microsoft_Azure_WebJobs_Extensions_DurableTask_IDurableOrchestrationClient_ListInstancesAsync_Microsoft_Azure_WebJobs_Extensions_DurableTask_OrchestrationStatusQueryCondition_System_Threading_CancellationToken_) (.NET) or [getStatusBy](/javascript/api/durable-functions/durableorchestrationclient#getstatusby-dateundefined--dateundefined--orchestrationruntimestatus) (JavaScript) method to get a list of orchestration instances that match a set of predefined filters.
+Use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync) (.NET) or [getStatusBy](/javascript/api/durable-functions/durableorchestrationclient#durable-functions-durableorchestrationclient-getstatusby) (JavaScript) method to get a list of orchestration instances that match a set of predefined filters.
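For example, a condition that narrows the query to running instances created in the last day might look like this sketch (the property values are illustrative):

```csharp
using System;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

// A sketch of a narrowed query: only Running instances created in the last day.
var condition = new OrchestrationStatusQueryCondition
{
    RuntimeStatus = new[] { OrchestrationRuntimeStatus.Running },
    CreatedTimeFrom = DateTime.UtcNow.AddDays(-1),
    PageSize = 100
};
```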
# [C#](#tab/csharp)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Only used when deploying to a Premium plan or to a Consumption plan running on W
When using an Azure Resource Manager template to create a function app during deployment, don't include WEBSITE_CONTENTSHARE in the template. This slot setting is generated during deployment. To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
+
+The WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE settings have additional validation checks to ensure that the app can be properly started. Creation of application settings will fail if the Function App cannot properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
+
+|Key|Sample value|
+|||
+|WEBSITE_SKIP_CONTENTSHARE_VALIDATION|`1`|
+
+If validation is skipped and either the connection string or content share is not valid, the app will be unable to start properly and will serve only HTTP 500 errors.
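As a sketch, the setting could be applied to an existing function app with the Azure CLI; the app and resource group names are placeholders:

```azurecli-interactive
# A sketch: skip content share validation on an existing function app.
# <app-name> and <resource-group> are placeholders.
az functionapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1
```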
+ ## WEBSITE\_DNS\_SERVER Sets the DNS server used by an app when resolving IP addresses. This setting is often required when using certain networking functionality, such as [Azure DNS private zones](functions-networking-options.md#azure-dns-private-zones) and [private endpoints](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md
To use your function app with virtual networks, you need to join it to a subnet.
## Deploy a Service Bus trigger and HTTP trigger
+> [!NOTE]
+> Enabling Private Endpoints on a Function App also makes the Source Control Manager (SCM) site publicly inaccessible. The following instructions give deployment directions using the Deployment Center within the Function App. Alternatively, use [zip deploy](functions-deployment-technologies.md#zip-deploy) or [self-hosted](/azure/devops/pipelines/agents/docker) agents that are deployed into a subnet on the virtual network.
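For reference, a zip deployment issued from a host that can still reach the app's SCM endpoint (for example, a self-hosted agent inside the virtual network) might look like this sketch; the names and package path are placeholders:

```azurecli-interactive
# A sketch: deploy a zip package from a host with network access to the SCM endpoint.
az functionapp deployment source config-zip \
  --resource-group <resource-group> \
  --name <app-name> \
  --src ./functionapp.zip
```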
+ 1. In GitHub, go to the following sample repository. It contains a function app and two functions: an HTTP trigger and a Service Bus queue trigger. <https://github.com/Azure-Samples/functions-vnet-tutorial>
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
As with triggers, input and output bindings are added to your function as bindin
The connection to Queue storage is obtained from the `AzureWebJobsStorage` setting. For more information, see the reference article for the specific binding.
+For a full list of the bindings supported by Functions, see [Supported bindings](functions-triggers-bindings.md?tabs=csharp#supported-bindings).
-## Testing functions
+## Run functions locally
-Azure Functions Core Tools lets you run Azure Functions project on your local development computer. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (`func.exe`) starts listening on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
-To test your function in Visual Studio:
+To start your function in Visual Studio in debug mode:
1. Press F5. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You might also need to enable a firewall exception so that the tools can handle HTTP requests. 2. With the project running, test your code as you would test a deployed function.
- For more information, see [Strategies for testing your code in Azure Functions](functions-test-a-function.md). When you run Visual Studio in debug mode, breakpoints are hit as expected.
-
-<!
-For an example of how to test a queue triggered function, see the [queue triggered function quickstart tutorial](functions-create-storage-queue-triggered-function.md#test-the-function).
>-
+ When you run Visual Studio in debug mode, breakpoints are hit as expected.
+
+For a more detailed testing scenario using Visual Studio, see [Testing functions](#testing-functions).
## Publish to Azure
The recommended way to monitor the execution of your functions is by integrating
To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md).
+## Testing functions
+
+This section describes how to create a C# function app project in Visual Studio and run tests with [xUnit](https://github.com/xunit/xunit).
+
+![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
+
+### Setup
+
+To set up your environment, create a function app and a test app. The following steps help you create the apps and functions required to support the tests:
+
+1. [Create a new Functions app](functions-get-started.md) and name it **Functions**.
+2. [Create an HTTP function from the template](functions-get-started.md) and name it **MyHttpTrigger**.
+3. [Create a timer function from the template](functions-create-scheduled-function.md) and name it **MyTimerTrigger**.
+4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**.
+5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/).
+6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
+
+### Create test classes
+
+Now that the projects are created, you can create the classes used to run the automated tests.
+
+Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
+
+You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
+
+Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
+
+```csharp
+using System;
+
+namespace Functions.Tests
+{
+ public class NullScope : IDisposable
+ {
+ public static NullScope Instance { get; } = new NullScope();
+
+ private NullScope() { }
+
+ public void Dispose() { }
+ }
+}
+```
+
+Next, create a new class in *Functions.Tests* project named **ListLogger.cs** and enter the following code:
+
+```csharp
+using Microsoft.Extensions.Logging;
+using System;
+using System.Collections.Generic;
+using System.Text;
+
+namespace Functions.Tests
+{
+ public class ListLogger : ILogger
+ {
+ public IList<string> Logs;
+
+ public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;
+
+ public bool IsEnabled(LogLevel logLevel) => false;
+
+ public ListLogger()
+ {
+ this.Logs = new List<string>();
+ }
+
+ public void Log<TState>(LogLevel logLevel,
+ EventId eventId,
+ TState state,
+ Exception exception,
+ Func<TState, Exception, string> formatter)
+ {
+ string message = formatter(state, exception);
+ this.Logs.Add(message);
+ }
+ }
+}
+```
+
+The `ListLogger` class implements the following members as contracted by the `ILogger` interface:
+
+- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the `NullScope` class to allow the test to function.
+
+- **IsEnabled**: A default value of `false` is provided.
+
+- **Log**: This method uses the provided `formatter` function to format the message and then adds the resulting text to the `Logs` collection.
+
+The `Logs` collection is an instance of `List<string>` and is initialized in the constructor.
+
+Next, create a new file in *Functions.Tests* project named **LoggerTypes.cs** and enter the following code:
+
+```csharp
+namespace Functions.Tests
+{
+ public enum LoggerTypes
+ {
+ Null,
+ List
+ }
+}
+```
+
+This enumeration specifies the type of logger used by the tests.
+
+Now create a new class in *Functions.Tests* project named **TestFactory.cs** and enter the following code:
+
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Http.Internal;
+using Microsoft.Extensions.Logging;
+using Microsoft.Extensions.Logging.Abstractions;
+using Microsoft.Extensions.Primitives;
+using System.Collections.Generic;
+
+namespace Functions.Tests
+{
+ public class TestFactory
+ {
+ public static IEnumerable<object[]> Data()
+ {
+ return new List<object[]>
+ {
+ new object[] { "name", "Bill" },
+ new object[] { "name", "Paul" },
+ new object[] { "name", "Steve" }
+
+ };
+ }
+
+ private static Dictionary<string, StringValues> CreateDictionary(string key, string value)
+ {
+ var qs = new Dictionary<string, StringValues>
+ {
+ { key, value }
+ };
+ return qs;
+ }
+
+ public static HttpRequest CreateHttpRequest(string queryStringKey, string queryStringValue)
+ {
+ var context = new DefaultHttpContext();
+ var request = context.Request;
+ request.Query = new QueryCollection(CreateDictionary(queryStringKey, queryStringValue));
+ return request;
+ }
+
+ public static ILogger CreateLogger(LoggerTypes type = LoggerTypes.Null)
+ {
+ ILogger logger;
+
+ if (type == LoggerTypes.List)
+ {
+ logger = new ListLogger();
+ }
+ else
+ {
+ logger = NullLoggerFactory.Instance.CreateLogger("Null Logger");
+ }
+
+ return logger;
+ }
+ }
+}
+```
+
+The `TestFactory` class implements the following members:
+
+- **Data**: This property returns an [IEnumerable](/dotnet/api/system.collections.ienumerable) collection of sample data. The key value pairs represent values that are passed into a query string.
+
+- **CreateDictionary**: This method accepts a key/value pair as arguments and returns a new `Dictionary` used to create `QueryCollection` to represent query string values.
+
+- **CreateHttpRequest**: This method creates an HTTP request initialized with the given query string parameters.
+
+- **CreateLogger**: Based on the logger type, this method returns a logger class used for testing. The `ListLogger` keeps track of logged messages available for evaluation in tests.
+
+Finally, create a new class in *Functions.Tests* project named **FunctionsTests.cs** and enter the following code:
+
+```csharp
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Xunit;
+
+namespace Functions.Tests
+{
+ public class FunctionsTests
+ {
+ private readonly ILogger logger = TestFactory.CreateLogger();
+
+ [Fact]
+ public async void Http_trigger_should_return_known_string()
+ {
+ var request = TestFactory.CreateHttpRequest("name", "Bill");
+ var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
+ Assert.Equal("Hello, Bill. This HTTP triggered function executed successfully.", response.Value);
+ }
+
+ [Theory]
+ [MemberData(nameof(TestFactory.Data), MemberType = typeof(TestFactory))]
+ public async void Http_trigger_should_return_known_string_from_member_data(string queryStringKey, string queryStringValue)
+ {
+ var request = TestFactory.CreateHttpRequest(queryStringKey, queryStringValue);
+ var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
+ Assert.Equal($"Hello, {queryStringValue}. This HTTP triggered function executed successfully.", response.Value);
+ }
+
+ [Fact]
+ public void Timer_should_log_message()
+ {
+ var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
+ MyTimerTrigger.Run(null, logger);
+ var msg = logger.Logs[0];
+ Assert.Contains("C# Timer trigger function executed at", msg);
+ }
+ }
+}
+```
+
+The members implemented in this class are:
+
+- **Http_trigger_should_return_known_string**: This test creates a request with the query string values of `name=Bill` to an HTTP function and checks that the expected response is returned.
+
+- **Http_trigger_should_return_known_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function.
+
+- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. After the function runs, the log is checked to ensure the expected message is present.
+
+If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function.
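For example, a minimal sketch of building such a configuration with in-memory values might look like the following; the setting name is hypothetical:

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// A sketch: an IConfiguration with mocked settings for use in tests.
// "MySetting" is a hypothetical key, not one used by the functions above.
IConfiguration configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string>
    {
        { "MySetting", "test-value" }
    })
    .Build();
```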
+
+### Run tests
+
+To run the tests, navigate to the **Test Explorer** and click **Run all**.
+
+![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
+
+### Debug tests
+
+To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
++ ## Next steps For more information about the Azure Functions Core Tools, see [Work with Azure Functions Core Tools](functions-run-local.md).
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Don't call `TrackRequest` or `StartOperation<RequestTelemetry>` because you'll s
Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`, `TrackMetric()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.
+## Testing functions
-## Testing functions in C# in Visual Studio
+The following articles show how to run an in-process C# class library function locally for testing purposes:
-The following example describes how to create a C# Function app in Visual Studio and run tests with [xUnit](https://github.com/xunit/xunit).
-
-![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
-
-### Setup
-
-To set up your environment, create a Function and test app. The following steps help you create the apps and functions required to support the tests:
-
-1. [Create a new Functions app](functions-get-started.md) and name it **Functions**
-2. [Create an HTTP function from the template](functions-get-started.md) and name it **MyHttpTrigger**.
-3. [Create a timer function from the template](functions-create-scheduled-function.md) and name it **MyTimerTrigger**.
-4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**.
-5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/)
-6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
-
-### Create test classes
-
-Now that the projects are created, you can create the classes used to run the automated tests.
-
-Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
-
-You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
-
-Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
-
-```csharp
-using System;
-
-namespace Functions.Tests
-{
- public class NullScope : IDisposable
- {
- public static NullScope Instance { get; } = new NullScope();
-
- private NullScope() { }
-
- public void Dispose() { }
- }
-}
-```
-
-Next, create a new class in *Functions.Tests* project named **ListLogger.cs** and enter the following code:
-
-```csharp
-using Microsoft.Extensions.Logging;
-using System;
-using System.Collections.Generic;
-using System.Text;
-
-namespace Functions.Tests
-{
- public class ListLogger : ILogger
- {
- public IList<string> Logs;
-
- public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;
-
- public bool IsEnabled(LogLevel logLevel) => false;
-
- public ListLogger()
- {
- this.Logs = new List<string>();
- }
-
- public void Log<TState>(LogLevel logLevel,
- EventId eventId,
- TState state,
- Exception exception,
- Func<TState, Exception, string> formatter)
- {
- string message = formatter(state, exception);
- this.Logs.Add(message);
- }
- }
-}
-```
-
-The `ListLogger` class implements the following members as contracted by the `ILogger` interface:
-
-- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the `NullScope` class to allow the test to function.
-
-- **IsEnabled**: A default value of `false` is provided.
-
-- **Log**: This method uses the provided `formatter` function to format the message and then adds the resulting text to the `Logs` collection.
-
-The `Logs` collection is an instance of `List<string>` and is initialized in the constructor.
-
-Next, create a new file in *Functions.Tests* project named **LoggerTypes.cs** and enter the following code:
-
-```csharp
-namespace Functions.Tests
-{
- public enum LoggerTypes
- {
- Null,
- List
- }
-}
-```
-
-This enumeration specifies the type of logger used by the tests.
-
-Now create a new class in *Functions.Tests* project named **TestFactory.cs** and enter the following code:
-
-```csharp
-using Microsoft.AspNetCore.Http;
-using Microsoft.AspNetCore.Http.Internal;
-using Microsoft.Extensions.Logging;
-using Microsoft.Extensions.Logging.Abstractions;
-using Microsoft.Extensions.Primitives;
-using System.Collections.Generic;
-
-namespace Functions.Tests
-{
- public class TestFactory
- {
- public static IEnumerable<object[]> Data()
- {
- return new List<object[]>
- {
- new object[] { "name", "Bill" },
- new object[] { "name", "Paul" },
- new object[] { "name", "Steve" }
-
- };
- }
-
- private static Dictionary<string, StringValues> CreateDictionary(string key, string value)
- {
- var qs = new Dictionary<string, StringValues>
- {
- { key, value }
- };
- return qs;
- }
-
- public static HttpRequest CreateHttpRequest(string queryStringKey, string queryStringValue)
- {
- var context = new DefaultHttpContext();
- var request = context.Request;
- request.Query = new QueryCollection(CreateDictionary(queryStringKey, queryStringValue));
- return request;
- }
-
- public static ILogger CreateLogger(LoggerTypes type = LoggerTypes.Null)
- {
- ILogger logger;
-
- if (type == LoggerTypes.List)
- {
- logger = new ListLogger();
- }
- else
- {
- logger = NullLoggerFactory.Instance.CreateLogger("Null Logger");
- }
-
- return logger;
- }
- }
-}
-```
-
-The `TestFactory` class implements the following members:
-
-- **Data**: This property returns an [IEnumerable](/dotnet/api/system.collections.ienumerable) collection of sample data. The key value pairs represent values that are passed into a query string.
-
-- **CreateDictionary**: This method accepts a key/value pair as arguments and returns a new `Dictionary` used to create `QueryCollection` to represent query string values.
-
-- **CreateHttpRequest**: This method creates an HTTP request initialized with the given query string parameters.
-
-- **CreateLogger**: Based on the logger type, this method returns a logger class used for testing. The `ListLogger` keeps track of logged messages available for evaluation in tests.
-
-Finally, create a new class in *Functions.Tests* project named **FunctionsTests.cs** and enter the following code:
-
-```csharp
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Logging;
-using Xunit;
-
-namespace Functions.Tests
-{
- public class FunctionsTests
- {
- private readonly ILogger logger = TestFactory.CreateLogger();
-
- [Fact]
- public async void Http_trigger_should_return_known_string()
- {
- var request = TestFactory.CreateHttpRequest("name", "Bill");
- var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
- Assert.Equal("Hello, Bill. This HTTP triggered function executed successfully.", response.Value);
- }
-
- [Theory]
- [MemberData(nameof(TestFactory.Data), MemberType = typeof(TestFactory))]
- public async void Http_trigger_should_return_known_string_from_member_data(string queryStringKey, string queryStringValue)
- {
- var request = TestFactory.CreateHttpRequest(queryStringKey, queryStringValue);
- var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
- Assert.Equal($"Hello, {queryStringValue}. This HTTP triggered function executed successfully.", response.Value);
- }
-
- [Fact]
- public void Timer_should_log_message()
- {
- var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
- MyTimerTrigger.Run(null, logger);
- var msg = logger.Logs[0];
- Assert.Contains("C# Timer trigger function executed at", msg);
- }
- }
-}
-```
-
-The members implemented in this class are:
-
-- **Http_trigger_should_return_known_string**: This test creates a request with the query string values of `name=Bill` to an HTTP function and checks that the expected response is returned.
-
-- **Http_trigger_should_return_known_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function.
-
-- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. After the function runs, the log is checked to ensure the expected message is present.
-
-If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function.
-
-### Run tests
-
-To run the tests, navigate to the **Test Explorer** and click **Run all**.
-
-![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
-
-### Debug tests
-
-To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
++ [Visual Studio](functions-develop-vs.md#testing-functions)
++ [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp#debugging-functions-locally)
++ [Command line](functions-run-local.md?tabs=v4%2Ccsharp%2Cazurecli%2Cbash#start)

## Environment variables
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md
Title: Azure support for export controls
description: Customer guidance for Azure export control support Previously updated : 07/01/2021+
+recommendations: false
Last updated : 02/28/2022 # Azure support for export controls
Both Azure and Azure Government can help you meet your EAR compliance requiremen
You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening).
+Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for EAR, see [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear).
## ITAR
There is no ITAR compliance certification; however, both Azure and Azure Governm
You are responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you are responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect, approve, or monitor your applications deployed on Azure or Azure Government.
-Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening).
+Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to [screened US persons](./documentation-government-plan-security.md#screening). For more information about Azure support for ITAR, see [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar).
## DoE 10 CFR Part 810 The US Department of Energy (DoE) export control regulation [10 CFR Part 810](http://www.gpo.gov/fdsys/pkg/FR-2015-02-23/pdf/2015-03479.pdf) implements section 57b.(2) of the [Atomic Energy Act of 1954](https://www.nrc.gov/docs/ML1327/ML13274A489.pdf) (AEA), as amended by section 302 of the [Nuclear Nonproliferation Act of 1978](http://www.nrc.gov/docs/ML1327/ML13274A492.pdf#page=19) (NNPA). It is administered by the [National Nuclear Security Administration](https://www.energy.gov/nnsa/national-nuclear-security-administration) (NNSA). The revised Part 810 (final rule) became effective on 25 March 2015, and, among other things, it controls the export of unclassified nuclear technology and assistance. It enables peaceful nuclear trade by helping to assure that nuclear technologies exported from the United States will not be used for non-peaceful purposes. Paragraph 810.7 (b) states that specific DoE authorization is required for providing or transferring sensitive nuclear technology to any foreign entity.
-**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it is designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you are deploying data to Azure Government, you are responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA.
+**Azure Government can help you meet your DoE 10 CFR Part 810 export control requirements** because it is designed to implement specific controls that restrict access to information and systems to [US persons](./documentation-government-plan-security.md#screening) among Azure operations personnel. If you are deploying data to Azure Government, you are responsible for your own security classification process. For data subject to DoE export controls, the classification system is augmented by the [Unclassified Controlled Nuclear Information](https://www.energy.gov/sites/prod/files/hss/Classification/docs/UCNI-Tri-fold.pdf) (UCNI) controls established by Section 148 of the AEA. For more information about Azure support for DoE 10 CFR Part 810, see [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810).
## NRC 10 CFR Part 110
Learn more about:
- [Azure Security](../security/fundamentals/overview.md) - [Azure Compliance](../compliance/index.yml)
+- [Microsoft government solutions](https://www.microsoft.com/enterprise/government)
- [What is Azure Government?](./documentation-government-welcome.md) - [Explore Azure Government](https://azure.microsoft.com/global-infrastructure/government/)-- [Microsoft government solutions](https://www.microsoft.com/enterprise/government)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure EAR compliance offering](/azure/compliance/offerings/offering-ear)
+- [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp)
+- [Azure ITAR compliance offering](/azure/compliance/offerings/offering-itar)
+- [Azure DoE 10 CFR Part 810 compliance offering](/azure/compliance/offerings/offering-doe-10-cfr-part-810)
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Azure Maps has been localized in a variety of languages across its services. The fol
Make sure you set up the **View** parameter as required for the REST APIs and the SDKs, which your services are using.
-### Rest APIs
+### REST APIs
Ensure that you have set up the View parameter as required. The View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services.
azure-monitor Auto Instrumentation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-instrumentation-troubleshoot.md
+
+ Title: Troubleshoot Azure Application Insights auto-instrumentation
+description: Troubleshoot auto-instrumentation in Azure Application Insights
+ Last updated : 02/28/2022++
+# Troubleshooting Azure Application Insights auto-instrumentation
+
+This article will help you troubleshoot problems with auto-instrumentation in Azure Application Insights.
+
+> [!NOTE]
+> Auto-instrumentation used to be known as "codeless attach" before October 2021.
+
+## Telemetry data isn't reported after enabling auto-instrumentation
+
+Review these common scenarios if you've enabled Azure Application Insights auto-instrumentation for your app service but don't see telemetry data reported.
+
+### The Application Insights SDK was previously installed
+
+Auto-instrumentation fails when a .NET or .NET Core app has already been instrumented with the SDK.
+
+Remove the Application Insights SDK if you would like to auto-instrument your app.
+
+### An app was published using an unsupported version of .NET or .NET Core
+
+Verify a supported version of .NET or .NET Core was used to build and publish applications.
+
+Refer to the .NET or .NET Core documentation to determine if your version is supported.
+
+- [Application Monitoring for Azure App Service and ASP.NET](azure-web-apps-net.md#application-monitoring-for-azure-app-service-and-aspnet)
+- [Application Monitoring for Azure App Service and ASP.NET Core](azure-web-apps-net-core.md#application-monitoring-for-azure-app-service-and-aspnet-core)
+
+### A diagnostics library was detected
+
+Auto-instrumentation will fail if it detects the following libraries.
+
+- System.Diagnostics.DiagnosticSource
+- Microsoft.AspNet.TelemetryCorrelation
+- Microsoft.ApplicationInsights
+
+These libraries will need to be removed for auto-instrumentation to succeed.
+
+## More help
+
+If you have questions about Azure Application Insights auto-instrumentation, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
az monitor app-insights component create --app demoApp --location eastus --kind
} ```
-For the full Azure CLI documentation for this command, and to learn how to retrieve the instrumentation key consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az_monitor_app_insights_component_create).
+For the full Azure CLI documentation for this command, and to learn how to retrieve the instrumentation key consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
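For instance, a sketch of retrieving the instrumentation key for the component created above might look like this; the resource group name is a placeholder:

```azurecli-interactive
# A sketch: retrieve the instrumentation key of the component created above.
az monitor app-insights component show \
  --app demoApp \
  --resource-group <resource-group> \
  --query instrumentationKey \
  --output tsv
```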
## Next steps * [Diagnostic Search](./diagnostic-search.md)
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
If you only need to modify the behavior for a single Application Insights resour
A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before deploying the new property with Azure Resource Manager, the property won't exist.
-### Rest API
+### REST API
-The [Rest API](/rest/api/azure/) payload to make the same modifications is as follows:
+The [REST API](/rest/api/azure/) payload to make the same modifications is as follows:
``` PATCH https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/microsoft.insights/components/<resource-name>?api-version=2018-05-01-preview HTTP/1.1
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md
Therefore, Private Links created starting September 2021 have new mandatory AMPL
* Open mode - uses Private Link to communicate with resources in the AMPLS, but also allows traffic to continue to other resources as well. See [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks) to learn more. > [!NOTE]
-> Log Analytics ingestion uses resource-specific endpoints. As such, it doesn't adhere to AMPLS access modes. **To assure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes**.
+> While Log Analytics query requests are affected by the AMPLS access mode setting, Log Analytics ingestion requests use resource-specific endpoints, and are therefore not controlled by the AMPLS access mode. **To assure Log Analytics ingestion requests can't access workspaces out of the AMPLS, set the network firewall to block traffic to public endpoints, regardless of the AMPLS access modes**.
> [!NOTE] > If you configured Log Analytics with Private Link by initially setting the NSG rules to allow outbound traffic by ServiceTag:AzureMonitor, the connected VMs send logs through the public endpoint. If you later change the rules to deny outbound traffic by ServiceTag:AzureMonitor, the connected VMs keep sending logs until you reboot them or cut the sessions. To make sure the desired configuration takes immediate effect, reboot the connected VMs.
azure-monitor Query Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md
Last updated 10/20/2021
-# Audit queries in Azure Monitor Logs (preview)
+# Audit queries in Azure Monitor Logs
Log query audit logs provide telemetry about log queries run in Azure Monitor. This includes information such as when a query was run, who ran it, what tool was used, the query text, and performance statistics describing the query's execution.
azure-netapp-files Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-delete.md
na Previously updated : 09/27/2021 Last updated : 03/01/2022 # Delete backups of a volume You can delete individual backups that you no longer need to keep for a volume. Deleting backups will delete the associated objects in your Azure Storage account, resulting in space saving.
-You cannot delete the latest backup.
+By design, Azure NetApp Files prevents you from deleting the latest backup. If the latest backup consists of multiple snapshots taken at the same time (for example, from the same daily and weekly schedule configuration), they are all considered the latest backup, and deleting them is prevented.
+
+Deleting the latest backup is permitted only when both of the following conditions are met:
+
+* The volume has been deleted.
+* The latest backup is the only remaining backup for the deleted volume.
+
+If you need to delete backups to free up space, select an older backup from the **Backups** list to delete.
## Steps
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
To deploy to a subscription, use the subscription-level deployment commands.
# [Azure CLI](#tab/azure-cli)
-For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az_deployment_sub_create). The following example deploys a template to create a resource group:
+For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create). The following example deploys a template to create a resource group:
```azurecli-interactive az deployment sub create \
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
The list command output is similar to:
] ```
-### Use Rest API
+### Use REST API
You can get the deployment script resource deployment information at the resource group level and the subscription level by using REST API:
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
If you have the required access, but the delete request fails, it may be because
## Next steps * To understand Resource Manager concepts, see [Azure Resource Manager overview](overview.md).
-* For deletion commands, see [PowerShell](/powershell/module/az.resources/Remove-AzResourceGroup), [Azure CLI](/cli/azure/group#az_group_delete), and [REST API](/rest/api/resources/resourcegroups/delete).
+* For deletion commands, see [PowerShell](/powershell/module/az.resources/Remove-AzResourceGroup), [Azure CLI](/cli/azure/group#az-group-delete), and [REST API](/rest/api/resources/resourcegroups/delete).
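For example, a minimal Azure CLI sketch of deleting a resource group (the group name is a placeholder):

```azurecli-interactive
# A sketch: delete a resource group and all of its resources without a prompt.
az group delete --name <resource-group> --yes --no-wait
```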
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 01/24/2022 Last updated : 02/28/2022
If [soft delete](../../../backup/soft-delete-virtual-machines.md) is enabled for
1. Temporarily stop the backup and keep backup data. 2. To move virtual machines configured with Azure Backup, do the following steps:
- 1. Find the location of your virtual machine.
- 2. Find a resource group with the following naming pattern: `AzureBackupRG_<VM location>_1`. For example, the name is in the format of *AzureBackupRG_westus2_1*.
- 3. In the Azure portal, check **Show hidden types**.
- 4. Find the resource with type **Microsoft.Compute/restorePointCollections** that has the naming pattern `AzureBackup_<VM name>_###########`.
- 5. Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
- 6. After the delete operation is complete, you can move your virtual machine.
+ 1. Find the resource group that contains your backups. If you used the default resource group, it has the following naming pattern: `AzureBackupRG_<VM location>_1`. For example, the name is in the format of *AzureBackupRG_westus2_1*.
+
+ If you created a custom resource group, select that resource group. If you can't find the resource group, search for **Restore Point Collections** in the portal. Look for the collection with the naming pattern `AzureBackup_<VM name>_###########`.
+ 1. Select the resource with type **Restore Point Collection** that has the naming pattern `AzureBackup_<VM name>_###########`.
+ 1. Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+ 1. After the delete operation is complete, you can move your virtual machine.
3. Move the VM to the target resource group. 4. Reconfigure the backup. ### Script
-1. Find the location of your virtual machine.
+1. Find the resource group that contains your backups. If you used the default resource group, it has the following naming pattern: `AzureBackupRG_<VM location>_1`. For example, the name is in the format of *AzureBackupRG_westus2_1*.
-1. Find a resource group with the naming pattern - `AzureBackupRG_<VM location>_1`. For example, the name might be `AzureBackupRG_westus2_1`.
+ If you created a custom resource group, find that resource group. If you can't find the resource group, use the following command and provide the name of the virtual machine.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli-interactive
+ az resource list --resource-type Microsoft.Compute/restorePointCollections --query "[?starts_with(name, 'AzureBackup_<vm-name>')].resourceGroup"
+ ```
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell-interactive
+ (Get-AzResource -ResourceType Microsoft.Compute/restorePointCollections -Name AzureBackup_<vm-name>*).ResourceGroupName
+ ```
+
+
1. If you're moving only one virtual machine, get the restore point collection for that virtual machine.
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 12/27/2021 Last updated : 02/28/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | streamingjobs / outputs | streaming job | 3-63 | Alphanumerics, hyphens, and underscores. | > | streamingjobs / transformations | streaming job | 3-63 | Alphanumerics, hyphens, and underscores. |
+## Microsoft.Synapse
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | workspaces | global | 1-50 | Lowercase letters, hyphens, and numbers.<br><br>Start and end with letter or number.<br><br>Can't contain `-ondemand` |
+> | workspaces / bigDataPools | workspace | 1-15 | Letters and numbers.<br><br>Start with letter. End with letter or number.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
+> | workspaces / sqlPools | workspace | 1-60 | Can't contain `<>*%&:\/?@-` or control characters.<br><br>Can't end with `.` or space.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
+ ## Microsoft.TimeSeriesInsights > [!div class="mx-tableFixed"]
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Jump to a resource provider namespace:
> > Azure DNS zones and Traffic Manager doesn't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names do not support special and unicode characters. The value can contain all characters. >
-> Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az_network_ip_group_update) command.
+> Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
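For illustration, updating a tag on an IP group with that command might look like the following sketch; the names and tag are placeholders, and the exact tag arguments should be confirmed against the command's reference:

```azurecli-interactive
# A sketch: update a tag on an IP group, since PATCH (portal tag editing) is unsupported.
az network ip-group update \
  --name <ip-group-name> \
  --resource-group <resource-group> \
  --set tags.Dept=Finance
```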
## Microsoft.Notebooks
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-subscription.md
To deploy to a subscription, use the subscription-level deployment commands.
# [Azure CLI](#tab/azure-cli)
-For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az_deployment_sub_create). The following example deploys a template to create a resource group:
+For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create). The following example deploys a template to create a resource group:
```azurecli-interactive az deployment sub create \
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Timeout : PT1H
Using Azure CLI, you can manage deployment scripts at subscription or resource group scope: -- [az deployment-scripts delete](/cli/azure/deployment-scripts#az_deployment_scripts_delete): Delete a deployment script.-- [az deployment-scripts list](/cli/azure/deployment-scripts#az_deployment_scripts_list): List all deployment scripts.-- [az deployment-scripts show](/cli/azure/deployment-scripts#az_deployment_scripts_show): Retrieve a deployment script.-- [az deployment-scripts show-log](/cli/azure/deployment-scripts#az_deployment_scripts_show_log): Show deployment script logs.
+- [az deployment-scripts delete](/cli/azure/deployment-scripts#az-deployment-scripts-delete): Delete a deployment script.
+- [az deployment-scripts list](/cli/azure/deployment-scripts#az-deployment-scripts-list): List all deployment scripts.
+- [az deployment-scripts show](/cli/azure/deployment-scripts#az-deployment-scripts-show): Retrieve a deployment script.
+- [az deployment-scripts show-log](/cli/azure/deployment-scripts#az-deployment-scripts-show-log): Show deployment script logs.
The list command output is similar to:
The list command output is similar to:
] ```
-### Use Rest API
+### Use REST API
You can get the deployment script resource deployment information at the resource group level and the subscription level by using REST API:
azure-resource-manager Template Tutorial Deploy Sql Extensions Bacpac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deploy-sql-extensions-bacpac.md
Title: Import SQL BACPAC files with templates description: Learn how to use Azure SQL Database extensions to import SQL BACPAC files with Azure Resource Manager templates (ARM templates).- Previously updated : 09/30/2021 Last updated : 02/28/2022 --+ #Customer intent: As a database administrator I want use ARM templates so that I can import a SQL BACPAC file.
The BACPAC file must be stored in an Azure Storage account before it can be impo
-Blob $bacpacFileName ` -Context $storageAccount.Context
- Write-Host "The storage account key is $storageAccountKey"
- Write-Host "The BACPAC file URL is https://$storageAccountName.blob.core.windows.net/$containerName/$bacpacFileName"
- Write-Host "The project name and location are $projectName and $location"
+ Write-Host "The project name: $projectName`
+ The location: $location`
+ The storage account key: $storageAccountKey`
+ The BACPAC file URL: https://$storageAccountName.blob.core.windows.net/$containerName/$bacpacFileName`
+ "
Write-Host "Press [ENTER] to continue ..." ```
The template used in this tutorial is stored in [GitHub](https://raw.githubuserc
} ```
- Add a comma after the `adminPassword` property's closing curly brace (`}`). To format the JSON file from Visual Studio Code, select Shift+Alt+F.
+ Add a comma after the `adminPassword` property's closing curly brace (`}`). To format the JSON file from Visual Studio Code, select **Shift+Alt+F**.
1. Add two resources to the template.
Use the project name and location that were used when you prepared the BACPAC fi
```azurepowershell $projectName = Read-Host -Prompt "Enter the same project name that is used earlier"
- $location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$adminUsername = Read-Host -Prompt "Enter the SQL admin username" $adminPassword = Read-Host -Prompt "Enter the admin password" -AsSecureString $storageAccountKey = Read-Host -Prompt "Enter the storage account key"
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/date-bucket-tsql.md
See [Date and Time Data Types and Functions &#40;Transact-SQL&#41;](/sql/t-sql/f
## Syntax ```syntaxsql
-DATE_BUCKET (datePart, number, date, origin)
+DATE_BUCKET (datepart, number, date, origin)
``` ## Arguments
-*datePart*
+*datepart*
The part of *date* that is used with the 'number' parameter. Ex. Year, month, minute, second etc.
The part of *date* that is used with the 'number' parameter. Ex. Year, month
*number*
-The integer number that decides the width of the bucket combined with *datePart* argument. This represents the width of the dataPart buckets from the origin time. **`This argument cannot be a negative integer value`**.
+The integer number that decides the width of the bucket combined with the *datepart* argument. This represents the width of the *datepart* buckets from the origin time. **`This argument cannot be a negative integer value`**.
*date*
The return value data type for this method is dynamic. The return type depends o
`Date_Bucket` returns the latest date or time value, corresponding to the *datepart* and *number* parameters. For example, in the expressions below, `Date_Bucket` will return the output value of `2020-04-13 00:00:00.0000000`, as the output is calculated based on one-week buckets from the default origin time of `1900-01-01 00:00:00.000`. The value `2020-04-13 00:00:00.0000000` is 6276 weeks from the origin value of `1900-01-01 00:00:00.000`. ```sql
-declare @date datetime2 = '2020-04-15 21:22:11'
-Select DATE_BUCKET(wk, 1, @date)
+declare @date datetime2 = '2020-04-15 21:22:11';
+Select DATE_BUCKET(WEEK, 1, @date);
``` For all the expressions below, the same output value of `2020-04-13 00:00:00.0000000` will be returned. This is because `2020-04-13 00:00:00.0000000` is 6276 weeks from the origin date and 6276 is divisible by 2, 3, 4 and 6. ```sql
-declare @date datetime2 = '2020-04-15 21:22:11'
-Select DATE_BUCKET(wk, 2, @date)
-Select DATE_BUCKET(wk, 3, @date)
-Select DATE_BUCKET(wk, 4, @date)
-Select DATE_BUCKET(wk, 6, @date)
+declare @date datetime2 = '2020-04-15 21:22:11';
+Select DATE_BUCKET(WEEK, 2, @date);
+Select DATE_BUCKET(WEEK, 3, @date);
+Select DATE_BUCKET(WEEK, 4, @date);
+Select DATE_BUCKET(WEEK, 6, @date);
``` The output for the expression below is `2020-04-06 00:00:00.0000000`, which is 6275 weeks from the default origin time `1900-01-01 00:00:00.000`. ```sql
-declare @date datetime2 = '2020-04-15 21:22:11'
-Select DATE_BUCKET(wk, 5, @date)
+declare @date datetime2 = '2020-04-15 21:22:11';
+Select DATE_BUCKET(WEEK, 5, @date);
``` The output for the expression below is `2020-06-09 00:00:00.0000000` , which is 75 weeks from the specified origin time `2019-01-01 00:00:00`. ```sql
-declare @date datetime2 = '2020-06-15 21:22:11'
-declare @origin datetime2 = '2019-01-01 00:00:00'
-Select DATE_BUCKET(wk, 5, @date, @origin)
+declare @date datetime2 = '2020-06-15 21:22:11';
+declare @origin datetime2 = '2019-01-01 00:00:00';
+Select DATE_BUCKET(WEEK, 5, @date, @origin);
``` ## datepart Argument
Select DATE_BUCKET(wk, 5, @date, @origin)
The *number* argument cannot exceed the range of positive **int** values. In the following statement, the argument for *number* exceeds the range of **int** by 1, which returns the following error message: `Msg 8115, Level 16, State 2, Line 2. Arithmetic overflow error converting expression to data type int.` ```sql
-declare @date datetime2 = '2020-04-30 00:00:00'
-Select DATE_BUCKET(dd, 2147483648, @date)
+declare @date datetime2 = '2020-04-30 00:00:00';
+Select DATE_BUCKET(DAY, 2147483648, @date);
``` If a negative value for number is passed to the `Date_Bucket` function, the following error will be returned.
Invalid bucket width value passed to date_bucket function. Only positive values
`DATE_BUCKET` returns the base value corresponding to the data type of the `date` argument. In the following example, an output value with the **datetime2** data type is returned. ```sql
-Select DATE_BUCKET(dd, 10, SYSUTCDATETIME())
+Select DATE_BUCKET(DAY, 10, SYSUTCDATETIME());
``` ## origin Argument
Each of these statements increments *date_bucket* with a bucket width of 1 from
```sql declare @date datetime2 = '2020-04-30 21:21:21'
-Select 'Week', DATE_BUCKET(wk, 1, @date)
+Select 'Week', DATE_BUCKET(WEEK, 1, @date)
Union All
-Select 'Day', DATE_BUCKET(dd, 1, @date)
+Select 'Day', DATE_BUCKET(DAY, 1, @date)
Union All
-Select 'Hour', DATE_BUCKET(hh, 1, @date)
+Select 'Hour', DATE_BUCKET(HOUR, 1, @date)
Union All
-Select 'Minutes', DATE_BUCKET(mi, 1, @date)
+Select 'Minutes', DATE_BUCKET(MINUTE, 1, @date)
Union All
-Select 'Seconds', DATE_BUCKET(ss, 1, @date)
+Select 'Seconds', DATE_BUCKET(SECOND, 1, @date);
``` Here is the result set.
This example specifies user-defined variables as arguments for *number* and *dat
```sql DECLARE @days int = 365, @datetime datetime2 = '2000-01-01 01:01:01.1110000' /* 2000 was a leap year */;
-SELECT Date_Bucket(day, @days, @datetime);
+SELECT Date_Bucket(DAY, @days, @datetime);
``` Here is the result set.
In the example below, we are calculating the sum of OrderQuantity and sum of Uni
```sql SELECT
- Date_Bucket(week, 1 ,cast(Shipdate as datetime2)) AS ShippedDateBucket
+ Date_Bucket(WEEK, 1 ,cast(Shipdate as datetime2)) AS ShippedDateBucket
,Sum(OrderQuantity) As SumOrderQuantity ,Sum(UnitPrice) As SumUnitPrice FROM dbo.FactInternetSales FIS where Shipdate between '2011-01-03 00:00:00.000' and '2011-02-28 00:00:00.000' Group by Date_Bucket(week, 1 ,cast(Shipdate as datetime2))
-order by 1
+order by ShippedDateBucket;
``` Here is the result set.
This example specifies `SYSDATETIME` for *date*. The exact value returned depend
day and time of statement execution: ```sql
-SELECT Date_Bucket(wk, 10, SYSDATETIME());
+SELECT Date_Bucket(WEEK, 10, SYSDATETIME());
``` Here is the result set.
Here is the result set.
This example uses scalar subqueries, `MAX(OrderDate)`, as arguments for *number* and *date*. `(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100)` serves as an artificial argument for the number parameter, to show how to select a *number* argument from a value list. ```sql
-SELECT DATE_BUCKET(week,(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100),
+SELECT DATE_BUCKET(WEEK,(SELECT top 1 CustomerKey FROM dbo.DimCustomer where GeographyKey > 100),
(SELECT MAX(OrderDate) FROM dbo.FactInternetSales)); ```
SELECT DATE_BUCKET(week,(SELECT top 1 CustomerKey FROM dbo.DimCustomer where Geo
This example uses a numeric expression, `(10/2)`, and a scalar system function, `SYSDATETIME()`, as arguments for *number* and *date*. ```sql
-SELECT Date_Bucket(week,(10/2), SYSDATETIME());
+SELECT Date_Bucket(WEEK,(10/2), SYSDATETIME());
``` #### Specifying an aggregate window function as number
This example uses an aggregate window function as an argument for *number*.
```sql Select
- DISTINCT DATE_BUCKET(day, 30, Cast([shipdate] as datetime2)) as DateBucket,
- First_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(day, 30, Cast([shipdate] as datetime2))) as First_Value_In_Bucket,
- Last_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(day, 30, Cast([shipdate] as datetime2))) as Last_Value_In_Bucket
+ DISTINCT DATE_BUCKET(DAY, 30, Cast([shipdate] as datetime2)) as DateBucket,
+ First_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(DAY, 30, Cast([shipdate] as datetime2))) as First_Value_In_Bucket,
+ Last_Value([SalesOrderNumber]) OVER (Order by DATE_BUCKET(DAY, 30, Cast([shipdate] as datetime2))) as Last_Value_In_Bucket
from [dbo].[FactInternetSales] Where ShipDate between '2011-01-03 00:00:00.000' and '2011-02-28 00:00:00.000'
-order by DateBucket
+order by DateBucket;
GO ``` ### C. Using a non-default origin value
GO
This example uses a non-default origin value to generate the date buckets. ```sql
-declare @date datetime2 = '2020-06-15 21:22:11'
-declare @origin datetime2 = '2019-01-01 00:00:00'
-Select DATE_BUCKET(hh, 2, @date, @origin)
+declare @date datetime2 = '2020-06-15 21:22:11';
+declare @origin datetime2 = '2019-01-01 00:00:00';
+Select DATE_BUCKET(HOUR, 2, @date, @origin);
``` ## See also
azure-sql Authentication Aad Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-service-principal.md
To enable an Azure AD object creation in SQL Database on behalf of an Azure AD a
- To check if the server identity is assigned to the server, execute the `Get-AzSqlServer` cmdlet, as shown in the sketch after the note below. > [!NOTE]
- > Server identity can be assigned using REST API and CLI commands as well. For more information, see [az sql server create](/cli/azure/sql/server#az_sql_server_create), [az sql server update](/cli/azure/sql/server#az_sql_server_update), and [Servers - REST API](/rest/api/sql/2020-08-01-preview/servers).
+ > Server identity can be assigned using REST API and CLI commands as well. For more information, see [az sql server create](/cli/azure/sql/server#az-sql-server-create), [az sql server update](/cli/azure/sql/server#az-sql-server-update), and [Servers - REST API](/rest/api/sql/2020-08-01-preview/servers).
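For example, a minimal check with Azure PowerShell might look like the following sketch (resource names are hypothetical):

```powershell
# Hypothetical resource names; inspect the Identity property of the logical server.
$server = Get-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "myserver"
$server.Identity
```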
2. Grant the Azure AD [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) permission to the server identity created or assigned to the server.
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
This how-to guide outlines the steps to create a [logical server](logical-server
- Version 2.26.1 or later is needed when using the Azure CLI. For more information on the installation and the latest version, see [Install the Azure CLI](/cli/azure/install-azure-cli). - [Az 6.1.0](https://www.powershellgallery.com/packages/Az/6.1.0) module or higher is needed when using PowerShell.-- If you're provisioning a managed instance using the Azure CLI, PowerShell, or Rest API, a virtual network and subnet needs to be created before you begin. For more information, see [Create a virtual network for Azure SQL Managed Instance](../managed-instance/virtual-network-subnet-create-arm-template.md).
+- If you're provisioning a managed instance using the Azure CLI, PowerShell, or REST API, a virtual network and subnet need to be created before you begin. For more information, see [Create a virtual network for Azure SQL Managed Instance](../managed-instance/virtual-network-subnet-create-arm-template.md).
## Permissions
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>"
For more information, see [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver).
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
-The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) Rest API can be used to create a logical server with Azure AD-only authentication enabled during provisioning.
+The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) REST API can be used to create a logical server with Azure AD-only authentication enabled during provisioning.
The script below will provision a logical server, set the Azure AD admin as `<AzureADAccount>`, and enable Azure AD-only authentication. The server SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
New-AzSqlInstance -Name "<managedinstancename>" -ResourceGroupName "<ResourceGro
For more information, see [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance).
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
-The [Managed Instances - Create Or Update](/rest/api/sql/2020-11-01-preview/managed-instances/create-or-update) Rest API can be used to create a managed instance with Azure AD-only authentication enabled during provisioning.
+The [Managed Instances - Create Or Update](/rest/api/sql/2020-11-01-preview/managed-instances/create-or-update) REST API can be used to create a managed instance with Azure AD-only authentication enabled during provisioning.
> [!NOTE] > The script requires a virtual network and subnet be created as a prerequisite.
azure-sql Authentication Azure Ad User Assigned Managed Identity Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity-create-server.md
To check the server status after creation, see the following command:
Get-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -ServerName "<ServerName>" -ExpandActiveDirectoryAdministrator ```
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
-The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) Rest API can be used to create a logical server with a user-assigned managed identity.
+The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) REST API can be used to create a logical server with a user-assigned managed identity.
The script below will provision a logical server, set the Azure AD admin as `<AzureADAccount>`, and enable [Azure AD-only authentication](authentication-azure-ad-only-authentication.md). The server SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-overview.md
As discussed previously, auto-failover groups can also be managed programmatical
| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group| | [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
| API | Description | | | |
As discussed previously, auto-failover groups can also be managed programmatical
| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group| | [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
| API | Description | | | |
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
Last updated 01/10/2022
Database backups are an essential part of any business continuity and disaster recovery strategy, because they protect your data from corruption or deletion. These backups enable database restore to a point in time within the configured retention period. If your data protection rules require that your backups are available for an extended time (up to 10 years), you can configure [long-term retention](long-term-retention-overview.md) for both single and pooled databases.
+## Backup and restore essentials
+
+Databases in Azure SQL Managed Instance and non-Hyperscale databases in Azure SQL Database use SQL Server engine technology to back up and restore data. Hyperscale databases have a unique architecture and use a different technology for backup and restore: see [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
+ ### Backup frequency
-Both SQL Database and SQL Managed Instance use SQL Server technology to create [full backups](/sql/relational-databases/backup-restore/full-database-backups-sql-server) every week, [differential backups](/sql/relational-databases/backup-restore/differential-backups-sql-server) every 12-24 hours, and [transaction log backups](/sql/relational-databases/backup-restore/transaction-log-backups-sql-server) every 5 to 10 minutes. The frequency of transaction log backups is based on the compute size and the amount of database activity.
+Both Azure SQL Database and Azure SQL Managed Instance use SQL Server technology to create [full backups](/sql/relational-databases/backup-restore/full-database-backups-sql-server) every week, [differential backups](/sql/relational-databases/backup-restore/differential-backups-sql-server) every 12-24 hours, and [transaction log backups](/sql/relational-databases/backup-restore/transaction-log-backups-sql-server) every 5 to 10 minutes. The frequency of transaction log backups is based on the compute size and the amount of database activity.
When you restore a database, the service determines which full, differential, and transaction log backups need to be restored.
-### Backup storage redundancy
+Hyperscale databases use [snapshot backup technology](#hyperscale-backups-and-storage-redundancy).
-By default, SQL Database and SQL Managed Instance store data in geo-redundant [storage blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../availability-zones/cross-region-replication-azure.md). Geo-redundancy helps to protect against outages impacting backup storage in the primary region and allows you to restore your server to a different region in the event of a disaster.
+### Backup storage redundancy
-The option to configure backup storage redundancy provides the flexibility to choose between locally redundant, zone-redundant, or geo-redundant storage blobs. To ensure that your data stays within the same region where your managed instance or SQL database is deployed, you can change the default geo-redundant backup storage redundancy and configure either locally redundant or zone-redundant storage blobs for backups. Storage redundancy mechanisms store multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters. The configured backup storage redundancy is applied to both short-term backup retention settings that are used for point in time restore (PITR) and long-term retention backups used for long-term backups (LTR).
+By default, Azure SQL Database and Azure SQL Managed Instance store data in geo-redundant [storage blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../availability-zones/cross-region-replication-azure.md). Geo-redundancy helps to protect against outages impacting backup storage in the primary region and allows you to restore your server to a different region in the event of a disaster.
-For SQL Database, the backup storage redundancy can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. After the backup storage redundancy of an existing database is updated, it may take up to 48 hours for the changes to be applied. Geo-restore is disabled as soon as a database is updated to use local or zone redundant storage.
+The option to configure backup storage redundancy provides the flexibility to choose between locally redundant, zone-redundant, or geo-redundant storage blobs. To ensure that your data stays within the same region where your managed instance or database in Azure SQL Database is deployed, you can change the default geo-redundant backup storage redundancy and configure either locally redundant or zone-redundant storage blobs for backups. Storage redundancy mechanisms store multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters. The configured backup storage redundancy is applied to both short-term backup retention settings that are used for point in time restore (PITR) and long-term retention backups used for long-term backups (LTR).
-
-> [!IMPORTANT]
-> Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
+For Azure SQL Database, backup storage redundancy can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. After the backup storage redundancy of an existing database is updated, it may take up to 48 hours for the changes to be applied. Geo-restore is disabled as soon as a database is updated to use local or zone redundant storage. For Hyperscale databases, the selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. Learn more in [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
> [!IMPORTANT] > Zone-redundant storage is currently only available in [certain regions](../../storage/common/storage-redundancy.md#zone-redundant-storage).
-> [!NOTE]
-> Backup storage redundancy for Hyperscale is currently in preview.
- ### Backup usage You can use these backups to: -- **Point-in-time restore of existing database** - [Restore an existing database to a point in time in the past](recovery-using-backups.md#point-in-time-restore) within the retention period by using Azure portal, Azure PowerShell, Azure CLI, or REST API. For SQL Database, this operation creates a new database on the same server as the original database, but uses a different name to avoid overwriting the original database. After restore completes, you can delete the original database. Alternatively, you can [rename](/sql/relational-databases/databases/rename-a-database) both the original database, and then rename the restored database to the original database name. Similarly, for SQL Managed Instance, this operation creates a copy of the database on the same or different managed instance in the same subscription and same region.
+- **Point-in-time restore of existing database** - [Restore an existing database to a point in time in the past](recovery-using-backups.md#point-in-time-restore) within the retention period by using the Azure portal, Azure PowerShell, Azure CLI, or REST API. For SQL Database, this operation creates a new database on the same server as the original database, but uses a different name to avoid overwriting the original database. After restore completes, you can delete the original database. Alternatively, you can [rename](/sql/relational-databases/databases/rename-a-database) the original database, and then rename the restored database to the original database name. Similarly, for SQL Managed Instance, this operation creates a copy of the database on the same or different managed instance in the same subscription and same region.
- **Point-in-time restore of deleted database** - [Restore a deleted database to the time of deletion](recovery-using-backups.md#deleted-database-restore) or to any point in time within the retention period. The deleted database can be restored only on the same server or managed instance where the original database was created. When deleting a database, the service takes a final transaction log backup before deletion, to prevent any data loss. - **Geo-restore** - [Restore a database to another geographic region](recovery-using-backups.md#geo-restore). Geo-restore allows you to recover from a geographic disaster when you cannot access your database or backups in the primary region. It creates a new database on any existing server or managed instance, in any Azure region. > [!IMPORTANT]
- > Geo-restore is available only for SQL databases or managed instances configured with geo-redundant backup storage.
+ > Geo-restore is available only for databases in Azure SQL Database or managed instances configured with geo-redundant backup storage. If you are not currently using geo-replicated backups for a database, you can change this by [configuring backup storage redundancy](#configure-backup-storage-redundancy).
- **Restore from long-term backup** - [Restore a database from a specific long-term backup](long-term-retention-overview.md) of a single database or pooled database, if the database has been configured with a long-term retention policy (LTR). LTR allows you to [restore an old version of the database](long-term-backup-retention-configure.md) by using the Azure portal, Azure CLI, or Azure PowerShell to satisfy a compliance request or to run an old version of the application. For more information, see [Long-term retention](long-term-retention-overview.md). > [!NOTE]
You can use these backups to:
This table summarizes the capabilities and features of [point in time restore (PITR)](recovery-using-backups.md#point-in-time-restore), [geo-restore](recovery-using-backups.md#geo-restore), and [long-term retention backups](long-term-retention-overview.md).
-| **Backup Properties** | Point in time recovery (PITR) | Geo-restore | Long-term backup restore |           
-|-|--|--|--|
-| **Types of SQL backup** | Full, Differential, Log | Replicated copies of PITR backups | Only the full backups | 
+| **Backup Properties** | Point in time recovery (PITR) | Geo-restore | Long-term backup restore |
+|||||
+| **Types of SQL backup** | Full, Differential, Log | Replicated copies of PITR backups | Only the full backups |
| **Recovery Point Objective (RPO)** | 5-10 minutes, based on compute size and amount of database activity. | Up to 1 hour, based on geo-replication.\* | One week (or user's policy).|
-| **Recovery Time Objective (RTO)** | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). |
-| **Retention** | 7 days by default, Up to 35 days |  Enabled by default, same as source.\*\* | Not enabled by default, Retention Up to 10 years. |     
-| **Azure storage**  | Geo-redundant by default. Can optionally configure zone or locally redundant storage. | Available when PITR backup storage redundancy is set to Geo-redundant. Not available when PITR backup store is zone or locally redundant storage. | Geo-redundant by default. Can configure zone or locally redundant storage. | 
+| **Recovery Time Objective (RTO)** | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). |
+| **Retention** | 7 days by default, up to 35 days | Enabled by default, same as source.\*\* | Not enabled by default, retention up to 10 years. |
+| **Azure storage** | Geo-redundant by default. Can optionally configure zone or locally redundant storage. | Available when PITR backup storage redundancy is set to Geo-redundant. Not available when PITR backup storage is zone or locally redundant storage. | Geo-redundant by default. Can configure zone or locally redundant storage. |
| **Use to create new database in same region** | Supported | Supported | Supported | | **Use to create new database in another region** | Not Supported | Supported in any Azure region | Supported in any Azure region |
-| **Use to create new database in another Subscription** | Not Supported | Not Supported\*\*\* | Not Supported\*\*\* |
+| **Use to create new database in another Subscription** | Not Supported | Not Supported\*\*\* | Not Supported\*\*\* |
| **Restore via Azure portal**|Yes|Yes|Yes|
-| **Restore via PowerShell** |Yes|Yes|Yes|
-| **Restore via Azure CLI** |Yes|Yes|Yes|
+| **Restore via PowerShell** |Yes|Yes|Yes|
+| **Restore via Azure CLI** |Yes|Yes|Yes|
| | | | | \* For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](auto-failover-group-overview.md).
The first full backup is scheduled immediately after a new database is created o
## Backup storage consumption
-With SQL Server backup and restore technology, restoring a database to a point in time requires an uninterrupted backup chain consisting of one full backup, optionally one differential backup, and one or more transaction log backups. SQL Database and SQL Managed Instance backup schedule includes one full backup every week. Therefore, to provide PITR within the entire retention period, the system must store additional full, differential, and transaction log backups for up to a week longer than the configured retention period.
+With SQL Server backup and restore technology, restoring a database to a point in time requires an uninterrupted backup chain consisting of one full backup, optionally one differential backup, and one or more transaction log backups. Azure SQL Database and Azure SQL Managed Instance backup schedules include one full backup every week. Therefore, to provide PITR within the entire retention period, the system must store additional full, differential, and transaction log backups for up to a week longer than the configured retention period.
In other words, for any point in time during the retention period, there must be a full backup that is older than the oldest time of the retention period, as well as an uninterrupted chain of differential and transaction log backups from that full backup until the next full backup.
Backups that are no longer needed to provide PITR functionality are automaticall
For all databases, including [TDE encrypted](transparent-data-encryption-tde-overview.md) databases, backups are compressed to reduce backup storage consumption and costs. The average backup compression ratio is 3-4 times; however, it can be significantly lower or higher depending on the nature of the data and whether data compression is used in the database.
-SQL Database and SQL Managed Instance compute your total used backup storage as a cumulative value. Every hour, this value is reported to the Azure billing pipeline, which is responsible for aggregating this hourly usage to calculate your consumption at the end of each month. After the database is deleted, consumption decreases as backups age out and are deleted. Once all backups are deleted and PITR is no longer possible, billing stops.
+Azure SQL Database and Azure SQL Managed Instance compute your total used backup storage as a cumulative value. Every hour, this value is reported to the Azure billing pipeline, which is responsible for aggregating this hourly usage to calculate your consumption at the end of each month. After the database is deleted, consumption decreases as backups age out and are deleted. Once all backups are deleted and PITR is no longer possible, billing stops.
> [!IMPORTANT] > Backups of a database are retained to provide PITR even if the database has been deleted. While deleting and re-creating a database may save storage and compute costs, it may increase backup storage costs, because the service retains backups for each deleted database, every time it is deleted. ### Monitor consumption
-For vCore databases, the storage consumed by each type of backup (full, differential, and log) is reported on the database monitoring pane as a separate metric. The following diagram shows how to monitor the backup storage consumption for a single database. This feature is currently not available for managed instances.
+For vCore databases in Azure SQL Database, the storage consumed by each type of backup (full, differential, and log) is reported on the database monitoring pane as a separate metric. The following diagram shows how to monitor the backup storage consumption for a single database. This feature is currently not available for managed instances.
![Monitor database backup consumption in the Azure portal](./media/automated-backups-overview/backup-metrics.png)
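If you prefer to script this check instead of using the portal, the following is a hedged sketch using the Az.Monitor module; the metric name `full_backup_size_used` is an assumption based on the backup storage metrics shown in the portal:

```powershell
# Hypothetical resource ID; the metric name is assumed from the portal's backup storage metrics.
$dbId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myserver/databases/mydb"
Get-AzMetric -ResourceId $dbId -MetricName "full_backup_size_used" -AggregationType Maximum
```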
Backup storage consumption up to the maximum data size for a database is not cha
## Backup retention
-Azure SQL Database and Azure SQL Managed Instance provide both short-term and long-term retention of backups. The short-term retention backups allow Point-In-Time-Restore (PITR) with the retention period for the database, while the long-term retention provide backups for various compliance requirements.
+Azure SQL Database and Azure SQL Managed Instance provide both short-term and long-term retention of backups. Short-term retention backups allow Point-In-Time-Restore (PITR) within the retention period for the database, while long-term retention provides backups for various compliance requirements.
### Short-term retention
For all new, restored, and copied databases, Azure SQL Database and Azure SQL Ma
> [!NOTE] > A 24-hour differential backup frequency may increase the time required to restore the database.
-With the exception of Hyperscale and Basic tier databases, you can [change backup retention period](#change-the-short-term-retention-policy) per each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once a database has been deleted in the 0-35 days range.
+Except for Hyperscale and Basic tier databases, you can [change backup retention period](#change-the-short-term-retention-policy) per each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once a database has been deleted in the 0-35 days range.
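As a hedged sketch, changing the short-term retention period for an active database with Azure PowerShell (resource names are hypothetical) might look like:

```powershell
# Hypothetical names; set PITR retention for an active database to 28 days.
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" -DatabaseName "mydb" -RetentionDays 28
```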
If you delete a database, the system keeps backups in the same way it would for an online database with its specific retention period. You cannot change backup retention period for a deleted database.
For managed instances, the total billable backup storage size is aggregated at t
Total billable backup storage, if any, will be charged in GB/month as per the rate of the backup storage redundancy used. This backup storage consumption will depend on the workload and size of individual databases, elastic pools, and managed instances. Heavily modified databases have larger differential and log backups, because the size of these backups is proportional to the amount of changed data. Therefore, such databases will have higher backup charges.
-SQL Database and SQL Managed Instance computes your total billable backup storage as a cumulative value across all backup files. Every hour, this value is reported to the Azure billing pipeline, which aggregates this hourly usage to get your backup storage consumption at the end of each month. If a database is deleted, backup storage consumption will gradually decrease as older backups age out and are deleted. Because differential backups and log backups require an earlier full backup to be restorable, all three backup types are purged together in weekly sets. Once all backups are deleted, billing stops.
+Azure SQL Database and Azure SQL Managed Instance compute your total billable backup storage as a cumulative value across all backup files. Every hour, this value is reported to the Azure billing pipeline, which aggregates this hourly usage to get your backup storage consumption at the end of each month. If a database is deleted, backup storage consumption will gradually decrease as older backups age out and are deleted. Because differential backups and log backups require an earlier full backup to be restorable, all three backup types are purged together in weekly sets. Once all backups are deleted, billing stops.
As a simplified example, assume a database has accumulated 744 GB of backup storage and that this amount stays constant throughout an entire month because the database is completely idle. To convert this cumulative storage consumption to hourly usage, divide it by 744.0 (31 days per month * 24 hours per day). SQL Database will report to the Azure billing pipeline that the database consumed 1 GB of PITR backup each hour, at a constant rate. Azure billing will aggregate this consumption and show a usage of 744 GB for the entire month. The cost will be based on the amount/GB/month rate in your region.
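The arithmetic in this example can be sketched directly; the values are the hypothetical ones from the paragraph above:

```powershell
# Hypothetical values from the example above.
$cumulativeBackupGB = 744    # constant cumulative backup storage for the month
$hoursInMonth = 31 * 24      # 744 hours in a 31-day month
$hourlyUsageGB = $cumulativeBackupGB / $hoursInMonth
"Reported each hour: $hourlyUsageGB GB; aggregated for the month: $($hourlyUsageGB * $hoursInMonth) GB"
```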
Backup storage redundancy impacts backup costs in the following way:
For more details about backup storage pricing visit [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/) and [Azure SQL Managed Instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/). > [!IMPORTANT]
-> Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database.
-
-> [!NOTE]
-> Backup storage redundancy for Hyperscale is currently in preview.
+> Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once the resource is provisioned. [Database copy](database-copy.md) process can be used to update the backup storage redundancy settings for an existing Hyperscale database. Learn more in [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
### Monitor costs
For more information, see [Backup Retention REST API](/rest/api/sql/backupshortt
+## Hyperscale backups and storage redundancy
+
+Hyperscale databases in Azure SQL Database use a [unique architecture](service-tier-hyperscale.md#distributed-functions-architecture) with highly scalable storage and compute performance tiers.
+
+Hyperscale backups are snapshot based and are nearly instantaneous. The generated transaction log is stored in long-term Azure storage for the backup retention period. The Hyperscale architecture does not use full database backups or log backups, and the backup and restore considerations described in the previous sections of this article do not apply.
+
+### Backup and restore performance for Hyperscale databases
+
+Storage and compute separation enables Hyperscale to push down backup and restore operations to the storage layer, reducing the processing burden on the primary compute replica. As a result, database backups don't impact the performance of the primary compute node.
+
+Backup and restore operations for Hyperscale databases are fast regardless of data size due to the use of storage snapshots. A database can be restored to any point in time within its backup retention period. Point in time recovery (PITR) is achieved by reverting to file snapshots, and as such is not a size-of-data operation. Restore of a Hyperscale database within the same Azure region is a constant-time operation, and even multiple-terabyte databases can be restored in minutes instead of hours or days. Creation of new databases by restoring an existing backup or copying the database also takes advantage of this feature: creating database copies for development or testing purposes, even of multi-terabyte databases, can be done in minutes within the same region when the same storage type is used.
+
+### Hyperscale backup retention
+
+Hyperscale backup retention is currently seven days; long-term retention policies aren't currently supported.
+
+### Hyperscale storage redundancy applies to both data storage and backup storage
+
+Hyperscale supports configurable storage redundancy. When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy.
+
+### Consider storage redundancy carefully when you create a Hyperscale database
+
+Backup storage redundancy for Hyperscale databases can only be set during database creation. This setting cannot be modified once the resource is provisioned. Geo-restore is only available when geo-redundant storage (RA-GRS) has been chosen for backup storage redundancy. The [database copy](database-copy.md) process can be used to update the storage redundancy settings for an existing Hyperscale database. Copying a database to a different storage type will be a size-of-data operation. Find example code in [configure backup storage redundancy](#configure-backup-storage-redundancy).
+
+> [!IMPORTANT]
+> Zone-redundant storage is currently only available in [certain regions](../../storage/common/storage-redundancy.md#zone-redundant-storage).
+
+### Restoring a Hyperscale database to a different region
+
+If you need to restore a Hyperscale database in Azure SQL Database to a region other than the one it's currently hosted in, as part of a disaster recovery operation or drill, relocation, or any other reason, the primary method is to do a geo-restore of the database. This involves the same steps that you would use to restore any other database in SQL Database to a different region (a hedged PowerShell sketch follows these steps):
+
+1. Create a [server](logical-servers.md) in the target region if you don't already have an appropriate server there. This server should be owned by the same subscription as the original (source) server.
+2. Follow the instructions in the [geo-restore](./recovery-using-backups.md#geo-restore) section of the page on restoring a database in Azure SQL Database from automatic backups.
+
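As a hedged sketch of step 2 with Azure PowerShell (all resource names and the service objective are hypothetical), the restore might look like:

```powershell
# Hypothetical names; find the most recent geo-replicated backup of the source database.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "sourceRG" -ServerName "sourceserver" -DatabaseName "myHSdb"

# Geo-restore it to a server in the target region (same subscription).
Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "targetRG" -ServerName "targetserver" `
    -TargetDatabaseName "myHSdb" -ResourceId $geoBackup.ResourceID `
    -Edition "Hyperscale" -ServiceObjectiveName "HS_Gen5_2"
```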
+> [!NOTE]
+> Because the source and target are in separate regions, the database cannot share snapshot storage with the source database as in non-geo restores, which complete quickly regardless of database size. In the case of a geo-restore of a Hyperscale database, it will be a size-of-data operation, even if the target is in the paired region of the geo-replicated storage. Therefore, a geo-restore will take time proportional to the size of the database being restored. If the target is in the paired region, data transfer will be within a region, which will be significantly faster than a cross-region data transfer, but it will still be a size-of-data operation.
+
+If you prefer, you can copy the database to a different region as well. Learn about [Database Copy for Hyperscale](database-copy.md#database-copy-for-azure-sql-hyperscale).
+ ## Configure backup storage redundancy
-Configurable storage redundancy for SQL Databases can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only.
-For SQL Managed Instance, backup storage redundancy is set on the instance level, and it is applied for all belonging managed databases. It can be configured at the time of an instance creation or updated for existing instances; the backup storage redundancy change would trigger then a new full backup per database and the change will apply for all future backups. The default storage redundancy type is geo-redundancy (RA-GRS).
-For HyperScale backup storage redundancy can only be specified during the create process. Once the resource is provisioned, you can't change the backup storage redundancy option. The default value is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant and geo-redundant backup storage visit [managed instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
+Backup storage redundancy for databases in Azure SQL Database can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. The default value is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant, and geo-redundant backup storage, visit the [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/). Storage redundancy for Hyperscale databases is unique: learn more in [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
+
+For Azure SQL Managed Instance, backup storage redundancy is set at the instance level and applies to all databases on the instance. It can be configured at the time of instance creation or updated for existing instances; a backup storage redundancy change triggers a new full backup per database, and the change applies to all future backups. The default storage redundancy type is geo-redundancy (RA-GRS).
> [!NOTE]
-> Backup storage redundancy change for SQL Managed Instance is currently available only for the Public cloud via Azure Portal.
+> Backup storage redundancy change for SQL Managed Instance is currently available only in the public cloud, via the Azure portal.
### Configure backup storage redundancy by using the Azure portal
To change the Backup storage redundancy option for an existing instance, go to t
#### [SQL Database](#tab/single-database)
-To configure backup storage redundancy when creating a new database, you can specify the `backup-storage-redundancy` parameter. Possible values are Geo, Zone, and Local. By default, all SQL Databases use geo-redundant storage for backups. Geo-restore is disabled if a database is created or updated with local or zone redundant backup storage.
+To configure backup storage redundancy when creating a new database, you can specify the `--backup-storage-redundancy` parameter with the `az sql db create` command. Possible values are `Geo`, `Zone`, and `Local`. By default, all databases in Azure SQL Database use geo-redundant storage for backups. Geo-restore is disabled if a database is created or updated with local or zone redundant backup storage.
+
+This example creates a database in the [General Purpose](service-tier-general-purpose.md) service tier with local backup redundancy:
```azurecli az sql db create \
az sql db create \
--backup-storage-redundancy Local ```
-You can also update an existing database with the `backup-storage-redundancy` parameter.
+Carefully consider the configuration option for `--backup-storage-redundancy` when creating a Hyperscale database. Storage redundancy can only be specified during the database creation process for Hyperscale databases. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. Learn more in [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
+
+Existing Hyperscale databases can migrate to different storage redundancy using [database copy](database-copy.md) or point in time restore: sample code to copy a Hyperscale database follows in this section.
+
+This example creates a database in the [Hyperscale](service-tier-hyperscale.md) service tier with zone redundancy:
+
+```azurecli
+az sql db create \
+ --resource-group myresourcegroup \
+ --server myserver \
+ --name mydb \
+ --tier Hyperscale \
+ --backup-storage-redundancy Zone
+```
+For more information, see [az sql db create](/cli/azure/sql/db#az-sql-db-create) and [az sql db update](/cli/azure/sql/db#az-sql-db-update).
+
+Except for Hyperscale and Basic tier databases, you can update the backup storage redundancy setting for an existing database with the `--backup-storage-redundancy` parameter and the `az sql db update` command. It may take up to 48 hours for the changes to be applied on the database. Switching from geo-redundant backup storage to local or zone redundant storage disables geo-restore.
+
+This example code changes the backup storage redundancy to `Local`.
```azurecli az sql db update \
az sql db update \
--name mydb \ --backup-storage-redundancy Local ```
-For more details, see [az sql db create](/cli/azure/sql/db#az-sql-db-create) and [az sql db update](/cli/azure/sql/db#az-sql-db-update).
+
+You cannot update the backup storage redundancy of a Hyperscale database directly. However, you can change it using [the database copy command](database-copy.md) with the `--backup-storage-redundancy` parameter. This example copies a Hyperscale database to a new database using Gen5 hardware and two vCores. The new database has the backup redundancy set to `Zone`.
+
+```azurecli
+az sql db copy \
+ --resource-group myresourcegroup \
+ --server myserver \
+ --name myHSdb \
+ --dest-resource-group mydestresourcegroup \
+ --dest-server destdb \
+ --dest-name myHSdb \
+ --service-objective HS_Gen5_2 \
+ --read-replicas 0 \
+ --backup-storage-redundancy Zone
+```
+
+For syntax details, see [az sql db copy](/cli/azure/sql/db#az-sql-db-copy). For an overview of database copy, visit [Copy a transactionally consistent copy of a database in Azure SQL Database](database-copy.md).
#### [SQL Managed Instance](#tab/managed-instance)
Configuring backup storage redundancy is not available for a SQL Managed Instanc
#### [SQL Database](#tab/single-database)
-To configure backup storage redundancy when creating a new database, you can specify the -BackupStorageRedundancy parameter. Possible values are Geo, Zone, and Local. By default, all SQL Databases use geo-redundant storage for backups. Geo-restore is disabled if a database is created with local or zone redundant backup storage.
+To configure backup storage redundancy when creating a new database, you can specify the `-BackupStorageRedundancy` parameter with the `New-AzSqlDatabase` cmdlet. Possible values are `Geo`, `Zone`, and `Local`. By default, all databases in Azure SQL Database use geo-redundant storage for backups. Geo-restore is disabled if a database is created with local or zone redundant backup storage.
+
+This example creates a database in the [General Purpose](service-tier-general-purpose.md) service tier with local backup redundancy:
+
+```powershell
+# Create a new database with locally redundant backup storage.
+New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database03" -Edition "GeneralPurpose" -Vcore 2 -ComputeGeneration "Gen5" -BackupStorageRedundancy Local
+```
+
+Carefully consider the configuration option for `-BackupStorageRedundancy` when creating a Hyperscale database. Storage redundancy can only be specified during the database creation process for Hyperscale databases. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. Learn more in [Hyperscale backups and storage redundancy](#hyperscale-backups-and-storage-redundancy).
+
+Existing Hyperscale databases can migrate to different storage redundancy using [database copy](database-copy.md) or point in time restore: sample code to copy a Hyperscale database follows in this section.
+
+This example creates a database in the [Hyperscale](service-tier-hyperscale.md) service tier with zone redundancy:
```powershell # Create a new database with zone-redundant backup storage.
-New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database03" -Edition "GeneralPurpose" -Vcore 2 -ComputeGeneration "Gen5" -BackupStorageRedundancy Geo
+New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database03" -Edition "Hyperscale" -Vcore 2 -ComputeGeneration "Gen5" -BackupStorageRedundancy Zone
```
-For details visit [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase).
+For syntax details visit [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase).
-To update backup storage redundancy of an existing database, you can use the -BackupStorageRedundancy parameter. Possible values are Geo, Zone, and Local.
-It may take up to 48 hours for the changes to be applied on the database. Switching from geo-redundant backup storage to local or zone redundant storage disables geo-restore.
+Except for Hyperscale and Basic tier databases, you can use the `-BackupStorageRedundancy` parameter with the `Set-AzSqlDatabase` cmdlet to update the backup storage redundancy setting for an existing database. Possible values are `Geo`, `Zone`, and `Local`. It may take up to 48 hours for the changes to be applied on the database. Switching from geo-redundant backup storage to local or zone redundant storage disables geo-restore.
+
+This example code changes the backup storage redundancy to `Local`.
```powershell # Change the backup storage redundancy for Database01 to locally redundant.
-Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" -BackupStorageRedundancy Zone
+Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" -BackupStorageRedundancy Local
``` For details, visit [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase).
+Backup storage redundancy of an existing Hyperscale database cannot be updated. However, you can use the [database copy command](database-copy.md) to create a copy of the database and use the `-BackupStorageRedundancy` parameter to update the backup storage redundancy. This example copies a Hyperscale database to a new database using Gen5 hardware and two vCores. The new database has the backup redundancy set to `Zone`.
+
+```powershell
+# Copy a Hyperscale database to a new database with zone-redundant backup storage.
+New-AzSqlDatabaseCopy -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "HSSourceDB" -CopyResourceGroupName "DestResourceGroup" -CopyServerName "DestServer" -CopyDatabaseName "HSDestDB" -Vcore 2 -ComputeGeneration "Gen5" -ComputeModel Provisioned -BackupStorageRedundancy Zone
+```
+
+For syntax details, visit [New-AzSqlDatabaseCopy](/powershell/module/az.sql/new-azsqldatabasecopy).
+
+For an overview of database copy, visit [Copy a transactionally consistent copy of a database in Azure SQL Database](database-copy.md).
> [!NOTE] > To use the -BackupStorageRedundancy parameter with database restore, database copy, or create secondary operations, use Azure PowerShell version Az.Sql 2.11.0.
azure-sql Automatic Tuning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automatic-tuning-enable.md
Please note that DROP_INDEX option at this time is not compatible with applicati
Once you have selected your desired configuration, click **Apply**.
-### Rest API
+### REST API
To find out more about using a REST API to enable automatic tuning on a single database, see [Azure SQL Database automatic tuning UPDATE and GET HTTP methods](/rest/api/sql/databaseautomatictuning).
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/data-discovery-and-classification-overview.md
Manage classifications and recommendations for Azure SQL Database and Azure SQL
- [Enable-AzSqlInstanceDatabaseSensitivityRecommendation](/powershell/module/az.sql/enable-azsqlinstancedatabasesensitivityrecommendation) - [Disable-AzSqlInstanceDatabaseSensitivityRecommendation](/powershell/module/az.sql/disable-azsqlinstancedatabasesensitivityrecommendation)
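For instance, a hedged sketch of disabling a recommendation for a single column (names are hypothetical):

```powershell
# Hypothetical names; stop recommending classification for one column on a managed instance database.
Disable-AzSqlInstanceDatabaseSensitivityRecommendation -ResourceGroupName "myResourceGroup" `
    -InstanceName "myinstance" -DatabaseName "mydb" `
    -SchemaName "dbo" -TableName "Customers" -ColumnName "Email"
```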
-### Use the Rest API
+### Use the REST API
You can use the REST API to programmatically manage classifications and recommendations. The published REST API supports the following operations:
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that have transitio
| Feature | GA Month | Details | | | | |
+| [Storage redundancy for Hyperscale databases](automated-backups-overview.md#configure-backup-storage-redundancy) | March 2022 | When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. |
| [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. | | [Azure AD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).| | [Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
azure-sql Elastic Pool Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-pool-manage.md
Title: Manage elastic pools
-description: Create and manage Azure SQL Database elastic pools using the Azure portal, PowerShell, the Azure CLI, Transact-SQL (T-SQL), and Rest API.
+description: Create and manage Azure SQL Database elastic pools using the Azure portal, PowerShell, the Azure CLI, Transact-SQL (T-SQL), and REST API.
azure-sql Firewall Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/firewall-configure.md
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" `
| Cmdlet | Level | Description | | | | |
-|[az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_create)|Server|Creates a server IP firewall rule|
-|[az sql server firewall-rule list](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_list)|Server|Lists the IP firewall rules on a server|
-|[az sql server firewall-rule show](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_show)|Server|Shows the detail of an IP firewall rule|
-|[az sql server firewall-rule update](/cli/azure/sql/server/firewall-rule##az_sql_server_firewall_rule_update)|Server|Updates an IP firewall rule|
-|[az sql server firewall-rule delete](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_delete)|Server|Deletes an IP firewall rule|
+|[az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-create)|Server|Creates a server IP firewall rule|
+|[az sql server firewall-rule list](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-list)|Server|Lists the IP firewall rules on a server|
+|[az sql server firewall-rule show](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-show)|Server|Shows the detail of an IP firewall rule|
+|[az sql server firewall-rule update](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-update)|Server|Updates an IP firewall rule|
+|[az sql server firewall-rule delete](/cli/azure/sql/server/firewall-rule#az-sql-server-firewall-rule-delete)|Server|Deletes an IP firewall rule|
The following example uses CLI to set a server-level IP firewall rule:
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/recovery-using-backups.md
For a sample PowerShell script showing how to restore a deleted instance databas
## Geo-restore > [!IMPORTANT]
-> - Geo-restore is available only for SQL databases or managed instances configured with geo-redundant [backup storage](automated-backups-overview.md#backup-storage-redundancy).
+> - Geo-restore is available only for SQL databases or managed instances configured with geo-redundant [backup storage](automated-backups-overview.md#backup-storage-redundancy). If you are not currently using geo-replicated backups for a database, you can change this by [configuring backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy).
> - Geo-restore can be performed on SQL databases or managed instances residing in the same subscription only. You can restore a database on any SQL Database server or an instance database on any managed instance in any Azure region from the most recent geo-replicated backups. Geo-restore uses a geo-replicated backup as its source. You can request geo-restore even if the database or datacenter is inaccessible due to an outage.
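As a sketch of that flow with Az PowerShell (server, database, and resource group names are placeholders; the target server must already exist in the target region and belong to the same subscription):

```powershell
# Sketch: geo-restore the latest geo-replicated backup to a server in another region.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "SourceRG" `
    -ServerName "source-server" -DatabaseName "mydb"

Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "TargetRG" `
    -ServerName "target-server" -TargetDatabaseName "mydb-georestored" `
    -ResourceId $geoBackup.ResourceID
```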
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
The vCore-based service tiers are differentiated based on database availability
| **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB | | **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS| |**Availability**| 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
-|**Backups** | A choice of geo-redundant, zone-redundant <sup>2</sup> , or locally-redundant<sup>2</sup> backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant <sup>3</sup>, or locally-redundant<sup>3</sup> backup storage, 7 day retention. | A choice of geo-redundant,zone-redundant<sup>2</sup>, or locally-redundant<sup>2</sup> backup storage, 1-35 day retention (default 7 days) |
+|**Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention. | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
-<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier
-<sup>2</sup> In preview
-<sup>3</sup> In preview, for new Hyperscale databases only
+<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier.
## Distributed functions architecture
Page servers are systems representing a scaled-out storage engine. Each page se
### Log service
-The log service accepts transaction log records from the primary compute replica, persists them in a durable cache, and forwards the log records to the rest of compute replicas (so they can update their caches) as well as the relevant page server(s), so that the data can be updated there. In this way, all data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers. Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service also has local memory and SSD caches to speed up access to log records. The log on hyperscale is practically infinite, with the restriction that a single transaction cannot generate more than 1TB of log. Additionally , if using [Change Data Capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server), at most 1TB of log can be generated since the start of the oldest active transaction. It is recommended to avoid unnecessarily large transactions to stay below this limit.
+The log service accepts transaction log records from the primary compute replica, persists them in a durable cache, and forwards the log records to the rest of the compute replicas (so they can update their caches) as well as the relevant page server(s), so that the data can be updated there. In this way, all data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers. Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service also has local memory and SSD caches to speed up access to log records. The log in Hyperscale is practically infinite, with the restriction that a single transaction cannot generate more than 1 TB of log. Additionally, if using [Change Data Capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server), at most 1 TB of log can be generated since the start of the oldest active transaction. It is recommended to avoid unnecessarily large transactions to stay below this limit.
### Azure storage Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This storage is used for backup purposes, as well as for replication between Azure regions. Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast regardless of data size. A database can be restored to any point in time within its backup retention period.
-## Backup and restore
-
-Backups are file-snapshot based and hence they're nearly instantaneous. Storage and compute separation enables pushing down the backup/restore operation to the storage layer to reduce the processing burden on the primary compute replica. As a result, database backup doesn't impact performance of the primary compute node. Similarly, point in time recovery (PITR) is done by reverting to file snapshots, and as such is not a size of data operation. Restore of a Hyperscale database in the same Azure region is a constant-time operation, and even multiple-terabyte databases can be restored in minutes instead of hours or days. Creation of new databases by restoring an existing backup also takes advantage of this feature: creating database copies for development or testing purposes, even of multi-terabyte databases, is doable in minutes.
-
-For geo-restore of Hyperscale databases, see [Restoring a Hyperscale database to a different region](#restoring-a-hyperscale-database-to-a-different-region).
- ## Scale and performance advantages With the ability to rapidly spin up/down additional read-only compute nodes, the Hyperscale architecture allows significant read scale capabilities and can also free up the primary compute node for serving more write requests. Also, the compute nodes can be scaled up/down rapidly thanks to Hyperscale's shared-storage design.
As in all other service tiers, Hyperscale guarantees data durability for committ
For Hyperscale SLA, see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database).
-## Disaster recovery for Hyperscale databases
+## Backup and restore
-### Restoring a Hyperscale database to a different region
+Backup and restore operations for Hyperscale databases are file-snapshot based, which makes them nearly instantaneous. Because the Hyperscale architecture uses the storage layer for backup and restore, the processing burden and performance impact on compute replicas are significantly reduced. Learn more in [Hyperscale backups and storage redundancy](automated-backups-overview.md#hyperscale-backups-and-storage-redundancy).
-If you need to restore a Hyperscale database in Azure SQL Database to a region other than the one it's currently hosted in, as part of a disaster recovery operation or drill, relocation, or any other reason, the primary method is to do a geo-restore of the database. This involves exactly the same steps as what you would use to restore any other database in SQL Database to a different region:
+## Disaster recovery for Hyperscale databases
-1. Create a [server](logical-servers.md) in the target region if you don't already have an appropriate server there. This server should be owned by the same subscription as the original (source) server.
-2. Follow the instructions in the [geo-restore](./recovery-using-backups.md#geo-restore) topic of the page on restoring a database in Azure SQL Database from automatic backups.
+If you need to restore a Hyperscale database in Azure SQL Database to a region other than the one it's currently hosted in, as part of a disaster recovery operation or drill, relocation, or any other reason, the primary method is to do a geo-restore of the database. Geo-restore is only available when geo-redundant storage (RA-GRS) has been chosen for storage redundancy.
-> [!NOTE]
-> Because the source and target are in separate regions, the database cannot share snapshot storage with the source database as in non-geo restores, which complete quickly regardless of database size. In the case of a geo-restore of a Hyperscale database, it will be a size-of-data operation, even if the target is in the paired region of the geo-replicated storage. Therefore, a geo-restore will take time proportional to the size of the database being restored. If the target is in the paired region, data transfer will be within a region, which will be significantly faster than a cross-region data transfer, but it will still be a size-of-data operation.
+Learn more in [restoring a Hyperscale database to a different region](automated-backups-overview.md#restoring-a-hyperscale-database-to-a-different-region).
## <a name=regions></a>Available regions
azure-sql Authentication Azure Ad User Assigned Managed Identity Create Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/authentication-azure-ad-user-assigned-managed-identity-create-managed-instance.md
For more information, see [New-AzSqlInstance](/powershell/module/az.sql/new-azsq
> [!NOTE] > The above example provisions a managed instance with only a user-assigned managed identity. You could set `-IdentityType` to `"UserAssigned,SystemAssigned"` if you wanted both types of managed identities to be created with the instance.
-# [Rest API](#tab/rest-api)
+# [REST API](#tab/rest-api)
-The [Managed Instances - Create Or Update](/rest/api/sql/2020-11-01-preview/managed-instances/create-or-update) Rest API can be used to create a managed instance with a user-assigned managed identity.
+The [Managed Instances - Create Or Update](/rest/api/sql/2020-11-01-preview/managed-instances/create-or-update) REST API can be used to create a managed instance with a user-assigned managed identity.
> [!NOTE] > The script requires a virtual network and subnet be created as a prerequisite.
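A trimmed sketch of such a request from Az PowerShell follows. Every value here is a placeholder assumption (names, region, SKU, `api-version`), and a real Create Or Update payload needs more properties (such as vCores and storage size); the point is the `identity` block and the `primaryUserAssignedIdentityId` property.

```powershell
# Sketch: the identity-related parts of a Managed Instances - Create Or Update call.
$umiId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers" +
         "/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
$path  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
         "/providers/Microsoft.Sql/managedInstances/<instance-name>" +
         "?api-version=2020-11-01-preview"

$payload = @{
    location = "westus2"
    sku      = @{ name = "GP_Gen5" }
    identity = @{
        type                   = "UserAssigned"
        userAssignedIdentities = @{ $umiId = @{} }   # identity assigned to the instance
    }
    properties = @{
        administratorLogin            = "<admin-login>"
        administratorLoginPassword    = "<admin-password>"
        subnetId                      = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
        primaryUserAssignedIdentityId = $umiId       # identity used as the primary
    }
} | ConvertTo-Json -Depth 6

Invoke-AzRestMethod -Path $path -Method PUT -Payload $payload
```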
azure-sql User Initiated Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/user-initiated-failover.md
Use the following CLI command to failover read secondary node, applicable to BC
az sql mi failover -g myresourcegroup -n myinstancename --replica-type ReadableSecondary ```
-### Using Rest API
+### Using REST API
For advanced users who may need to automate failovers of their SQL Managed Instances, for example to implement a continuous testing pipeline or automated performance mitigators, this function can be accomplished by initiating a failover through an API call. See [Managed Instances - Failover REST API](/rest/api/sql/managed%20instances%20-%20failover/failover) for details.
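For illustration, a minimal sketch of that call from Az PowerShell (the resource names and `api-version` are placeholders):

```powershell
# Sketch: request a failover of the readable secondary replica of a managed instance.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Sql/managedInstances/<instance-name>/failover" +
        "?replicaType=ReadableSecondary&api-version=2021-02-01-preview"

Invoke-AzRestMethod -Path $path -Method POST
```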
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/storage-configuration.md
For example, the following PowerShell creates a new storage pool with the interl
$PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"} New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Windows Storage on <VM Name>" `
- -PhysicalDisks $PhysicalDisks | New- VirtualDisk -FriendlyName "DataFiles" `
- -Interleave 65536 -NumberOfColumns $PhysicalDisks .Count -ResiliencySettingName simple `
- –UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
+ -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
+ -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
+ -UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
-UseMaximumSize |Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" ` -AllocationUnitSize 65536 -Confirm:$false ```
In Windows Server 2016 and later, the default value for `-StorageSubsystemFriend
$PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"} New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces on <VMName>" `
- -PhysicalDisks $PhysicalDisks | New- VirtualDisk -FriendlyName "DataFiles" `
- -Interleave 65536 -NumberOfColumns $PhysicalDisks .Count -ResiliencySettingName simple `
- –UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
+ -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
+ -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
+ -UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
-UseMaximumSize |Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" ` -AllocationUnitSize 65536 -Confirm:$false ```
azure-web-pubsub Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-performance.md
In this case, the app server writes the original message back in the http r
-### Rest API
+### REST API
Azure Web PubSub provides powerful [APIs](/rest/api/webpubsub/) to manage clients and deliver real-time messages.
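As a hedged sketch of what such a call can look like (the instance endpoint, hub name, and `api-version` are placeholders; this assumes the caller holds a Web PubSub data-plane role, so an Azure AD token can be used instead of an access key):

```powershell
# Sketch: broadcast a plain-text message to every client connected to a hub.
# Endpoint, hub name, and api-version are placeholders.
$token = (Get-AzAccessToken -ResourceUrl "https://webpubsub.azure.com").Token

Invoke-RestMethod -Method Post `
    -Uri "https://<your-instance>.webpubsub.azure.com/api/hubs/<hub>/:send?api-version=2021-10-01" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "text/plain" `
    -Body "Hello from the REST API"
```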
backup Backup Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-file-share-rest-api.md
Last updated 02/16/2020
-# Backup Azure file share using Azure Backup via Rest API
+# Backup Azure file share using Azure Backup via REST API
This article describes how to back up an Azure File share using Azure Backup via REST API.
Since the backup job is a long running operation, it needs to be tracked as expl
## Next steps -- Learn how to [restore Azure file shares using Rest API](restore-azure-file-share-rest-api.md).
+- Learn how to [restore Azure file shares using REST API](restore-azure-file-share-rest-api.md).
backup Backup Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-vm-rest-api.md
As the backup job is a long running operation, it needs to be tracked as explain
## Next steps -- Learn how to [restore SQL databases using Rest API](restore-azure-sql-vm-rest-api.md).
+- Learn how to [restore SQL databases using REST API](restore-azure-sql-vm-rest-api.md).
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Title: 'Tutorial: Deploy Bastion using manual settings: Azure portal'
+ Title: 'Tutorial: Deploy Bastion: Azure portal'
description: Learn how to deploy Bastion using manual settings in the Azure portal. Previously updated : 02/25/2022 Last updated : 02/28/2022 # Tutorial: Deploy Bastion using manual settings: Azure portal
-This tutorial shows you how to deploy Azure Bastion to your virtual network from the Azure portal using manual settings that you specify. While you can [deploy Bastion using VM settings](quickstart-host-portal.md), deploying Bastion using manual settings lets you specify granular settings for the bastion host. After you deploy Bastion, the RDP/SSH experience is available to all of the virtual machines in the virtual network. Azure Bastion is a PaaS service that is maintained for you, not a bastion host that you install on one of your VMs. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This tutorial helps you deploy Azure Bastion from the Azure portal using manual settings. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
-In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count). After the deployment is complete, you connect to your VM via private IP address. The VM you connect to doesn't need a public IP address, client software, agent, or a special configuration. If your VM has a public IP address that you don't need for anything else, you can remove it.
+In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count). After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
+
+Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
In this tutorial, you'll learn how to: > [!div class="checklist"]
-> * Create a bastion host for your VNet.
-> * Connect to a Windows virtual machine.
+> * Deploy Bastion to your VNet.
+> * Connect to a virtual machine.
> * Remove the public IP address from a virtual machine. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites * A [virtual network](../virtual-network/quick-create-portal.md). This will be the VNet to which you deploy Bastion.
-* A Windows virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
-* The following required roles for your resources:
- * Required VM roles:
- * Reader role on the virtual machine.
- * Reader role on the NIC with private IP of the virtual machine.
+* A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
+* The following required roles for your resources:
+
+ * Required VM roles:
-* Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM:
- * Inbound ports: RDP (3389)
+ * Reader role on the virtual machine.
+ * Reader role on the NIC with private IP of the virtual machine.
+ * Required inbound ports:
- >[!NOTE]
- >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ * For Windows VMs - RDP (3389)
+ * For Linux VMs - SSH (22)
+
+ > [!NOTE]
+ > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
> ### <a name="values"></a>Example values
You can use the following example values when creating this configuration, or yo
| Public IP address SKU | Standard | | Assignment | Static |
- >[!IMPORTANT]
- >For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
+ > [!IMPORTANT]
+ > For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
>
-## <a name="createhost"></a>Create a bastion host
+## <a name="createhost"></a>Deploy Bastion
-This section helps you create the bastion object in your VNet. This is required in order to create a secure connection to a VM in the VNet.
+This section helps you deploy Bastion to your VNet. Once Bastion is deployed, you can connect securely to any VM in the VNet using its private IP address.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Type **Bastion** into the search.
+1. Type **Bastion** in the search.
1. Under services, select **Bastions**. 1. On the Bastions page, select **+ Create** to open the **Create a Bastion** page.
-1. On the **Create a Bastion** page, configure a new Bastion resource.
+1. On the **Create a Bastion** page, configure the required settings.
- :::image type="content" source="./media/tutorial-create-host-portal/review-create.png" alt-text="Screenshot of Create a Bastion portal page." lightbox="./media/tutorial-create-host-portal/create.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/review-create.png" alt-text="Screenshot of Create a Bastion portal page." lightbox="./media/tutorial-create-host-portal/review-create.png":::
### Project details
-* **Subscription**: The Azure subscription you want to use.
+* **Subscription**: Select your Azure subscription.
-* **Resource Group**: The Azure resource group in which the new Bastion resource will be created. If you don't have an existing resource group, you can create a new one.
+* **Resource Group**: Select your Resource Group.
### Instance details
-* **Name**: The name of the new Bastion resource.
+* **Name**: Type the name that you want to use for your bastion resource.
* **Region**: The Azure public region in which the resource will be created. Choose the region in which your virtual network resides.
-* **Tier:** The tier is also known as the **SKU**. For this tutorial, we select the **Standard** SKU from the dropdown. Selecting the Standard SKU lets you configure the instance count for host scaling. The Basic SKU doesn't support host scaling. For more information about features that require te Standard SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
+* **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. The Standard SKU lets you configure the instance count for host scaling and other features. For more information about features that require the Standard SKU, see [Configuration settings - SKU](configuration-settings.md#skus).
-* **Instance count:** This is the setting for **host scaling** and configured in scale unit increments. Use the slider to configure the instance count. If you specified the Basic tier SKU, you canΓÇÖt configure this setting. For more information, see [Configuration settings - host scaling](configuration-settings.md#instance). In this tutorial, you can select the instance count you'd prefer, keeping in mind any scale unit [pricing](https://azure.microsoft.com/pricing/details/azure-bastion) considerations.
+* **Instance count:** This is the setting for **host scaling**. It's configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion).
### Configure virtual networks
-* **Virtual network**: The virtual network in which the Bastion resource will be created. You can create a new virtual network in the portal during this process, or use an existing virtual network. If you're using an existing virtual network, make sure the existing virtual network has enough free address space to accommodate the Bastion subnet requirements. If you don't see your virtual network from the dropdown, make sure you've selected the correct Resource Group.
+* **Virtual network**: Select your virtual network. If you don't see your VNet in the dropdown list, make sure you selected the correct Resource Group and Region in the previous settings on this page.
-* **Subnet**: Once you create or select a virtual network, the subnet field appears on the page. This is the subnet in which your Bastion instances will be deployed. The name must be **AzureBastionSubnet**. See the following steps to add the subnet.
+* **Subnet**: Once you select a virtual network, the subnet field appears on the page. This is the subnet to which your Bastion instances will be deployed. In most cases, you won't already have the subnet **AzureBastionSubnet** configured. The subnet name must be **AzureBastionSubnet**. See the following steps to add the subnet.
#### Manage subnet configuration
-In most cases, you won't already have an AzureBastionSubnet configured. To configure the bastion subnet:
+To configure the bastion subnet:
1. Select **Manage subnet configuration**. This takes you to the **Subnets** page.
- :::image type="content" source="./media/tutorial-create-host-portal/subnet.png" alt-text="Screenshot of Manage subnet configuration.":::
-1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
+ :::image type="content" source="./media/tutorial-create-host-portal/subnet.png" alt-text="Screenshot of Manage subnet configuration." lightbox="./media/tutorial-create-host-portal/subnet.png":::
+1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
1. Create a subnet using the following guidelines: * The subnet must be named **AzureBastionSubnet**.
- * The subnet must be at least /26 or larger. For the Standard SKU, we recommend /26 or larger to accommodate future additional host scaling instances.
-
- :::image type="content" source="./media/tutorial-create-host-portal/bastion-subnet.png" alt-text="Screenshot of the AzureBastionSubnet subnet.":::
+ * The subnet must be at least **/26 or larger** (/26, /25, /24, etc.) to accommodate features available with the Standard SKU.
-1. You don't need to fill out additional fields on this page. Select **Save** at the bottom of the page to save the settings and close the **Add subnet** page.
+1. You don't need to fill out additional fields on this page. Select **Save** at the bottom of the page to create the subnet.
1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
- :::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion.":::
+ :::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion." lightbox="./media/tutorial-create-host-portal/create-a-bastion.png":::
### Public IP address
-The public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). Create a **new public IP address**. The public IP address must be in the same region as the Bastion resource you're creating. This IP address doesn't have anything to do with any of the VMs that you want to connect to. It's the public IP address for the Bastion host resource.
+This is the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. This IP address doesn't have anything to do with any of the VMs that you want to connect to.
- * **Public IP address name**: The name of the public IP address resource. For this tutorial, you can leave the default.
- * **Public IP address SKU**: This setting is prepopulated by default to **Standard**. Azure Bastion uses/supports only the Standard public IP SKU.
- * **Assignment**: This setting is prepopulated by default to **Static**.
+1. Select **Create new**.
+1. For **Public IP address name**, you can leave the default naming suggestion.
+1. For **Public IP address SKU**, this setting is prepopulated by default to **Standard**. Azure Bastion supports only the Standard public IP address SKU.
+1. For **Assignment**, this setting is prepopulated by default to **Static**. You can't change this setting.
### Review and create
-1. When you finish specifying the settings, select **Review + Create**. This validates the values. Once validation passes, you can create the Bastion resource.
-1. Review your settings.
+1. When you finish specifying the settings, select **Review + Create**. This validates the values. Once validation passes, you can deploy Bastion.
+1. Review your settings.
1. At the bottom of the page, select **Create**.
-1. You'll see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 5 minutes for the Bastion resource to be created and deployed.
+1. You'll see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
## Connect to a VM ## Remove VM public IP address
your resources using the following steps:
## Next steps
-In this tutorial, you created a Bastion host and associated it to a virtual network. You then removed the public IP address from a VM and connected to it. You may choose to use Network Security Groups with your Azure Bastion subnet. To do so, see:
+In this tutorial, you deployed Bastion to a virtual network and connected to a VM. You then removed the public IP address from the VM. Next, learn about and configure additional Bastion features.
> [!div class="nextstepaction"]
-> [Work with NSGs](bastion-nsg.md)
+> [Bastion features and configuration settings](configuration-settings.md)
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
Azure CDN provides the option of associating a custom domain with a CDN endpoint
In this tutorial, you learn how to: > [!div class="checklist"] > - Create a CNAME DNS record.
-> - Associate the custom domain with your CDN endpoint.
+> - Add a custom domain to your CDN endpoint.
> - Verify the custom domain. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
For Azure CDN, the source domain name is your custom domain name and the destina
Azure CDN routes traffic addressed to the source custom domain to the destination CDN endpoint hostname after it verifies the CNAME record.
-A custom domain and its subdomain can be associated with a single endpoint at a time.
+A custom domain and its subdomain can be added to only a single endpoint at a time.
Use multiple CNAME records for different subdomains from the same custom domain for different Azure services.
To create a CNAME record for your custom domain:
4. If you've previously created a temporary cdnverify subdomain CNAME record, delete it.
-5. If you're using this custom domain in production for the first time, follow the steps for [Associate the custom domain with your CDN endpoint](#associate-the-custom-domain-with-your-cdn-endpoint) and [Verify the custom domain](#verify-the-custom-domain).
+5. If you're using this custom domain in production for the first time, follow the steps for [Add a custom domain to your CDN endpoint](#add-a-custom-domain-to-your-cdn-endpoint) and [Verify the custom domain](#verify-the-custom-domain).
-## Associate the custom domain with your CDN endpoint
+## Add a custom domain to your CDN endpoint
After you've registered your custom domain, you can then add it to your CDN endpoint.
After you've registered your custom domain, you can then add it to your CDN endp
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the CDN profile containing the endpoint that you want to map to a custom domain.
-2. On the **CDN profile** page, select the CDN endpoint to associate with the custom domain.
+2. On the **CDN profile** page, select the CDN endpoint to which you want to add the custom domain.
:::image type="content" source="media/cdn-map-content-to-custom-domain/cdn-endpoint-selection.png" alt-text="CDN endpoint selection" border="true":::
If you no longer want to associate your endpoint with a custom domain, remove th
3. From the **Endpoint** page, under Custom domains, right-click the custom domain that you want to remove, then select **Delete** from the context menu. Select **Yes**.
- The custom domain is disassociated from your endpoint.
+ The custom domain is removed from your endpoint.
# [**PowerShell**](#tab/azure-powershell-cleanup)
-If you no longer want to associate your endpoint with a custom domain, remove the custom domain by doing the following steps:
+If you no longer want your endpoint to have a custom domain, remove the custom domain by doing the following steps:
1. Go to your DNS provider, delete the CNAME record for the custom domain, or update the CNAME record for the custom domain to a non-Azure CDN endpoint.
In this tutorial, you learned how to:
> [!div class="checklist"] > - Create a CNAME DNS record.
-> - Associate the custom domain with your CDN endpoint.
+> - Add a custom domain to your CDN endpoint.
> - Verify the custom domain. Advance to the next tutorial to learn how to configure HTTPS on an Azure CDN custom domain.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuGaaiNeural` | General | | Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuMaanNeural` | General | | Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-WanLungNeural` | General |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` | Optimized for spontaneous conversation |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaohanNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomoNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` | Optimized for narrating |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoruiNeural` | Senior voice, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice, optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxiaoNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxuanNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` | Optimized for customer service |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Child voice, optimized for story narrating | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyangNeural` | Optimized for news reading,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support | |-||--|-||
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` | Optimized for spontaneous conversation |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` | Optimized for narrating |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice,optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` | Optimized for customer service |
| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` <sup>New</sup> | General |
Use the following table to determine supported styles and roles for each neural
|en-US-SaraNeural|`angry`, `cheerful`, `sad`||| |ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`||| |pt-BR-FranciscaNeural|`calm`|||
-|zh-CN-XiaohanNeural|`affectionate`, `angry`, `cheerful`, `customerservice`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported|
-|zh-CN-XiaomoNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
-|zh-CN-XiaoruiNeural|`angry`, `fearful`, `sad`|Supported||
+|zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
+|zh-CN-XiaomoNeural|`affectionate`, `angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `envious`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported|
+|zh-CN-XiaoruiNeural|`angry`, `calm`, `fearful`, `sad`|Supported||
|zh-CN-XiaoshuangNeural|`chat`|Supported||
-|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `fearful`, `gentle`, `lyrical`, `newscast`, `sad`, `serious`|Supported||
-|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `customerservice`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported||
-|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `customerservice`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
-|zh-CN-YunyangNeural|`customerservice`|Supported||
-|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `fearful`, `sad`, `serious`|Supported|Supported|
+|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `sad`, `serious`|Supported||
+|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
+|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported|
+|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
+|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
### Custom Neural Voice
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--|
-| Max number of transactions per second (TPS) per Speech service resource | See [General](#general) | See [General](#general) |
-| Max number of datasets per Speech service resource | 10 | 500 |
-| Max number of simultaneous dataset uploads per Speech service resource | 2 | 5 |
-| Max data file size for data import per dataset | 2 GB | 2 GB |
-| Upload of long audios or audios without script | No | Yes |
+| Max number of transactions per second (TPS) per Speech service resource | Not available for F0 | See [General](#general) |
+| Max number of datasets per Speech service resource | N/A | 500 |
+| Max number of simultaneous dataset uploads per Speech service resource | N/A | 5 |
+| Max data file size for data import per dataset | N/A | 2 GB |
+| Upload of long audios or audios without script | N/A | Yes |
| Max number of simultaneous model trainings per Speech service resource | N/A | 3 | | Max number of custom endpoints per Speech service resource | N/A | 50 | | *Concurrent request limit for Custom Neural Voice* | | |
cognitive-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2-preview/how-to/create-manage-workspace.md
The person who created the workspace is the owner. Within **Workspace settings**
* **Reader**. A reader can view (and download if available) all information in the workspace.
+> [!NOTE]
+> The Custom Translator workspace sharing policy has changed. For additional security measures, you can share a workspace only with people who have recently signed in to the Custom Translator portal.
+ 1. Select **Share**. 1. Complete the **email address** field for collaborators.
The person who created the workspace is the owner. Within **Workspace settings**
:::image type="content" source="../media/quickstart/manage-workspace-settings-1.png" alt-text="Screenshot illustrating how to share a workspace."::: ### Remove somebody from a workspace
The person who created the workspace is the owner. Within **Workspace settings**
2. Select the **X** icon next to the **Role** and email address that you want to remove. ## Next steps > [!div class="nextstepaction"]
-> [Learn how to manage projects](create-manage-project.md)
+> [Learn how to manage projects](create-manage-project.md)
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
> [!NOTE] >
-> 1. Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service subscription key or a single-service subscription key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
-> 2. Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+> * Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service subscription key or a single-service subscription key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
> To get started, you'll need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource).
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account.
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
+
+ **Complete the Translator project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](managed-identity.md) for authentication, choose a **non-global** region.
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
+ 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
+
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. Select Standard S1 to try the service.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource has successfully deployed, select **Go to resource**.
## Custom domain name and subscription key
The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Share
> > * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**. > * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
->
+> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](managed-identity.md) for authentication.
-## Document Translation: HTTP requests
+## HTTP requests
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service. The translated documents will be listed in your target container.
### HTTP headers
The following headers are included with each Document Translator API request:
||--| |Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.| |Content-Type|**Required**: Specifies the content type of the payload. Accepted values are application/json or charset=UTF-8.|
-|Content-Length|**Required**: the length of the request body.|
### POST request body properties
The following headers are included with each Document Translator API request:
### Translate a specific document in a container
-* Ensure you have specified "storageType": "File"
-* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
-* Ensure you have specified the target filename as part of the target URL – though the SAS token is still for the container.
+* Specify `"storageType": "File"`
+* If you aren't using a [**system-assigned managed identity**](managed-identity.md) for authentication, make sure you've created source URL & SAS token for the specific blob/document (not for the container)
+* Ensure you've specified the target filename as part of the target URL, though the SAS token is still for the container.
* The sample request below shows a single document getting translated into two target languages ```json
The table below lists the limits for data that you send to Document Translation.
|Number of target languages in a batch| ≤ 10 | |Size of Translation memory file| ≤ 10 MB|
-Document Translation can not be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
+Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
## Troubleshooting
Document Translation can not be used to translate secured documents such as thos
||-|--| | 200 | OK | The request was successful. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, please ensure you're using the **Translator** single-service resource _not_ the **Cognitive Services** multi-service resource.
-| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource _not_ the **Cognitive Services** multi-service resource.
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. | ## Learn more
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/managed-identity.md
Previously updated : 02/22/2022 Last updated : 02/28/2022
> [!IMPORTANT] >
-> Managed identities for Azure resources are currently unavailable for Document Translation service in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
+> * Currently, Document Translation doesn't support managed identity in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
+>
+> * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
-* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
-* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/overview.md).
+* To grant access to an Azure resource, you'll assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/overview.md).
* There's no added cost to use managed identities in Azure. > [!TIP]
-> Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens. Managed identities are a safer way to grant access to data without having credentials in your code.
+>
+> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail.
+>
+> * Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests.
-## Prerequisites
+## Prerequisites
To get started, you'll need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/)ΓÇöif you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
To get started, you'll need:
* A brief understanding of [**Azure role-based access control (Azure RBAC)**](../../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You'll create containers to store and organize your blob data within your storage account.
* **If your storage account is behind a firewall, you must enable the following configuration**: </br>
To get started, you'll need:
## Managed identity assignments
-There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports system-assigned managed identities:
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports **system-assigned managed identity**:
-* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
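If you prefer the Azure CLI to the portal, a minimal sketch of enabling the system-assigned identity might look like the following (the resource and group names are placeholders):

```azurecli
az cognitiveservices account identity assign \
    --name <your-translator-resource> \
    --resource-group <your-resource-group>
```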
In the following steps, we'll enable a system-assigned managed identity and gran
## Grant access to your storage account
-You need to grant Translator access to your storage account before it can create, read, or delete blobs. Now that you enabled Translator with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Translator access to Azure storage.
+You need to grant Translator access to your storage account before it can create, read, or delete blobs. Once you've enabled Translator with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC) to give Translator access to your Azure storage containers.
The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data.
The **Storage Blob Data Contributor** role gives Translator (represented by the
:::image type="content" source="../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
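If you script this step, a minimal Azure CLI sketch of the same role assignment (the principal ID and scope are placeholders) might be:

```azurecli
az role assignment create \
    --assignee <translator-managed-identity-principal-id> \
    --role "Storage Blob Data Contributor" \
    --scope <storage-account-resource-id>
```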
- Great! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Translator specific access rights to your storage resource without having to manage credentials such as SAS tokens.
+## HTTP requests
+
+* A batch Document Translation request is submitted to your Translator service endpoint via a POST request.
+
+* With managed identity and Azure RBAC, you'll no longer need to include SAS URLs.
+
+* If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
+
+* The translated documents will appear in your target container.
+
+### Headers
+
+The following headers are included with each Document Translation API request:
+
+|HTTP header|Description|
+||--|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
+|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are **application/json** or **application/json; charset=UTF-8**.|
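+
+For example, the start of a batch request using these headers might look like the following sketch (the resource name and key are placeholders):
+
+```
+POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches
+Ocp-Apim-Subscription-Key: <YOUR-SUBSCRIPTION-KEY>
+Content-Type: application/json
+```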
+
+### POST request body
+
+* The request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`
+
+* The request body is a JSON object named `inputs`.
+* The `inputs` object contains both `sourceUrl` and `targetUrl` container addresses for your source and target language pairs.
+* The optional `prefix` and `suffix` fields are used to filter documents in the container, including folders.
+* A value for the `glossaries` field (optional) is applied when the document is being translated.
+* The `targetUrl` for each target language must be unique.
+
+>[!NOTE]
+> If a file with the same name already exists in the destination, the job will fail.
+
+<!-- markdownlint-disable MD024 -->
+### Translate all documents in a container
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr"
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate a specific document in a container
+
+* **Required**: "storageType": "File"
+* The sample request below shows a single document getting translated into two target languages.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-es/Target-Spanish.docx"
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-de/Target-German.docx",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate documents using a custom glossary
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://myblob.blob.core.windows.net/source",
+ "filter": {
+ "prefix": "myfolder/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://myblob.blob.core.windows.net/target",
+ "language": "es",
+ "glossaries": [
+ {
+ "glossaryUrl": "https:// myblob.blob.core.windows.net/glossary/en-es.xlf",
+ "format": "xliff"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+ Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and Azure RBAC, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
## Next steps
+**Quickstart**
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](get-started-with-document-translation.md)
+
+**Tutorial**
+ > [!div class="nextstepaction"] > [Access Azure Storage from a web app using managed identities](/azure/app-service/scenario-secure-app-access-storage?toc=/azure/cognitive-services/translator/toc.json&bc=/azure/cognitive-services/translator/breadcrumb/toc.json)
cognitive-services Translator How To Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-how-to-signup.md
Previously updated : 02/16/2021 Last updated : 02/24/2022 # Create a Translator resource
The Translator service can be accessed through two different resource types:
1. **Subscription**. Select one of your available Azure subscriptions.
-1. **Resource Group**. The Azure resource group that you choose serve as a virtual container for your new resource. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
-1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. Translator is a non-regional service; there is no dependency on a specific Azure region. *See* [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).
+1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with managed identity authentication, choose a non-global region.
1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
-> [!NOTE]
-> If you are using a Translator feature that requires a custom domain endpoint, the value that you enter in the Name field will be the custom domain name parameter for the endpoint.
+ > [!NOTE]
+ > If you are using a Translator feature that requires a custom domain endpoint, such as Document Translation, the value that you enter in the Name field will be the custom domain name parameter for the endpoint.
-5. **Pricing tier**. Select a [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/translator) that meets your needs:
+1. **Pricing tier**. Select a [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/translator) that meets your needs:
* Each subscription has a free tier. * The free tier has the same features and functionalities as paid plans and doesn't expire.
- * Only have one free subscription per account is allowed.</li></ul>
+ * Only one free subscription per account is allowed.
+ * Document Translation isn't supported in the free tier. Select Standard S1 to try that feature.
-1. If you have created a multi-service resource, you will need to confirm additional usage details via the check boxes.
+1. If you've created a multi-service resource, you'll need to confirm additional usage details via the check boxes.
1. Select **Review + Create**.
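As an alternative to the portal steps above, a minimal Azure CLI sketch for creating a Translator resource (the names are placeholders, and the S1 tier is just one option) could be:

```azurecli
az cognitiveservices account create \
    --name <your-translator-resource> \
    --resource-group <your-resource-group> \
    --kind TextTranslation \
    --sku S1 \
    --location global \
    --yes
```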
All Cognitive Services API requests require an endpoint URL and a read-only key
* **Authentication keys**. Your key is a unique string that is passed on every request to the Translation service. You can pass your key through a query-string parameter or by specifying it in the HTTP request header.
-* **Endpoint URL**. Use the Global endpoint in your API request unless you need a specific Azure region. *See* [Base URLs](reference/v3-0-reference.md#base-urls). The Global endpoint URL is `api.cognitive.microsofttranslator.com`.
+* **Endpoint URL**. Use the Global endpoint in your API request unless you need a specific Azure region or custom endpoint. *See* [Base URLs](reference/v3-0-reference.md#base-urls). The Global endpoint URL is `api.cognitive.microsofttranslator.com`.
## Get your authentication keys and endpoint
In our quickstart, you'll learn how to use the Translator service with REST APIs
* [Microsoft Translator code samples](https://github.com/MicrosoftTranslator). Multi-language Translator code samples are available on GitHub. * [Microsoft Translator Support Forum](https://www.aka.ms/TranslatorForum)
-* [Get Started with Azure (3-minute video)](https://azure.microsoft.com/get-started/?b=16.24)
+* [Get Started with Azure (3-minute video)](https://azure.microsoft.com/get-started/?b=16.24)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
See the [application development lifecycle](../overview.md#project-development-l
## Deploy your model
-1. Go to your project in [Language Studio](https://aka.ms/custom-classification)
+After your model is [trained](train-model.md), you can deploy it. Deploying your model lets you start using it to classify text. You can deploy your model using the [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob) or Language Studio. To use Language Studio, see the steps below:
-2. Select **Deploy model** from the left side menu.
-3. Select the model you want to deploy, then select **Deploy model**. If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
+If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
> [!TIP]
-> You can test your model in Language Studio by sending samples of text for it to classify.
-> 1. Select **Test model** from the menu on the left side of your project in Language Studio.
-> 2. Select the model you want to test.
-> 3. Add your text to the textbox, you can also upload a `.txt` file.
-> 4. Click on **Run the test**.
-> 5. In the **Result** tab, you can see the predicted classes for your text. You can also view the JSON response under the **JSON** tab.
+> You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending samples of text for it to classify.
## Send a text classification request to your model
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/quickstart.md
Previously updated : 01/25/2022 Last updated : 02/28/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/tutorials/cognitive-search.md
Previously updated : 02/02/2022 Last updated : 02/28/2022
In this tutorial, you will learn how to:
* An [Azure function app](../../../../azure-functions/functions-create-function-app-portal.md)
-* Download this [sample data](). <!-- TODO: add link to sample data here (Movies)-->
+* Download this [sample data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/language-service/Custom%20text%20classification/Custom%20multi%20classification%20-%20movies%20summary.zip).
## Create a custom classification project through Language studio
-1. Log in to [Language Studio](https://aka.ms/languageStudio). A window will appear to let you select your subscription and Language resource. Select the resource you created in the above step.
-
-2. Under the **Classify text** section of Language Studio, select **custom text classification** from the available services, and select it.
-
-3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
-
-4. If you've created your resource using the steps in [Create a project](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#roles-for-your-storage-account) before connecting it to your resource.
-
-5. Select your project type. For this tutorial, we'll create a multi-label classification project where you can assign multiple classes to the same file. Then click **Next**. See [project types](../glossary.md#project-types) in the FAQ for more information.
-
-6. Enter project information, including a name, description, and the language of the files in your project. You won't be able to change the name of your project later.
- >[!TIP]
- > Your dataset doesn't have to be entirely in the same language. You can have multiple files, each with different supported languages. If your dataset contains files of different languages or if you expect different languages during runtime, select **enable multi-lingual dataset** when you enter the basic information for your project.
-
-7. Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data.
-
-8. Review the data you entered and select **Create Project**.
## Train your model
In this tutorial, you will learn how to:
## Deploy your model
-1. Select **Deploy model** from the left side menu.
+To deploy your model, go to your project in [Language Studio](https://aka.ms/custom-classification). You can also use the [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob).
+
-2. Select the model you want to deploy and from the top menu click on **Deploy model**. If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
+If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
## Use CogSvc language utilities tool for Cognitive search integration
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/bring-your-own-storage.md
[!INCLUDE [Private Preview Disclaimer](../includes/private-preview-include-section.md)]
-In many applications end-users may want to store their Call Recording files long-term. Some of the common scenarios are compliance, quality assurance, assessment, post call analysis, training, and coaching. Now with the BYOS (bring your own storage) being available, end-users will have an option to store their files long term and manage the files in a way they need. The end user will be responsible for legal regulations about storing the data. BYOS simplifies downloading of the files from Azure Communication Services (ACS) and minimizes the number of support request if customer was unable to download recording in 48 hours. Data will be transferred securely from Microsoft Azure blob storage to a customer Azure blob storage.
+In many applications, end-users may want to store their Call Recording files long-term. Some of the common scenarios are compliance, quality assurance, assessment, post-call analysis, training, and coaching. Now with the BYOS (bring your own storage) being available, end-users will have an option to store their files long-term and manage the files in a way they need. The end-user will be responsible for legal regulations about storing the data. BYOS simplifies downloading of the files from Azure Communication Services (ACS) and minimizes the number of support requests if the customer was unable to download the recording in 48 hours. Data will be transferred securely from Microsoft Azure blob storage to a customer Azure blob storage.
Here are a few examples: - Contact Center Recording - Compliance Recording Scenario - Healthcare Virtual Visits Scenario - Conference/meeting recordings and so on
-BYOS can be easily integrated into any application regardless of the programming language. When creating a call recording resource in Azure Portal, enable the BYOS option and provide the sas-url to the storage. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
+BYOS can be easily integrated into any application regardless of the programming language. When creating a call recording resource in Azure Portal, enable the BYOS option and provide the URL to the storage. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
+
+![Bring your own storage concept diagram](../media/byos-diagramm.png)
+
+1. Contoso enables MI (managed identities) on an Azure Storage account.
2. Contoso creates an ACS (Azure Communication Services) resource.
+
+![Bring your own storage resource page](../media/byos-link-storage.png)
+
+3. Contoso enables BYOS on the ACS resource and specifies the URL to link with the storage.
4. After the resource has been created, Contoso will see the linked storage and will be able to change its settings later.
+
+![Bring your own storage add storage page](../media/byos-add-storage.png)
+
+5. If Contoso has built an application with Call Recording, they can record a meeting. Once the recording file is available, Contoso will receive an event from ACS that a file is copied over to their storage.
6. After the notification has been received, Contoso will see the file located in the storage they have specified.
+7. Contoso has successfully linked their storage with ACS!
++
+![Bring your own storage success page](../media/byos-storage-created.png)
## Feature highlights
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f
- `chatThreadPropertiesUpdated` - when chat thread properties are updated; currently, only updating the topic for the thread is supported. - `participantsAdded` - when a user is added as a chat thread participant. - `participantsRemoved` - when an existing participant is removed from the chat thread.
+ - `realTimeNotificationConnected` - when real-time notification is connected.
+ - `realTimeNotificationDisconnected` - when real-time notification is disconnected.
## Push notifications To send push notifications for messages missed by your users while they were away, Communication Services provides two different ways to integrate:
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
To use a container registry, you first define the required fields to the [config
```json { ...
- "registries": {
+ "registries": [{
"server": "docker.io", "username": "my-registry-user-name", "passwordSecretRef": "my-password-secretref-name"
- }
+ }]
} ```
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
description: Learn how to configure customer-managed keys for your Azure Cosmos
Previously updated : 02/18/2022- Last updated : 02/25/2022+ ms.devlang: azurecli
Because a system-assigned managed identity can only be retrieved after the creat
--assign-identity <identity-resource-id> --default-identity "UserAssignedIdentity=<identity-resource-id>" ```
+
+## Use CMK with continuous backup
+
+You can create a continuous backup account by using the Azure CLI or an Azure Resource Manager template.
+
+Currently, only user-assigned managed identity is supported for creating continuous backup accounts.
+
+### To create a continuous backup account by using the Azure CLI
+
+```azurecli
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+keyVaultKeyUri='https://<my-vault>.vault.azure.net/keys/<my-key>'
+
+az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --key-uri $keyVaultKeyUri \
+ --locations regionName=<Location> \
+ --assign-identity <identity-resource-id> \
+ --default-identity "UserAssignedIdentity=<identity-resource-id>" \
+ --backup-policy-type Continuous
+```
+
+### To create a continuous backup account by using an Azure Resource Manager template
+When you create a new Azure Cosmos account through an Azure Resource Manager template:
+
+- Pass the URI of the Azure Key Vault key that you copied earlier under the **keyVaultKeyUri** property in the **properties** object.
+- Use **2021-11-15** or later as the API version.
+
+> [!IMPORTANT]
+> You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys as shown in the preceding example.
+
+```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "backupPolicy": {"type": "Continuous"},
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
+ // ...
+ "properties": {
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ // ...
+ }
+}
+```
+
## Key rotation Rotating the customer-managed key used by your Azure Cosmos account can be done in two ways.
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
List<PatchOperation> patchOperations = new List<PatchOperation>();
patchOperations.Add(PatchOperation.Add("/nonExistentParent/Child", "bar")); patchOperations.Add(PatchOperation.Remove("/cost")); patchOperations.Add(PatchOperation.Increment("/taskNum", 6));
-patchOperations.Add(patchOperation.Set("/existingPath/newproperty",value));
+patchOperations.Add(PatchOperation.Set("/existingPath/newproperty",value));
container.PatchItemAsync<item>( id: 5,
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4 description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4.-+ Previously updated : 02/03/2022- Last updated : 02/28/2022+ ms.devlang: java
Start with this list:
* Review the [performance tips](performance-tips-java-sdk-v4-sql.md) for Azure Cosmos DB Java SDK v4, and follow the suggested practices. * Read the rest of this article, if you didn't find a solution. Then file a [GitHub issue](https://github.com/Azure/azure-sdk-for-java/issues). If there is an option to add tags to your GitHub issue, add a *cosmos:v4-item* tag.
-### Retry Logic <a id="retry-logics"></a>
+## Capture the diagnostics
+
+Database, container, item, and query responses in the Java V4 SDK have a Diagnostics property. This property records all the information related to the single request, including if there were retries or any transient failures.
+
+The Diagnostics are returned as a string. The string changes with each version as it is improved to better troubleshoot different scenarios. Each version of the SDK can introduce breaking changes to the string's formatting, so don't parse the string.
+
+The following code sample shows how to read diagnostic logs using the Java V4 SDK:
+
+> [!IMPORTANT]
+> We recommend checking the minimum recommended version of the Java V4 SDK and ensuring that you are using this version or higher. You can check the recommended version [here](/azure/cosmos-db/sql/sql-api-sdk-java-v4#recommended-version).
+
+# [Sync](#tab/sync)
+
+#### Database Operations
+
+```Java
+CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName);
+CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
+logger.info("Create database diagnostics : {}", diagnostics);
+```
+
+#### Container Operations
+
+```Java
+CosmosContainerResponse containerResponse = database.createContainerIfNotExists(containerProperties,
+ throughputProperties);
+CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
+logger.info("Create container diagnostics : {}", diagnostics);
+```
+
+#### Item Operations
+
+```Java
+// Write Item
+CosmosItemResponse<Family> item = container.createItem(family, new PartitionKey(family.getLastName()),
+ new CosmosItemRequestOptions());
+
+CosmosDiagnostics diagnostics = item.getDiagnostics();
+logger.info("Create item diagnostics : {}", diagnostics);
+
+// Read Item
+CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+
+CosmosDiagnostics readDiagnostics = familyCosmosItemResponse.getDiagnostics();
+logger.info("Read item diagnostics : {}", readDiagnostics);
+```
+
+#### Query Operations
+
+```Java
+String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
+
+CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
+ Family.class);
+
+// Add handler to capture diagnostics
+filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
+ logger.info("Query Item diagnostics through handler : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+
+// Or capture diagnostics through iterableByPage() APIs.
+filteredFamilies.iterableByPage().forEach(familyFeedResponse -> {
+ logger.info("Query item diagnostics through iterableByPage : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+```
+
+# [Async](#tab/async)
+
+#### Database Operations
+
+```Java
+Mono<CosmosDatabaseResponse> databaseResponseMono = client.createDatabaseIfNotExists(databaseName);
+CosmosDatabaseResponse cosmosDatabaseResponse = databaseResponseMono.block();
+
+CosmosDiagnostics diagnostics = cosmosDatabaseResponse.getDiagnostics();
+logger.info("Create database diagnostics : {}", diagnostics);
+```
+
+#### Container Operations
+
+```Java
+Mono<CosmosContainerResponse> containerResponseMono = database.createContainerIfNotExists(containerProperties,
+ throughputProperties);
+CosmosContainerResponse cosmosContainerResponse = containerResponseMono.block();
+CosmosDiagnostics diagnostics = cosmosContainerResponse.getDiagnostics();
+logger.info("Create container diagnostics : {}", diagnostics);
+```
+
+#### Item Operations
+
+```Java
+// Write Item
+Mono<CosmosItemResponse<Family>> itemResponseMono = container.createItem(family,
+ new PartitionKey(family.getLastName()),
+ new CosmosItemRequestOptions());
+
+CosmosItemResponse<Family> itemResponse = itemResponseMono.block();
+CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
+logger.info("Create item diagnostics : {}", diagnostics);
+
+// Read Item
+Mono<CosmosItemResponse<Family>> readItemResponseMono = container.readItem(documentId,
+    new PartitionKey(documentLastName), Family.class);
+CosmosItemResponse<Family> familyCosmosItemResponse = readItemResponseMono.block();
+CosmosDiagnostics readDiagnostics = familyCosmosItemResponse.getDiagnostics();
+logger.info("Read item diagnostics : {}", readDiagnostics);
+```
+
+#### Query Operations
+
+```Java
+String sql = "SELECT * FROM c WHERE c.lastName = 'Witherspoon'";
+CosmosPagedFlux<Family> filteredFamilies = container.queryItems(sql, new CosmosQueryRequestOptions(),
+ Family.class);
+// Add handler to capture diagnostics
+filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
+ logger.info("Query Item diagnostics through handler : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+
+// Or capture diagnostics through byPage() APIs.
+filteredFamilies.byPage().toIterable().forEach(familyFeedResponse -> {
+ logger.info("Query item diagnostics through iterableByPage : {}",
+ familyFeedResponse.getCosmosDiagnostics());
+});
+```
++
+## Retry Logic <a id="retry-logics"></a>
On any IO failure, the Cosmos DB SDK will attempt to retry the failed operation if a retry is feasible. Having a retry in place for any failure is a good practice, but specifically handling/retrying write failures is a must. It's recommended to use the latest SDK, as the retry logic is continuously being improved.

1. Read and query IO failures are retried by the SDK without surfacing them to the end user.
2. Writes (Create, Upsert, Replace, Delete) are *not* idempotent, so the SDK cannot always blindly retry a failed write operation. Your application logic must handle the failure and retry.
3. [Troubleshooting SDK availability](troubleshoot-sdk-availability.md) explains retries for multi-region Cosmos DB accounts.
-### Retry design
+## Retry design
The application should be designed to retry on any exception unless it is a known issue where retrying will not help. For example, the application should retry on 408 request timeouts; the timeout is possibly transient, so a retry may succeed. The application should not retry on 400s, which typically indicate an issue with the request that must first be resolved; retrying a 400 will not fix the issue and will result in the same failure. The table below shows known failures and which ones to retry on; a minimal retry sketch follows.
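As a minimal sketch (assuming the Java V4 SDK types used earlier in this article; the single retry and the `family` item are illustrative), this pattern retries a write on a 408 but surfaces a 400:

```Java
try {
    container.createItem(family, new PartitionKey(family.getLastName()),
        new CosmosItemRequestOptions());
} catch (CosmosException e) {
    if (e.getStatusCode() == 408) {
        // A 408 request timeout is possibly transient; a retry may succeed.
        container.createItem(family, new PartitionKey(family.getLastName()),
            new CosmosItemRequestOptions());
    } else {
        // A 400 means the request itself is invalid; retrying returns the same failure.
        throw e;
    }
}
```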
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
You can opt out of getting your invoice by email by following the steps above an
Azure Government users use the same agreement types as other Azure users.
-Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+Azure Government customers can't request their invoice by email. They can only download it.
To download your invoice, follow the steps above at [Download invoices for an individual subscription](#download-invoices-for-an-individual-subscription).
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
You may want to share your invoice every month with your accounting team or send
Azure Government users use the same agreement types as other Azure users.
-Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+Azure Government customers can't request their invoice by email. They can only download it.
To download your invoice, follow the steps above at [Download your MOSP Azure subscription invoice](#download-your-mosp-azure-subscription-invoice).
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
The following properties are supported for file system under `storeSettings` set
| type | The type property under `storeSettings` must be set to **FileServerReadSettings**. | Yes | | ***Locate the files to copy:*** | | | | OPTION 1: static path<br> | Copy from the given folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
-| OPTION 2: server side filter<br>- fileFilter | File server side native filter, which provides better performance than OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or single character. Learn more about the syntax and notes from the **Remarks** under [this section](/dotnet/api/system.io.directory.getfiles#System_IO_Directory_GetFiles_System_String_System_String_System_IO_SearchOption_). | No |
+| OPTION 2: server side filter<br>- fileFilter | File server side native filter, which provides better performance than OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or single character. Learn more about the syntax and notes from the **Remarks** under [this section](/dotnet/api/system.io.directory.getfiles#system-io-directory-getfiles(system-string-system-string-system-io-searchoption)). | No |
| OPTION 3: client side filter<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. Such filter happens within the service, which enumerate the folders/files under the given path then apply the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: client side filter<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. Such filter happens within the service, which enumerates the files under the given path then apply the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside.<br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes | | OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, do not specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 01/10/2022 Last updated : 02/25/2022
data-factory Connector Quickbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md
+
+ Title: Transform data in Quickbase (Preview)
+
+description: Learn how to transform data in Quickbase (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 02/28/2022++
+# Transform data in Quickbase (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Quickbase (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+## Supported capabilities
+
+This Quickbase connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a Quickbase linked service using UI
+
+Use the following steps to create a Quickbase linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Quickbase (Preview) and select the Quickbase (Preview) connector.
+
+ :::image type="content" source="media/connector-quickbase/quickbase-connector.png" alt-text="Screenshot showing selecting Quickbase connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-quickbase/configure-quickbase-linked-service.png" alt-text="Screenshot of configuration for Quickbase linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Quickbase.
+
+## Linked service properties
+
+The following properties are supported for the Quickbase linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Quickbase**. |Yes |
+| url | The application URL of the Quickbase service. | Yes |
+| userToken | Specify a user token for the Quickbase. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "QuickbaseLinkedService",
+ "properties": {
+ "type": "Quickbase",
+ "typeProperties": {
+ "url": "<application url>",
+ "userToken": {
+ "type": "SecureString",
+ "value": "<user token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from Quickbase. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+### Source transformation
+
+The below table lists the properties supported by Quickbase source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Table | Data flow will fetch all the data from the table specified in the source options. | Yes when using inline mode | - | table |
+| Report | Data flow will fetch the specified report for the table specified in the source options.| No | - | report |
+
+#### Quickbase source script examples
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'quickbase',
+ format: 'rest',
+ table: 'Table',
+ report: 'Report') ~> Quickbasesource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Smartsheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md
+
+ Title: Transform data in Smartsheet (Preview)
+
+description: Learn how to transform data in Smartsheet (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 02/28/2022++
+# Transform data in Smartsheet (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Smartsheet (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+## Supported capabilities
+
+This Smartsheet connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a Smartsheet linked service using UI
+
+Use the following steps to create a Smartsheet linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Smartsheet (Preview) and select the Smartsheet (Preview) connector.
+
+ :::image type="content" source="media/connector-smartsheet/smartsheet-connector.png" alt-text="Screenshot showing selecting Smartsheet connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-smartsheet/configure-smartsheet-linked-service.png" alt-text="Screenshot of configuration for Smartsheet linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Smartsheet.
+
+## Linked service properties
+
+The following properties are supported for the Smartsheet linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Smartsheet**. |Yes |
+| apiToken | Specify an API token for the Smartsheet. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "SmartsheetLinkedService",
+ "properties": {
+ "type": "Smartsheet",
+ "typeProperties": {
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from Smartsheet. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+### Source transformation
+
+The below table lists the properties supported by Smartsheet source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Entity type | The type of the data asset in Smartsheet. | Yes when using inline mode | `sheets` `reports` | entityType |
+| Entity name | The name of a sheet or a report in Smartsheet. | Yes when using inline mode | String | entityId |
+
+#### Smartsheet source script examples
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'smartsheet',
+ format: 'rest',
+ entityId: 'Sheet1',
+ entityType: 'sheets') ~> SmartsheetSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
Previously updated : 02/23/2022 Last updated : 02/25/2022 # Transform data in TeamDesk (Preview) using Azure Data Factory or Synapse Analytics
The following properties are supported for the TeamDesk linked service:
|: |: |: | | type | The type property must be set to **TeamDesk**. |Yes | | url | The URL of your TeamDesk database. An example is `https://www.teamdesk.net/secure/db/xxxxx`. | Yes |
-| authenticationType | Type of authentication used to connect to the TeamDesk service. Allowed values are **Basic** and **Token**. Refer to corresponding sections below on more properties and examples respectively.|Yes |
+| authenticationType | Type of authentication used to connect to the TeamDesk service. Allowed values are **basic** and **token**. Refer to corresponding sections below on more properties and examples respectively.|Yes |
### Basic authentication
data-factory Connector Zendesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md
+
+ Title: Transform data in Zendesk (Preview)
+
+description: Learn how to transform data in Zendesk (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 02/28/2022++
+# Transform data in Zendesk (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Zendesk (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+## Supported capabilities
+
+This Zendesk connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a Zendesk linked service using UI
+
+Use the following steps to create a Zendesk linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Zendesk (Preview) and select the Zendesk (Preview) connector.
+
+ :::image type="content" source="media/connector-zendesk/zendesk-connector.png" alt-text="Screenshot showing selecting Zendesk connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-zendesk/configure-zendesk-linked-service.png" alt-text="Screenshot of configuration for Zendesk linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Zendesk.
+
+## Linked service properties
+
+The following properties are supported for the Zendesk linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Zendesk**. |Yes |
+| url | The base URL of your Zendesk service. | Yes |
+| authenticationType | Type of authentication used to connect to the Zendesk service. Allowed values are **basic** and **token**. Refer to corresponding sections below on more properties and examples respectively.|Yes |
+
+### Basic authentication
+
+Set the **authenticationType** property to **basic**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| userName | The user name used to log in to Zendesk. |Yes |
+| password | Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "ZendeskLinkedService",
+ "properties": {
+ "type": "Zendesk",
+ "typeProperties": {
+ "url": "<base url>",
+ "authenticationType": "basic",
+ "userName": "<user name>",
+ "password": {
+ "type": "SecureString",
+ "value": "<password>"
+ }
+ }
+ }
+}
+```
+
+### Token authentication
+
+Set the **authenticationType** property to **token**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| apiToken | Specify an API token for the Zendesk. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "ZendeskLinkedService",
+ "properties": {
+ "type": "Zendesk",
+ "typeProperties": {
+ "url": "<base url>",
+ "authenticationType": "token",
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from Zendesk. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+### Source transformation
+
+The below table lists the properties supported by Zendesk source. You can edit these properties in the **Source options** tab.
++
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Entity | The logical name of the entity in Zendesk. | Yes when using inline mode | `activities`<br/>`group_memberships`<br/>`groups`<br/>`organizations`<br/>`requests` <br/>`satisfaction_ratings`<br/>`sessions`<br/>`tags`<br/>`targets`<br/>`ticket_audits`<br/>`ticket_fields`<br/>`ticket_metrics`<br/>`tickets`<br/>`triggers`<br/>`users`<br/>`views` | entity |
+
+#### Zendesk source script examples
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'zendesk',
+ format: 'rest',
+ entity: 'tickets') ~> ZendeskSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 01/10/2022 Last updated : 02/25/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Snowflake](connector-snowflake.md) | | ✓/✓ | | [SQL Server](connector-sql-server.md) | | ✓/✓ | | [REST](connector-rest.md#mapping-data-flow-properties) | | ✓/✓ |
+| [TeamDesk (Preview)](connector-teamdesk.md#mapping-data-flow-properties) | | -/✓ |
Settings specific to these connectors are located on the **Source options** tab. Information and data flow script examples on these settings are located in the connector documentation.
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
Previously updated : 11/23/2021 Last updated : 03/01/2022 # Source control in Azure Data Factory
For more info about connecting Azure Repos to your organization's Active Directo
Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, versioning. A single GitHub account can have multiple repositories, but a GitHub repository can be associated with only one data factory. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources.
-The GitHub integration with Data Factory supports both public GitHub (that is, [https://github.com](https://github.com)) and GitHub Enterprise. You can use both public and private GitHub repositories with Data Factory as long you have read and write permission to the repository in GitHub.
+The GitHub integration with Data Factory supports both public GitHub (that is, [https://github.com](https://github.com)) and GitHub Enterprise. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. ADF's GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases).
-To configure a GitHub repo, you must have administrator permissions for the Azure subscription that you're using.
+> [!NOTE]
+> If you are using Microsoft Edge, GitHub Enterprise versions earlier than 2.1.4 do not work with it. GitHub officially supports versions 3.0 and later, all of which should work with ADF. As GitHub changes its minimum supported version, the versions that ADF supports will also change.
### GitHub settings
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
+
+ Title: Transform data by using the Script activity
+
+description: Explains how to use the Script Activity to transform data in an Azure Data Factory or Synapse Analytics pipeline.
++++++ Last updated : 02/28/2022++
+# Transform data by using the Script activity in Azure Data Factory or Synapse Analytics
++
+You use data transformation activities in a Data Factory or Synapse [pipeline](concepts-pipelines-activities.md) to transform and process raw data into predictions and insights. The Script activity is one of the transformation activities that pipelines support. This article builds on the [transform data article](transform-data.md), which presents a general overview of data transformation and the supported transformation activities.
+
+Using the Script activity, you can execute common operations with Data Manipulation Language (DML) and Data Definition Language (DDL). DML statements like SELECT, UPDATE, and INSERT let users retrieve, store, modify, delete, insert, and update data in the database. DDL statements like CREATE, ALTER, and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users.
+
+You can use the Script activity to invoke a SQL script in one of the following data stores in your enterprise or on an Azure virtual machine (VM):
+
+- Azure SQL Database
+- Azure Synapse Analytics
+- SQL Server Database. If you are using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details.
+- Oracle
+- Snowflake
+
+The script may contain either a single SQL statement or multiple SQL statements that run sequentially. You can use the Script activity for the following purposes:
+
+- Truncate a table or view in preparation for inserting data.
+- Create, alter, and drop database objects such as tables and views.
+- Re-create fact and dimension tables before loading data into them.
+- Run stored procedures. If the SQL statement invokes a stored procedure that returns results from a temporary table, use the WITH RESULT SETS option to define metadata for the result set.
+- Save the rowset returned from a query as activity output for downstream consumption.
+
+## Syntax details
+
+Here is the JSON format for defining a Script activity:
+
+```json
+{
+ "name": "<activity name>",
+ "type": "Script",
+ "linkedServiceName": {
+ "referenceName": "<name>",
+ "type": "LinkedServiceReference"
+ },
+ "typeProperties": {
+ "scripts" : [
+ {
+ "text": "<Script Block>",
+ "type": "<Query> or <NonQuery>",
+ "parameters":[
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "type": "<type>",
+ "direction": "<Input> or <Output> or <InputOutput>",
+ "size": 256
+ },
+ ...
+ ]
+ },
+ ...
+ ],
+ "scriptReference":{
+ "linkedServiceName":{
+ "referenceName": "<name>",
+ "type": "<LinkedServiceReference>"
+ },
+ "path": "<file path>",
+ "parameters":[
+ {
+ "name": "<name>",
+ "value": "<value>",
+ "type": "<type>",
+ "direction": "<Input> or <Output> or <InputOutput> or <ReturnValue>",
+ "size": 256
+ },
+ ...
+ ]
+ },
+ "logSettings": {
+ "logDestination": "<ActivityOutput> or <ExternalStore>",
+ "logLocationSettings":{
+ "linkedServiceName":{
+ "referenceName": "<name>",
+ "type": "<LinkedServiceReference>"
+ },
+ "path": "<folder path>"
+ }
+ }
+ }
+}
+```
+
+The following table describes these JSON properties:
++
+|Property name |Description |Required |
+||||
+|name |The name of the activity. |Yes |
+|type |The type of the activity, set to "Script". |Yes |
+|typeProperties |Specify properties to configure the Script Activity. |Yes |
+|linkedServiceName |The target database the script runs on. It should be a reference to a linked service. |Yes |
+|scripts |An array of objects to represent the script. |No |
+|scripts.text |The plain text of a block of queries. |No |
+|scripts.type |The type of the block of queries. It can be Query or NonQuery. Default: Query. |No |
+|scripts.parameter |The array of parameters of the script. |No |
+|scripts.parameter.name |The name of the parameter. |No |
+|scripts.parameter.value |The value of the parameter. |No |
+|scripts.parameter.type |The data type of the parameter. The type is a logical type and follows the type mapping of each connector. |No |
+|scripts.parameter.direction |The direction of the parameter. It can be Input, Output, or InputOutput. The value is ignored if the direction is Output. The ReturnValue type is not supported; set the return value of the stored procedure to an output parameter to retrieve it. |No |
+|scripts.parameter.size |The max size of the parameter. Only applies to Output/InputOutput direction parameter of type string/byte[]. |No |
+|scriptReference |The reference to a remotely stored script file. |No |
+|scriptReference.linkedServiceName |The linked service of the script location. |No |
+|scriptReference.path |The file path to the script file. Only a single file is supported. |No |
+|scriptReference.parameter |The array of parameters of the script. |No |
+|scriptReference.parameter.name |The name of the parameter. |No |
+|scriptReference.parameter.value |The value of the parameter. |No |
+|scriptReference.parameter.type |The data type of the parameter. The type is a logical type and follows the type mapping of each connector. |No |
+|scriptReference.parameter.direction |The direction of the parameter. It can be Input, Output, or InputOutput. The value is ignored if the direction is Output. The ReturnValue type isn't supported; to retrieve the return value of a stored procedure, assign it to an output parameter. |No |
+|scriptReference.parameter.size |The max size of the parameter. Only applies to parameter types that can have variable size. |No |
+|logSettings |The settings to store the output logs. If not specified, script log is disabled. |No |
+|logSettings.logDestination |The destination of log output. It can be ActivityOutput or ExternalStore. Default: ActivityOutput. |No |
+|logSettings.logLocationSettings |The settings of the target location if logDestination is ExternalStore. |No |
+|logSettings.logLocationSettings.linkedServiceName |The linked service of the target location. Only Azure Blob storage is supported. |No |
+|logSettings.logLocationSettings.path |The folder path under which logs will be stored. |No |
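+
+Putting these properties together, the following is a minimal sketch of a Script activity that runs a parameterized query; the linked service, table, and parameter names (`AzureSqlDatabaseLinkedService`, `dbo.Orders`, `region`) are hypothetical:
+
+```json
+{
+    "name": "LookupWestOrders",
+    "type": "Script",
+    "linkedServiceName": {
+        "referenceName": "AzureSqlDatabaseLinkedService",
+        "type": "LinkedServiceReference"
+    },
+    "typeProperties": {
+        "scripts": [
+            {
+                "type": "Query",
+                "text": "SELECT OrderId, Amount FROM dbo.Orders WHERE Region = @region;",
+                "parameters": [
+                    {
+                        "name": "region",
+                        "value": "West",
+                        "type": "String",
+                        "direction": "Input"
+                    }
+                ]
+            }
+        ]
+    }
+}
+```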
+
+## Activity output
+
+Sample output:
+```json
+{
+ΓÇ» ΓÇ» "resultSetCount": 2,
+ΓÇ» ΓÇ» "resultSets": [
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "rowCount": 10,
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "rows":[
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "<columnName1>": "<value1>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "<columnName2>": "<value2>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ...
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ]
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ...
+ΓÇ» ΓÇ» ],
+ΓÇ» ΓÇ» "recordsAffected": 123,
+ΓÇ» ΓÇ» "outputParameters":{
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "<parameterName1>": "<value1>",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "<parameterName2>": "<value2>"
+ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» "outputLogs": "<logs>",
+ΓÇ» ΓÇ» "outputLogsLocation": "<folder path>",
+ΓÇ» ΓÇ» "outputTruncated": true,
+ ...
+}
+```
+
+|Property name |Description |Condition |
+||||
+|resultSetCount |The count of result sets returned by the script. |Always |
+|resultSets |The array which contains all the result sets. |Always |
+|resultSets.rowCount |Total rows in the result set. |Always |
+|resultSets.rows |The array of rows in the result set. |Always |
+|recordsAffected |The count of rows affected by the script. |If scriptType is NonQuery. |
+|outputParameters |The output parameters of the script. |If parameter type is Output or InputOutput. |
+|outputLogs |The logs written by the script, for example, PRINT statements. |If the connector supports log statements, enableScriptLogs is true, and logLocationSettings is not provided. |
+|outputLogsLocation |The full path of the log file. |If enableScriptLogs is true and logLocationSettings is provided. |
+|outputTruncated |Indicator of whether the output exceeds the limits and gets truncated. |If output exceeds the limits. |
+
+> [!NOTE]
+> - The output is collected every time a script block is executed. The final output is the merged result of all script block outputs. An output parameter with the same name in different script blocks will be overwritten.
+> - Because the output has size and row limits, it's truncated in the following order: logs -> parameters -> rows. This applies to a single script block, which means the output rows of the next script block won't evict previous logs.
+> - Any error caused by logging won't fail the activity.
+> - To consume the activity output resultSets in a downstream activity, see the [Lookup activity result documentation](control-flow-lookup-activity.md#use-the-lookup-activity-result) and the sketch after this note.
+> - Use outputLogs when you're using PRINT statements for logging purposes. If a query returns result sets, they're available in the activity output and are limited to 5,000 rows and a 2-MB size limit.
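+
+For example, assuming a Script activity named `Script1` whose first result set contains a `CustomerId` column, a downstream Set Variable activity could reference the value as in this sketch:
+
+```json
+{
+    "name": "SetCustomerId",
+    "type": "SetVariable",
+    "dependsOn": [
+        {
+            "activity": "Script1",
+            "dependencyConditions": [ "Succeeded" ]
+        }
+    ],
+    "typeProperties": {
+        "variableName": "customerId",
+        "value": {
+            "value": "@string(activity('Script1').output.resultSets[0].rows[0].CustomerId)",
+            "type": "Expression"
+        }
+    }
+}
+```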
+
+## Configure the Script activity using UI
+
+### Inline script
+
+Inline scripts integrate well with Pipeline CI/CD since the script is stored as part of the pipeline metadata.
+
+### Script file reference
+
+If you have a custom process to generate scripts and would like to reference the generated script file in the pipeline rather than use an inline script, you can specify the file path in a storage account, as in the sketch below.
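+
+For instance, a minimal sketch of the `scriptReference` property, assuming a hypothetical blob storage linked service and script file path:
+
+```json
+"scriptReference": {
+    "linkedServiceName": {
+        "referenceName": "AzureBlobStorageLinkedService",
+        "type": "LinkedServiceReference"
+    },
+    "path": "scripts/load-orders.sql"
+}
+```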
+
+### Logging
+
+Logging options:
+
+- _Disable_ - No execution output is logged.
+- _Activity output_ - The script execution output is appended to the activity output. It can be consumed by downstream activities. The output size is limited to 2 MB.
+- _External storage_ - Persists the output to storage. Use this option if the output size is greater than 2 MB or you would like to explicitly persist the output on your storage account, as shown in the sketch after this list.
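+
+A minimal sketch of the _External storage_ option, assuming a hypothetical blob storage linked service and folder path:
+
+```json
+"logSettings": {
+    "logDestination": "ExternalStore",
+    "logLocationSettings": {
+        "linkedServiceName": {
+            "referenceName": "AzureBlobStorageLinkedService",
+            "type": "LinkedServiceReference"
+        },
+        "path": "script-logs"
+    }
+}
+```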
+
+> [!NOTE]
+> **Billing** - The Script activity is [billed](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) as a **Pipeline activity**.
+
+## Next steps
+See the following articles that explain how to transform data in other ways:
+
+* [U-SQL activity](transform-data-using-data-lake-analytics.md)
+* [Hive activity](transform-data-using-hadoop-hive.md)
+* [Pig activity](transform-data-using-hadoop-pig.md)
+* [MapReduce activity](transform-data-using-hadoop-map-reduce.md)
+* [Hadoop Streaming activity](transform-data-using-hadoop-streaming.md)
+* [Spark activity](transform-data-using-spark.md)
+* [.NET custom activity](transform-data-using-dotnet-custom-activity.md)
+* [Stored procedure activity](transform-data-using-stored-procedure.md)
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 05/12/2021 Last updated : 02/25/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
Title: Adaptive application controls in Microsoft Defender for Cloud
-description: This document helps you use adaptive application control in Microsoft Defender for Cloud to create an allow list of applications running for Azure machines.
+description: This document helps you use adaptive application control in Microsoft Defender for Cloud to create an allowlist of applications running for Azure machines.
Last updated 11/09/2021
Learn about the benefits of Microsoft Defender for Cloud's adaptive application
## What are adaptive application controls?
-Adaptive application controls are an intelligent and automated solution for defining allow lists of known-safe applications for your machines.
+Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
-Often, organizations have collections of machines that routinely run the same processes. Microsoft Defender for Cloud uses machine learning to analyze the applications running on your machines and create a list of the known-safe software. Allow lists are based on your specific Azure workloads, and you can further customize the recommendations using the instructions below.
+Often, organizations have collections of machines that routinely run the same processes. Microsoft Defender for Cloud uses machine learning to analyze the applications running on your machines and create a list of the known-safe software. Allowlists are based on your specific Azure workloads, and you can further customize the recommendations using the instructions below.
When you've enabled and configured adaptive application controls, you'll get security alerts if any application runs other than the ones you've defined as safe.
Select the recommendation, or open the adaptive application controls page to vie
The **Adaptive application controls** page opens with your VMs grouped into the following tabs:
- - **Configured** - Groups of machines that already have a defined allow list of applications. For each group, the configured tab shows:
+ - **Configured** - Groups of machines that already have a defined allowlist of applications. For each group, the configured tab shows:
- the number of machines in the group - recent alerts
- - **Recommended** - Groups of machines that consistently run the same applications, and don't have an allow list configured. We recommend that you enable adaptive application controls for these groups.
+ - **Recommended** - Groups of machines that consistently run the same applications, and don't have an allowlist configured. We recommend that you enable adaptive application controls for these groups.
> [!TIP] > If you see a group name with the prefix "REVIEWGROUP", it contains machines with a partially consistent list of applications. Microsoft Defender for Cloud can't see a pattern but recommends reviewing this group to see whether _you_ can manually define some adaptive application controls rules as described in [Editing a group's adaptive application controls rule](#edit-a-groups-adaptive-application-controls-rule). > > You can also move machines from this group to other groups as described in [Move a machine from one group to another](#move-a-machine-from-one-group-to-another).
- - **No recommendation** - Machines without a defined allow list of applications, and which don't support the feature. Your machine might be in this tab for the following reasons:
+ - **No recommendation** - Machines without a defined allowlist of applications, and which don't support the feature. Your machine might be in this tab for the following reasons:
- It's missing a Log Analytics agent - The Log Analytics agent isn't sending events - It's a Windows machine with a pre-existing [AppLocker](/windows/security/threat-protection/windows-defender-application-control/applocker/applocker-overview) policy enabled by either a GPO or a local security policy
Select the recommendation, or open the adaptive application controls page to vie
> [!TIP] > Defender for Cloud needs at least two weeks of data to define the unique recommendations per group of machines. Machines that have recently been created, or which belong to subscriptions that were only recently protected by Microsoft Defender for servers, will appear under the **No recommendation** tab.
-1. Open the **Recommended** tab. The groups of machines with recommended allow lists appears.
+1. Open the **Recommended** tab. The groups of machines with recommended allowlists appear.
![Recommended tab.](./media/adaptive-application/adaptive-application-recommended-tab.png)
Select the recommendation, or open the adaptive application controls page to vie
1. **Recommended applications** - Review this list of applications that are common to the machines within this group, and recommended to be allowed to run.
- 1. **More applications** - Review this list of applications that are either seen less frequently on the machines within this group, or are known to be exploitable. A warning icon indicates that a specific application could be used by an attacker to bypass an application allow list. We recommend that you carefully review these applications.
+ 1. **More applications** - Review this list of applications that are either seen less frequently on the machines within this group, or are known to be exploitable. A warning icon indicates that a specific application could be used by an attacker to bypass an application allowlist. We recommend that you carefully review these applications.
> [!TIP] > Both application lists include the option to restrict a specific application to certain users. Adopt the principle of least privilege whenever possible.
Select the recommendation, or open the adaptive application controls page to vie
## Edit a group's adaptive application controls rule
-You might decide to edit the allow list for a group of machines because of known changes in your organization.
+You might decide to edit the allowlist for a group of machines because of known changes in your organization.
To edit the rules for a group of machines:
No enforcement options are currently available. Adaptive application controls ar
### Why do I see a Qualys app in my recommended applications? [Microsoft Defender for servers](defender-for-servers-introduction.md) includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. For details of this scanner and instructions for how to deploy it, see [Defender for Cloud's integrated Qualys vulnerability assessment solution](deploy-vulnerability-assessment-vm.md).
-To ensure no alerts are generated when Defender for Cloud deploys the scanner, the adaptive application controls recommended allow list includes the scanner for all machines.
+To ensure no alerts are generated when Defender for Cloud deploys the scanner, the adaptive application controls recommended allowlist includes the scanner for all machines.
## Next steps
-On this page, you learned how to use adaptive application control in Microsoft Defender for Cloud to define allow lists of applications running on your Azure and non-Azure machines. To learn more about some other cloud workload protection features, see:
+On this page, you learned how to use adaptive application control in Microsoft Defender for Cloud to define allowlists of applications running on your Azure and non-Azure machines. To learn more about some other cloud workload protection features, see:
* [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md) * [Securing your Azure Kubernetes clusters](defender-for-kubernetes-introduction.md)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 02/28/2022 Last updated : 03/01/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](concept-defender-for-cosmos.md)
-| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
|--|--|:-:|--|
-| **PREVIEW - Access from a Tor exit node** | This Azure Cosmos DB account was successfully accessed from an IP address known to be an active exit node of Tor, an anonymizing proxy. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. | Initial Access | High/Medium |
-| **PREVIEW - Access from a suspicious IP** | This Azure Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. | Initial Access | Medium |
-| **PREVIEW - Access from an unusual location** | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low |
-| **PREVIEW - Unusual volume of data extracted** | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
-| **PREVIEW - Extraction of Azure Cosmos DB accounts keys via a potentially malicious script** | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
-| **PREVIEW - SQL injection: potential data exfiltration** | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
-| **PREVIEW - SQL injection: fuzzing attempt** | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
+| **PREVIEW - Access from a Tor exit node** <br> (CosmosDB_TorAnomaly) | This Azure Cosmos DB account was successfully accessed from an IP address known to be an active exit node of Tor, an anonymizing proxy. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. | Initial Access | High/Medium |
+| **PREVIEW - Access from a suspicious IP**<br>(CosmosDB_SuspiciousIp) | This Azure Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. | Initial Access | Medium |
+| **PREVIEW - Access from an unusual location**<br>(CosmosDB_GeoAnomaly) | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low |
+| **PREVIEW - Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
+| **PREVIEW - Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
+| **PREVIEW - SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
+| **PREVIEW - SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
| | | | |
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](other-threat-protections.md#network-layer)
-| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
|-||:--:|-| | **Network communication with a malicious machine detected**<br>(Network_CommunicationWithC2) | Network traffic analysis indicates that your machine (IP %{Victim IP}) has communicated with what is possibly a Command and Control center. When the compromised resource is a load balancer or an application gateway, the suspected activity might indicate that one or more of the resources in the backend pool (of the load balancer or application gateway) has communicated with what is possibly a Command and Control center. | Command and Control | Medium | | **Possible compromised machine detected**<br>(Network_ResourceIpIndicatedAsMalicious) | Threat intelligence indicates that your machine (at IP %{Machine IP}) may have been compromised by a malware of type Conficker. Conficker was a computer worm that targets the Microsoft Windows operating system and was first detected in November 2008. Conficker infected millions of computers including government, business and home computers in over 200 countries/regions, making it the largest known computer worm infection since the 2003 Welchia worm. | Command and Control | Medium |
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
Title: Overview of Defender for Azure Cosmos DB
description: Learn about the benefits and features of Microsoft Defender for Azure Cosmos DB. Previously updated : 02/28/2022 Last updated : 03/01/2022 # Introduction to Microsoft Defender for Azure Cosmos DB
-APPLIES TO: :::image type="icon" source="media/icons/yes-icon.png" border="false"::: SQL/Core API
- Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders.
-Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-defender-for-cosmos.md) at either the subscription level, or the resource level.
-Microsoft Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
-Microsoft Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance.
+Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance.
## Availability
Microsoft Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB accoun
## What are the benefits of Microsoft Defender for Azure Cosmos DB
-Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Microsoft Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration.
+Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration.
This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats. You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts.
Alerts include details of the incident that triggered them, and recommendations
Threat intelligence security alerts are triggered for: - **Potential SQL injection attacks**: <br>
Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Microsoft Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
+ Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
- **Anomalous database access patterns**: <br> For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 01/25/2022 Last updated : 02/28/2022 # Enable Microsoft Defender for Containers
Defender for Containers protects your clusters whether they're running in:
- **Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+- **Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project** - Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.
+ - **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS. + Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). > [!NOTE]
-> Defender for Containers' support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is a preview feature.
+> Defender for Containers' support for Arc-enabled Kubernetes clusters, AWS EKS, and GCP GKE is a preview feature.
> > [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] ::: zone-end
Learn about this plan in [Overview of Microsoft Defender for Containers](defende
[!INCLUDE [Prerequisites](./includes/defender-for-container-prerequisites-aks.md)] ::: zone-end [!INCLUDE [Prerequisites](./includes/defender-for-container-prerequisites-arc-eks.md)] ::: zone-end
Learn about this plan in [Overview of Microsoft Defender for Containers](defende
[!INCLUDE [Enable plan for EKS](./includes/defender-for-containers-enable-plan-eks.md)] ::: zone-end ## Simulate security alerts from Microsoft Defender for Containers
A full list of supported alerts is available in the [reference table of all Defe
:::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Microsoft Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png"::: ::: zone-end ::: zone pivot="defender-for-container-aks" ::: zone-end
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 02/16/2022 Last updated : 02/28/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve, m
| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] | | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | | Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> • The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
+| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> • The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google GKE Standard clusters](https://cloud.google.com/kubernetes-engine/) <br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects|
| | | ## What are the benefits of Microsoft Defender for Containers?
Defender for Containers protects your clusters whether they're running in:
- **Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+- **Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project** - Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.
+ - **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS. > [!NOTE]
In the diagrams you'll see that the items received and analyzed by Defender for
- Workload configuration from Azure Policy - Security signals and events from the node level -
-### [**AKS cluster**](#tab/defender-for-container-arch-aks)
+### [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
\* resource limits are not configurable
-### [**Azure Arc-enabled Kubernetes**](#tab/defender-for-container-arch-arc)
+### [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
### Architecture diagram of Defender for Cloud and Arc-enabled Kubernetes clusters
Workload configuration information is collected by an Azure Policy add-on. As ex
-### [**AWS EKS**](#tab/defender-for-container-arch-eks)
+### [**AWS (EKS)**](#tab/defender-for-container-arch-eks)
### Architecture diagram of Defender for Cloud and EKS clusters
-For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) is required to connect the clusters to Azure and provide Azure services such as Defender for Containers.
+The following components are required to receive the full protection offered by Microsoft Defender for Containers:
+
+- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** - The AWS account's [CloudWatch](https://aws.amazon.com/cloudwatch/) enables and collects audit log data through an agentless collector and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.
-With an EKS-based cluster, Arc and its Defender extension are needed to collect policy and configuration data from nodes.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution that connects your EKS clusters to Azure. Azure can then provide services such as Defender and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
-With an EKS-based cluster, Arc and its Defender extension are required for runtime protection. The **Azure Policy add-on for Kubernetes** collects cluster and workload configuration for admission control policies as explained in [Protect your Kubernetes workloads](kubernetes-workload-protections.md)
+- **The Defender extension** - The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/) and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
-We use AWS's CloudWatch to collect log data. To monitor your EKS clusters with Defender for Cloud, your AWS account needs to be connected to Microsoft Defender for Cloud [via the environment settings page](quickstart-onboard-aws.md). You'll need both the **Defender for Containers** plan and the **CSPM** plan (for configuration monitoring and recommendations).
+- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a webhook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
> [!NOTE] > Defender for Containers' support for AWS EKS clusters is a preview feature. :::image type="content" source="./media/defender-for-containers/architecture-eks-cluster.png" alt-text="High-level architecture of the interaction between Microsoft Defender for Containers, Amazon Web Services' EKS clusters, Azure Arc-enabled Kubernetes, and Azure Policy." lightbox="./media/defender-for-containers/architecture-eks-cluster.png"::: -
+### [**GCP (GKE)**](#tab/defender-for-container-gke)
+
+### Architecture diagram of Defender for Cloud and GKE clusters
+
+The following components are required to receive the full protection offered by Microsoft Defender for Containers:
+
+- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** - [GCP Cloud Logging](https://cloud.google.com/logging/) enables and collects audit log data through an agentless collector and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution that connects your GKE clusters to Azure. Azure can then provide services such as Defender and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
+
+- **The Defender extension** - The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/) and provides runtime protection. The extension is registered with a Log Analytics workspace and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+
+- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a webhook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
+
+> [!NOTE]
+> Defender for Containers' support for GCP GKE clusters is a preview feature.
+++ ## Environment hardening through security recommendations
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
Scanning begins automatically as soon as the extension is successfully deployed. Scans will then run every 12 hours. This interval isn't configurable. >[!IMPORTANT]
- > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allow lists (via port 443 - the default for HTTPS):
+ > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allowlists (via port 443 - the default for HTTPS):
> > - `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center >
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 02/21/2022 Last updated : 02/27/2022 zone_pivot_groups: connect-aws-accounts
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- Access to an AWS account. -- **To enable the Defender for Kubernetes plan**, you'll need:
+- **To enable the Defender for Containers plan**, you'll need:
- At least one Amazon EKS cluster with permission to access to the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS ΓÇô eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). - The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region.
Additional extensions should be enabled on Arc-connected machines. These extensi
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Kubernetes protect your AWS EKS clusters.
+1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters.
> [!Note]
- > Azure Arc-enabled Kubernetes, and the Defender extension should be installed. Use the dedicated Defender for Cloud recommendation to deploy the extension (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-kubernetes-introduction.md#protect-amazon-elastic-kubernetes-service-clusters).
+ > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
1. Select **Next: Configure access**.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To protect your GCP-based resources, you can connect an account in two different
- **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources. - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds.md)
+ - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations, and more.
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
To protect your GCP-based resources, you can connect an account in two different
|Aspect|Details| |-|:-| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-|Pricing:|The **CSPM plan** is free.<br> The **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Pricing:|The **CSPM plan** is free.<br> The **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. Afterwards, it will be billed for GCP at the same price as for Azure resources.|
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)| |||
Follow the steps below to create your GCP cloud connector.
1. Ensure that the following resources were created:
- - CSPM service account reader role
- - MDFC identity federation
- - CSPM identity pool
- - *Microsoft Defender for Servers* service account (when the servers plan is enabled)
- - *Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled)
+ | CSPM | Defender for Containers |
+ |--|--|
+ | CSPM service account reader role <br> Microsoft Defender for Cloud identity federation <br> CSPM identity pool <br> *Microsoft Defender for Servers* service account (when the servers plan is enabled) <br> *Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled) | Microsoft Defender Containers' service account role <br> Microsoft Defender Data Collector service account role <br> Microsoft Defender for Cloud identity pool |
1. (**Servers only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
Follow the steps below to create your GCP cloud connector.
1. Select the **Create**.
-After creating a connector, a scan will start on your GCP environment. New recommendations will appear in Defender for Cloud after up to 6 hours. If you enabled agent auto-provisioning, Arc agent installation will occur automatically for each new resource detected.
+After creating a connector, a scan will start on your GCP environment. New recommendations will appear in Defender for Cloud after up to 6 hours. If you enabled auto-provisioning, Azure Arc and any enabled extensions will install automatically for each new resource detected.
## (Optional) Configure selected plans
To have full visibility to Microsoft Defender for Servers security content, ensu
> [!Note] > If Azure Arc is toggled **Off**, you will need to follow the manual installation process mentioned above.
+1. Select **Save**.
+
+1. Continue from step number 8 of the [Connect your GCP projects](#connect-your-gcp-projects) instructions.
+
+### Configure the Containers plan
+
+Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers and to fully protect GCP clusters, ensure that the following requirements are configured:
+
+- **Kubernetes audit logs to Defender for Cloud** - Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud backend for further analysis.
+- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension** - Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three different ways:
+ - **(Recommended)** Enable Defender for Containers auto-provisioning at the project level, as explained in the instructions below.
+ - Use the Defender for Cloud recommendations for per-cluster installation. They appear on the Microsoft Defender for Cloud Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters).
+ - Manually install [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and its [extensions](../azure-arc/kubernetes/extensions.md).
+
+**To configure the Containers plan**:
+
+1. Follow the steps to [Connect your GCP projects](#connect-your-gcp-projects).
+
+1. On the Select plans screen, select **Configure**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/containers-configure.png" alt-text="Screenshot showing where to click to configure the Containers plan.":::
+
+1. On the Auto provisioning screen, toggle the switches **On**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/containers-configuration.png" alt-text="Screenshot showing the toggle switches for the Containers plan.":::
+1. Select **Save**.
+1. Continue from step number 8 of the [Connect your GCP projects](#connect-your-gcp-projects) instructions.
+ ::: zone-end ::: zone pivot="classic-connector"
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/28/2022 Last updated : 03/01/2022 # What's new in Microsoft Defender for Cloud?
Updates in February include:
- [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters) - [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances) - [Microsoft Defender for Azure Cosmos DB plan released for preview](#microsoft-defender-for-azure-cosmos-db-plan-released-for-preview)
+- [Threat protection for Google Kubernetes Engine (GKE) clusters](#threat-protection-for-google-kubernetes-engine-gke-clusters)
### Kubernetes workload protection for Arc enabled K8s clusters
Learn more at [Introduction to Microsoft Defender for Azure Cosmos DB](concept-d
We're also introducing a new enablement experience for database security. You can now enable Microsoft Defender for Cloud protection on your subscription to protect all database types, such as, Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and Microsoft Defender for open-source relational databases through one enablement process. Specific resource types can be included, or excluded by configuring your plan.
-Learn how to [enable your database security at the subscrition level](quickstart-enable-defender-for-cosmos.md#enable-database-protection-at-the-subscription-level).
+Learn how to [enable your database security at the subscription level](quickstart-enable-defender-for-cosmos.md#enable-database-protection-at-the-subscription-level).
+
+### Threat protection for Google Kubernetes Engine (GKE) clusters
+
+Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing or new GKE Standard clusters to your environment through our automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment) for a full list of available features.
## January 2022
Updates in January include:
### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
-The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it is also a potential target for attackers. Consequently, we recommend security operations teams closely monitor the resource management layer.
+The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers. Consequently, we recommend security operations teams closely monitor the resource management layer.
Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients. Defender for Cloud runs advanced security analytics to detect threats and alerts you about suspicious activity.
The new alerts for this Defender plan cover these intentions as shown in the fol
| Alert (alert type) | Description | MITRE tactics (intentions)| Severity | |-|--|:-:|-|
-| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
-| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
-| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
-| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate pr