Updates from: 03/01/2022 02:07:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Cloudknox Howto View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-view-role-policy.md
The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) en
1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** subtab. The **Role/Policies list** displays a list of existing roles/policies and the following information about each role/policy
- - **Role/Policy name**: The name of the roles/policies available to you.
- - **Role/Policy type**: **Custom**, **System**, or **CloudKnox only**
+ - **Role/Policy Name**: The name of the roles/policies available to you.
+ - **Role/Policy Type**: **Custom**, **System**, or **CloudKnox Only**
- **Actions**: The type of action you can perform on the role/policy, **Clone**, **Modify**, or **Delete**
The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) en
The **Tasks** list appears, displaying:
- A list of **Tasks**.
- **For AWS:**
- - The **Users**, **Groups**, and **Roles** the task is **Directly assigned to**.
- - The **Group members** and **Role identities** the task is **Indirectly assessable by**.
+ - The **Users**, **Groups**, and **Roles** the task is **Directly Assigned To**.
+ - The **Group Members** and **Role Identities** the task is **Indirectly Accessible By**.
- **For Azure:**
- - The **Users**, **Groups**, **Enterprise applications** and **Managed identities** the task is **Directly assigned to**.
- - The **Group members** the task is **Indirectly assessable by**.
+ - The **Users**, **Groups**, **Enterprise Applications** and **Managed Identities** the task is **Directly Assigned To**.
+ - The **Group Members** the task is **Indirectly Accessible By**.
- **For GCP:**
- - The **Users**, **Groups**, and **Service accounts** the task is **Directly assigned to**.
- - The **Group members** the task is **Indirectly assessable by**.
+ - The **Users**, **Groups**, and **Service Accounts** the task is **Directly Assigned To**.
+ - The **Group Members** the task is **Indirectly Accessible By**.
1. To close the role/policy details, select the arrow to the left of the role/policy name.
The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) en
- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
- When the file is successfully exported, a message appears: **Exported successfully.**
+ When the file is successfully exported, a message appears: **Exported Successfully.**
- Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
  - The **Role Policy Details** report in CSV format.
The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) en
1. On the CloudKnox home page, select the **Remediation** dashboard, and then select the **Role/Policies** tab.
1. To filter the roles/policies, select from the following options:
- - **Authorization system type**: Select **AWS**, **Azure**, or **GCP**.
- - **Authorization system**: Select the accounts you want.
- - **Role/Policy type**: Select from the following options:
+ - **Authorization System Type**: Select **AWS**, **Azure**, or **GCP**.
+ - **Authorization System**: Select the accounts you want.
+ - **Role/Policy Type**: Select from the following options:
- **All**: All managed roles/policies.
- **Custom**: A customer-managed role/policy.
- **System**: A cloud service provider-managed role/policy.
- - **CloudKnox only**: A role/policy created by CloudKnox.
+ - **CloudKnox Only**: A role/policy created by CloudKnox.
- - **Role/Policy status**: Select **All**, **Assigned**, or **Unassigned**.
- - **Role/Policy usage**: Select **All** or **Unused**.
+ - **Role/Policy Status**: Select **All**, **Assigned**, or **Unassigned**.
+ - **Role/Policy Usage**: Select **All** or **Unused**.
1. Select **Apply**.
- To discard your changes, select **Reset filter**.
+ To discard your changes, select **Reset Filter**.
## Next steps
active-directory Cloudknox Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-add-account-after-onboarding.md
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. On the **Data collectors** dashboard, select **AWS**.
1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
- The **M-CIEM Onboarding - Summary** page displays.
+ The **CloudKnox Onboarding - Summary** page displays.
1. Go to **AWS Account IDs**, and then select **Edit** (the pencil icon).
- The **M-CIEM On Boarding - AWS Member Account Details** page displays.
+ The **CloudKnox Onboarding - AWS Member Account Details** page displays.
1. Go to **Enter Your AWS Account IDs**, and then select **Add** (the plus **+** sign).
1. Copy your account ID from AWS and paste it into the **Enter Account ID** box.
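If you need to look up the member account ID before pasting it, here is a minimal sketch with the AWS CLI (assuming the CLI is installed and configured with credentials for that member account):

```bash
# Print the 12-digit account ID for the credentials currently in use
aws sts get-caller-identity --query Account --output text
```

The value it prints is what goes in the **Enter Account ID** box.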
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. Create a new script for the new account and press the **Enter** key.
1. Paste the script you copied.
1. Locate the account line, delete the original account ID (the one that was previously added), and then run the script.
-1. Return to CloudKnox, and the new account ID you added will be added to the list of account IDs displayed in the **M-CIEM Onboarding - Summary** page.
+1. Return to CloudKnox. The new account ID you added appears in the list of account IDs displayed on the **CloudKnox Onboarding - Summary** page.
1. Select **Verify now & save**. When your changes are saved, the following message displays: **Successfully updated configuration.**
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. On the **Data collectors** dashboard, select **Azure**.
1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
- The **M-CIEM Onboarding - Summary** page displays.
+ The **CloudKnox Onboarding - Summary** page displays.
1. Go to **Azure subscription IDs**, and then select **Edit** (the pencil icon).
1. Go to **Enter your Azure Subscription IDs**, and then select **Add subscription** (the plus **+** sign).
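If you want to look up the subscription IDs before adding them, a minimal Azure CLI sketch (assuming you're signed in with an account that can see those subscriptions):

```bash
# List the IDs of all subscriptions visible to the signed-in account
az account list --query "[].id" --output tsv
```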
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. Create a new script for the new subscription and press the **Enter** key.
1. Paste the script you copied.
1. Locate the subscription line and delete the original subscription ID (the one that was previously added), and then run the script.
-1. Return to CloudKnox, and the new subscription ID you added will be added to the list of subscription IDs displayed in the **M-CIEM Onboarding - Summary** page.
+1. Return to CloudKnox. The new subscription ID you added appears in the list of subscription IDs displayed on the **CloudKnox Onboarding - Summary** page.
1. Select **Verify now & save**. When your changes are saved, the following message displays: **Successfully updated configuration.**
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. On the **Data collectors** dashboard, select **GCP**.
1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
- The **M-CIEM Onboarding - Summary** page displays.
+ The **CloudKnox Onboarding - Summary** page displays.
1. Go to **GCP Project IDs**, and then select **Edit** (the pencil icon).
1. Go to **Enter your GCP Project IDs**, and then select **Add Project ID** (the plus **+** sign).
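To list the project IDs available to you before adding them, a minimal gcloud sketch (assuming the Google Cloud CLI is installed and authenticated):

```bash
# Print the project ID of every project the authenticated account can access
gcloud projects list --format="value(projectId)"
```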
This article describes how to add an Amazon Web Services (AWS) account, Microsof
1. Create a new script for the new project ID and press the **Enter** key.
1. Paste the script you copied.
1. Locate the project ID line and delete the original project ID (the one that was previously added), and then run the script.
-1. Return to CloudKnox, and the new project ID you added will be added to the list of project IDs displayed in the **M-CIEM Onboarding - Summary** page.
+1. Return to CloudKnox. The new project ID you added appears in the list of project IDs displayed on the **CloudKnox Onboarding - Summary** page.
1. Select **Verify now & save**. When your changes are saved, the following message displays: **Successfully updated configuration.**
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
This article describes how to onboard an Amazon Web Services (AWS) account on Cl
You can enter up to 10 account IDs. Click the plus icon next to the text box to add more account IDs.
> [!NOTE]
- > Perform the next 5 steps for each account ID you add.
+ > Perform the next 6 steps for each account ID you add.
1. Open another browser window and sign in to the AWS console for the member account.
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
To view a video on how to enable CloudKnox in your Azure AD tenant, select
1. Copy the script on the **Welcome** screen:
- `az ad ap create --id b46c3ac5-9da6-418f-a849-0a7a10b3c6c`
+ `az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c`
1. If you have an Azure subscription, return to the Azure AD portal and select **Cloud Shell** on the navigation bar. If you don't have an Azure subscription, open a command prompt on a Windows Server.
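After the script runs, you can confirm that the service principal now exists in your tenant. A minimal verification sketch with the Azure CLI, reusing the application ID shown in the script above:

```bash
# Confirm the CloudKnox service principal was created in the tenant
az ad sp show --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c --query displayName --output tsv
```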
Use the **Data Collectors** dashboard in CloudKnox to configure data collection
- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md)
- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](cloudknox-faqs.md).
-- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
+- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
This article describes how to onboard a Google Cloud Platform (GCP) project on C
1. You can choose to download and run the script at this point, or you can do it via Google Cloud Shell, as described in the [next step](cloudknox-onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
-### 4. Run scripts in Cloud Shell. (Optional if not already executed.)
+### 4. Run scripts in Cloud Shell. (Optional if not already executed)
1. In the **CloudKnox Onboarding - GCP Project Ids** page, select **Launch SSH**.
1. To copy all your scripts into your current directory, in **Open in Cloud Shell**, select **Trust repo**, and then select **Confirm**.
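Once the scripts are copied into your Cloud Shell home directory, running them is a standard shell step. A sketch, using an illustrative file name (the actual script names come from the onboarding page):

```bash
# Make the copied onboarding script executable and run it
# (the file name below is illustrative; use the name shown on the onboarding page)
chmod +x ./cloudknox_onboarding.sh
./cloudknox_onboarding.sh
```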
active-directory Cloudknox Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-settings.md
This information can't be modified because the user information is pulled from A
## View personal information
-1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account settings**.
+1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
- The **Personal information** box displays your **First name**, **Last name**, and the **Email address** that was used to register your account on CloudKnox.
+ The **Personal Information** box displays your **First Name**, **Last Name**, and the **Email Address** that was used to register your account on CloudKnox.
## View current organization information
-1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account settings**.
+1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
- The **Current organization information** displays the **Name** of your organization, the **Tenant ID** box, and the **User session timeout (min)**.
+ The **Current Organization Information** displays the **Name** of your organization, the **Tenant ID** box, and the **User Session Timeout (min)**.
-1. To change duration of the **User session timeout (min)**, select **Edit** (the pencil icon), and then enter the number of minutes before you want a user session to time out.
+1. To change the duration of the **User Session Timeout (min)**, select **Edit** (the pencil icon), and then enter the number of minutes before you want a user session to time out.
1. Select the check mark to confirm your new setting.
active-directory Cloudknox Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-dashboard.md
The CloudKnox Permissions Management (CloudKnox) **Dashboard** provides an overv
1. In the CloudKnox home page, select **Dashboard**.
1. From the **Authorization systems type** dropdown, select **AWS**, **Azure**, or **GCP**.
-1. Select the **Authorization system** box to display a **List** of accounts and **Folders** available to you.
+1. Select the **Authorization System** box to display a **List** of accounts and **Folders** available to you.
1. Select the accounts and folders you want, and then select **Apply**.
- The **Permission creep index (PCI)** chart updates to display information about the accounts and folders you selected. The number of days since the information was last updated displays in the upper right corner.
+ The **Permission Creep Index (PCI)** chart updates to display information about the accounts and folders you selected. The number of days since the information was last updated displays in the upper right corner.
-1. In the Permission creep index (PCI) graph, select a bubble.
+1. In the Permission Creep Index (PCI) graph, select a bubble.
The bubble displays the number of identities that are considered high-risk.
For more information about the CloudKnox **Dashboard**, see [View key statistics
## View user data on the PCI heat map
-The **Permission creep index (PCI)** heat map shows the incurred risk of users with access to high-risk privileges. The distribution graph displays all the users who contribute to the privilege creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
+The **Permission Creep Index (PCI)** heat map shows the incurred risk of users with access to high-risk privileges. The distribution graph displays all the users who contribute to the privilege creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
- To view detailed data about a user, select the number.
The **Resource** section below the heat map on the right side of the page shows
## Next steps
-- For more information about how to view key statistics and data in the Dashboard, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
+- For more information about how to view key statistics and data in the Dashboard, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
You can use the **Data Collectors** dashboard in CloudKnox Permissions Managemen
1. Select the ellipses **(...)** at the end of the row in the table.
1. Select **Delete Configuration**.
- The **M-CIEM Onboarding - Summary** box displays.
+ The **CloudKnox Onboarding - Summary** box displays.
1. Select **Delete**.
1. Check your email for a one time password (OTP) code, and enter it in **Enter OTP**.
active-directory Cloudknox Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-dashboard.md
The data provided by CloudKnox includes metrics related to avoidable risk. These
You can view the following information in CloudKnox:
-- The **Permission creep index (PCI)** heat map on the CloudKnox **Dashboard** identifies:
+- The **Permission Creep Index (PCI)** heat map on the CloudKnox **Dashboard** identifies:
- The number of users who have been granted high-risk permissions but aren't using them.
- The number of users who contribute to the permission creep index (PCI) and where they are on the scale.
The CloudKnox **Dashboard** displays the following information:
- **Authorization system types**: A dropdown list of authorization system types you can access: AWS, Azure, and GCP.
-- **Authorization system**: Displays a **List** of accounts and **Folders** in the selected authorization system you can access.
+- **Authorization System**: Displays a **List** of accounts and **Folders** in the selected authorization system you can access.
- To add or remove accounts and folders, from the **Name** list, select or deselect accounts and folders, and then select **Apply**.
-- **Permission creep index (PCI)**: The graph displays the **# of identities contributing to PCI**.
+- **Permission Creep Index (PCI)**: The graph displays the **# of identities contributing to PCI**.
The PCI graph may display one or more bubbles. Each bubble displays the number of identities that are considered high risk. *High-risk* refers to the number of users who have permissions that exceed their normal or required usage.
- To display a list of the number of identities contributing to the **Low PCI**, **Medium PCI**, and **High PCI**, select the **List** icon in the upper right of the graph.
The CloudKnox **Dashboard** displays the following information:
## The PCI heat map
-The **Permission creep index** heat map shows the incurred risk of users with access to high-risk permissions, and provides information about:
+The **Permission Creep Index** heat map shows the incurred risk of users with access to high-risk permissions, and provides information about:
- Users who were given access to high-risk permissions but aren't actively using them. *High-risk permissions* include the ability to modify or delete information in the authorization system.
active-directory Cloudknox Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-remediation.md
This article provides an overview of the components of the **Remediation** dashb
- **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
- **Permissions**: Use this subtab to perform Read Update Delete (RUD) operations on granted permissions.
- - **Role/Policy template**: Use this subtab to create a template for roles/policies template.
+ - **Role/Policy Template**: Use this subtab to create a template for roles/policies.
- **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
- - **My requests**: Use this tab to manage lifecycle of the POD request either created by you or needs your approval.
- - **Settings**: Use this subtab to select **Request role/policy filters**, **Request settings**, and **Auto-approve** settings.
+ - **My Requests**: Use this tab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
1. Use the dropdown to select the **Authorization System Type** and **Authorization System**, and then select **Apply**.
This article provides an overview of the components of the **Remediation** dashb
The **Role/Policies** subtab provides the following settings that you can use to view and create a role/policy.
-- **Authorization system type**: Displays a dropdown with authorization system types you can access, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
-- **Authorization system**: Displays a list of authorization systems accounts you can access.
-- **Role/Policy type**: A dropdown with available role/policy types. You can select **All**, **Custom**, **System**, or **CloudKnox only**.
-- **Role/Policy status**: A dropdown with available role/policy statuses. You can select **All**, **Assigned**, or **Unassigned**.
-- **Role/Policy usage**: A dropdown with **All** or **Unused** roles/policies.
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
+- **Policy Type**: A dropdown with available role/policy types. You can select **All**, **Custom**, **System**, or **CloudKnox Only**.
+- **Policy Status**: A dropdown with available role/policy statuses. You can select **All**, **Assigned**, or **Unassigned**.
+- **Policy Usage**: A dropdown with **All** or **Unused** roles/policies.
- **Apply**: Select this option to save the changes you've made.
- **Reset Filter**: Select this option to discard the changes you've made.
-The **Role/Policies list** displays a list of existing roles/policies and the following information about each role/policy.
+The **Policy list** displays a list of existing roles/policies and the following information about each role/policy.
-- **Role/Policy name**: The name of the roles/policies available to you.
-- **Role/Policy type**: **Custom**, **System**, or **CloudKnox only**
+- **Policy Name**: The name of the roles/policies available to you.
+- **Policy Type**: **Custom**, **System**, or **CloudKnox Only**
- **Actions**
  - Select **Clone** to create a duplicate copy of the role/policy.
  - Select **Modify** to change the existing role/policy.
Other options available to you:
- **Reload**: Select this option to refresh the displayed list of roles/policies.
- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
- When the file is successfully exported, a message appears: **Exported successfully.**
+ When the file is successfully exported, a message appears: **Exported Successfully.**
- Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
  - The **Role Policy Details** report in CSV format.
Other options available to you:
The **Permissions** subtab provides the following settings that you can use to add filters to your permissions.
-- **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
-- **Authorization system**: Displays a list of authorization systems accounts you can access.
-- **Search for**: A dropdown from which you can select **Group**, **User**, or **Role**.
-- **User status**: A dropdown from which you can select **Any**, **Active**, or **Inactive**.
-- **Privilege creep index** (PCI): A dropdown from which you can select a PCI rating of **Any**, **High**, **Medium**, or **Low**.
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
+- **Search For**: A dropdown from which you can select **Group**, **User**, or **Role**.
+- **User Status**: A dropdown from which you can select **Any**, **Active**, or **Inactive**.
+- **Privilege Creep Index** (PCI): A dropdown from which you can select a PCI rating of **Any**, **High**, **Medium**, or **Low**.
- **Task Usage**: A dropdown from which you can select **Any**, **Granted**, **Used**, or **Unused**.
-- **Enter a username**: A dropdown from which you can select a username.
+- **Enter a Username**: A dropdown from which you can select a username.
- **Enter a Group Name**: A dropdown from which you can select a group name.
- **Apply**: Select this option to save the changes you've made and run the filter.
- **Reset Filter**: Select this option to discard the changes you've made.
- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
- When the file is successfully exported, a message appears: **Exported successfully.**
+ When the file is successfully exported, a message appears: **Exported Successfully.**
- Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
  - The **Role Policy Details** report in CSV format.
The **Permissions** subtab provides the following settings that you can use to a
## Create templates for roles/policies
-Use the **Role/Policy template** subtab to create a template for roles/policies.
+Use the **Role/Policy Template** subtab to create a template for roles/policies.
1. Select:
- - **Authorization system type**: Displays a dropdown with authorization system types you can access, WS, Azure, and GCP.
- - **Create template**: Select this option to create a template.
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Create Template**: Select this option to create a template.
1. In the **Details** page, make the required selections:
- - **Authorization system type**: Select the authorization system types you want, **AWS**, **Azure**, or **GCP**.
- - **Template name**: Enter a name for your template, and then select **Next**.
+ - **Authorization System Type**: Select the authorization system types you want, **AWS**, **Azure**, or **GCP**.
+ - **Template Name**: Enter a name for your template, and then select **Next**.
-1. In the **Statements** page, complete the **Tasks**, **Resources**, **Request conditions** and **Effect** sections. Then select **Save** to save your role/policy template.
+1. In the **Statements** page, complete the **Tasks**, **Resources**, **Request Conditions** and **Effect** sections. Then select **Save** to save your role/policy template.
Other options available to you:
- **Search**: Select this option to search for a specific role/policy.
Other options available to you:
Use the **Requests** tab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions your team members have made.
- Select:
- - **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
- - **Authorization system**: Displays a list of authorization systems accounts you can access.
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization System**: Displays a list of authorization systems accounts you can access.
Other options available to you:
- **Reload**: Select this option to refresh the displayed list of roles/policies.
- **Search**: Select this option to search for a specific role/policy.
- **Columns**: Select one or more of the following to view more information about the request:
- - **Submitted by**
- - **On behalf of**
- - **Authorization system**
- - **Tasks/scope/policies**
- - **Request date**
+ - **Submitted By**
+ - **On Behalf Of**
+ - **Authorization System**
+ - **Tasks/Scope/Policies**
+ - **Request Date**
- **Schedule**
- **Submitted**
- - **Reset to default**: Select this option to discard your settings.
+ - **Reset to Default**: Select this option to discard your settings.
### View pending requests
The **Processed** table displays information about the requests that have been p
Use the **My Requests** subtab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions that your team members have made and that you must approve or reject.
- Select:
- - **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
- - **Authorization system**: Displays a list of authorization systems accounts you can access.
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization System**: Displays a list of authorization systems accounts you can access.
Other options available to you:
- **Reload**: Select this option to refresh the displayed list of roles/policies.
- **Search**: Select this option to search for a specific role/policy.
- **Columns**: Select one or more of the following to view more information about the request:
- - **On behalf of**
- - **Authorization system**
- - **Tasks/scope/policies**
- - **Request date**
+ - **On Behalf Of**
+ - **Authorization System**
+ - **Tasks/Scope/Policies**
+ - **Request Date**
- **Schedule**
- - **Reset to default**: Select this option to discard your settings.
-- **New request**: Select this option to create a new request for permissions. For more information, see Create a request for permissions.
+ - **Reset to Default**: Select this option to discard your settings.
+- **New Request**: Select this option to create a new request for permissions. For more information, see Create a request for permissions.
### View pending requests
The **Processed** table displays information about the requests that have been p
## Make setting selections for requests and auto-approval
-The **Settings** subtab provides the following settings that you can use to make setting selections to **Request role/policy filters**, **Request settings**, and **Auto-approve** requests.
+The **Settings** subtab provides the following settings that you can use to make setting selections to **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** requests.
-- **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
-- **Authorization system**: Displays a list of authorization systems accounts you can access.
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
- **Reload**: Select this option to refresh the displayed list of role/policy filters.
-- **Create filter**: Select this option to create a new filter.
+- **Create Filter**: Select this option to create a new filter.
## Next steps
active-directory Cloudknox Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-access-keys.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) prov
- **Users**: Tracks assigned permissions and usage of various identities.
- **Groups**: Tracks assigned permissions and usage of the group and the group members.
-- **Active resources**: Tracks active resources (used in the last 90 days).
-- **Active tasks**: Tracks active tasks (performed in the last 90 days).
-- **Access keys**: Tracks the permission usage of access keys for a given user.
-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about access keys.
This article describes how to view usage analytics about access keys.
When you select **Access keys**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
-1. On the main **Analytics** dashboard, select **Access keys** from the drop-down list at the top of the screen.
+1. On the main **Analytics** dashboard, select **Access Keys** from the drop-down list at the top of the screen.
- The following components make up the **Access keys** dashboard:
+ The following components make up the **Access Keys** dashboard:
- - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: Select from a **List** of accounts and **Folders***.
- - **Key status**: Select **All**, **Active**, or **Inactive**.
- - **Key activity state**: Select **All**, how long the access key has been used, or **Not used**.
- - **Key age**: Select **All** or how long ago the access key was created.
- - **Task type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+ - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders***.
+ - **Key Status**: Select **All**, **Active**, or **Inactive**.
+ - **Key Activity State**: Select **All**, how long the access key has been used, or **Not Used**.
+ - **Key Age**: Select **All** or how long ago the access key was created.
+ - **Task Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
- **Search**: Enter criteria to find specific tasks.
1. Select **Apply** to display the criteria you've selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## View the results of your query
-The **Access keys** table displays the results of your query.
+The **Access Keys** table displays the results of your query.
-- **Access key ID**: Provides the ID for the access key.
+- **Access Key ID**: Provides the ID for the access key.
- To view details about the access keys, select the down arrow to the left of the ID.
- The **Owner** name.
- The **Account** number.
-- The **Permission creep index (PCI)**: Provides the following information:
+- The **Permission Creep Index (PCI)**: Provides the following information:
- **Index**: A numeric value assigned to the PCI.
- **Since**: How many days the PCI value has been at the displayed level.
- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
- **Resources**: The number of resources used.
-- **Access key age**: How old the access key is, in days.
-- **Last used**: How long ago the access key was last accessed.
+- **Access Key Age**: How old the access key is, in days.
+- **Last Used**: How long ago the access key was last accessed.
## Apply filters to your query
-There are many filter options within the **Active tasks** screen, including filters by **Authorization system**, filters by **User** and filters by **Task**.
+There are many filter options within the **Active Tasks** screen, including filters by **Authorization System**, filters by **User** and filters by **Task**.
Filters can be applied in one, two, or all three categories depending on the type of information you're looking for.
### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by key status
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Key status** dropdown, select the type of key: **All**, **Active**, or **Inactive**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Status** dropdown, select the type of key: **All**, **Active**, or **Inactive**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by key activity status
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Key activity state** dropdown, select **All**, the duration for how long the access key has been used, or **Not used**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Activity State** dropdown, select **All**, the duration for how long the access key has been used, or **Not Used**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by key age
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Key age** dropdown, select **All** or how long ago the access key was created.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Age** dropdown, select **All** or how long ago the access key was created.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by task type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Task type** dropdown, select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
Filters can be applied in one, two, or all three categories depending on the typ
- To view assigned permissions and usage by users, see [View usage analytics about users](cloudknox-usage-analytics-users.md).
- To view assigned permissions and usage of the group and the group members, see [View usage analytics about groups](cloudknox-usage-analytics-groups.md).
- To view active resources, see [View usage analytics about active resources](cloudknox-usage-analytics-active-resources.md).
-- To view assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
+- To view assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-resources.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) coll
- **Users**: Tracks assigned permissions and usage of various identities.
- **Groups**: Tracks assigned permissions and usage of the group and the group members.
-- **Active resources**: Tracks active resources (used in the last 90 days).
-- **Active tasks**: Tracks active tasks (performed in the last 90 days).
-- **Access keys**: Tracks the permission usage of access keys for a given user.
-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about active resources.
## Create a query to view active resources
-1. On the main **Analytics** dashboard, select **Active resources** from the drop-down list at the top of the screen.
+1. On the main **Analytics** dashboard, select **Active Resources** from the drop-down list at the top of the screen.
- The dashboard only lists tasks that are active. The following components make up the **Active resources** dashboard:
+ The dashboard only lists tasks that are active. The following components make up the **Active Resources** dashboard:
1. From the dropdowns, select:
- - **Authorization system type**: The authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: The **List** of accounts and **Folders** you want to include.
- - **Tasks type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
- - **Service resource type**: The service resource type.
+ - **Authorization System Type**: The authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: The **List** of accounts and **Folders** you want to include.
+ - **Tasks Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
+ - **Service Resource Type**: The service resource type.
- **Search**: Enter criteria to find specific tasks.
1. Select **Apply** to display the criteria you've selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## View the results of your query
-The **Active resources** table displays the results of your query:
+The **Active Resources** table displays the results of your query:
- **Resource Name**: Provides the name of the task.
- To view details about the task, select the down arrow.
- **Account**: The name of the account.
-- **Resources type**: The type of resources used, for example, **bucket** or **key**.
+- **Resources Type**: The type of resources used, for example, **bucket** or **key**.
- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
-- **Number of users**: The number of users with access and accessed.
+- **Number of Users**: The number of users with access and accessed.
- Select the ellipses **(...)** and select **Tags** to add a tag.
## Add a tag to an active resource
1. Select the ellipses **(...)** and select **Tags**.
-1. From the **Select a tag** dropdown, select a tag.
-1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag, select **New Custom Tag**, add a tag name, and then select **Create**.
1. In the **Value (Optional)** box, enter a value.
-1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
-1. To add the tag to the serverless function, select **Add tag**.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add Tag**.
## Apply filters to your query
-There are many filter options within the **Active resources** screen, including filters by **Authorization system**, filters by **User** and filters by **Task**.
+There are many filter options within the **Active Resources** screen, including filters by **Authorization System**, filters by **User** and filters by **Task**.
Filters can be applied in one, two, or all three categories depending on the type of information you're looking for.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by task type
You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Task type**, select the type of user: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type**, select the type of user: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by service resource type
You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Service Resource type**, select the type of service resource.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Service Resource Type**, select the type of service resource.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## Export the results of your query
You can filter user details by type of user, user role, app, or service used, or
- To track assigned permissions and usage of users, see [View usage analytics about users](cloudknox-usage-analytics-users.md).
- To track assigned permissions and usage of the group and the group members, see [View usage analytics about groups](cloudknox-usage-analytics-groups.md).
- To track the permission usage of access keys for a given user, see [View usage analytics about access keys](cloudknox-usage-analytics-access-keys.md).
-- To track assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
+- To track assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-tasks.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) coll
- **Users**: Tracks assigned permissions and usage of various identities.
- **Groups**: Tracks assigned permissions and usage of the group and the group members.
-- **Active resources**: Tracks active resources (used in the last 90 days).
-- **Active tasks**: Tracks active tasks (performed in the last 90 days).
-- **Access keys**: Tracks the permission usage of access keys for a given user.
-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about active tasks.
## Create a query to view active tasks
-When you select **Active tasks**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+When you select **Active Tasks**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
-1. On the main **Analytics** dashboard, select **Active tasks** from the drop-down list at the top of the screen.
+1. On the main **Analytics** dashboard, select **Active Tasks** from the drop-down list at the top of the screen.
- The dashboard only lists tasks that are active. The following components make up the **Active tasks** dashboard:
+ The dashboard only lists tasks that are active. The following components make up the **Active Tasks** dashboard:
- - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: Select from a **List** of accounts and **Folders***.
- - **Tasks type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+ - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders***.
+ - **Tasks Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
- **Search**: Enter criteria to find specific tasks.
1. Select **Apply** to display the criteria you've selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## View the results of your query
-The **Active tasks** table displays the results of your query.
+The **Active Tasks** table displays the results of your query.
- **Task Name**: Provides the name of the task.
- To view details about the task, select the down arrow in the table.
- - A **Normal task** icon displays to the left of the task name if the task is normal (that is, not risky).
- - A **Deleted task** icon displays to the left of the task name if the task involved deleting data.
- - A **High-risk task** icon displays to the left of the task name if the task is high-risk.
+ - A **Normal Task** icon displays to the left of the task name if the task is normal (that is, not risky).
+ - A **Deleted Task** icon displays to the left of the task name if the task involved deleting data.
+ - A **High-Risk Task** icon displays to the left of the task name if the task is high-risk.
- **Performed on (resources)**: The number of resources on which the task was used.
- **Number of Users**: Displays how many users performed tasks. The tasks are organized into the following columns:
- - **With access**: Displays the number of users that have access to the task but haven't accessed it.
+ - **With Access**: Displays the number of users that have access to the task but haven't accessed it.
- **Accessed**: Displays the number of users that have accessed the task.
## Apply filters to your query
-There are many filter options within the **Active tasks** screen, including **Authorization system**, **User**, and **Task**.
+There are many filter options within the **Active Tasks** screen, including **Authorization System**, **User**, and **Task**.
Filters can be applied in one, two, or all three categories depending on the type of information you're looking for.
### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by task type
You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Task type** dropdown, select the type of tasks: **All**, **High risk tasks**, or **Delete tasks**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select the type of tasks: **All**, **High Risk Tasks**, or **Delete Tasks**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## Export the results of your query
You can filter user details by type of user, user role, app, or service used, or
- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md). - To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md). - To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).-- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-groups.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) coll
- **Users**: Tracks assigned permissions and usage of various identities. - **Groups**: Tracks assigned permissions and usage of the group and the group members.-- **Active resources**: Tracks active resources (used in the last 90 days).-- **Active tasks**: Tracks active tasks (performed in the last 90 days).-- **Access keys**: Tracks the permission usage of access keys for a given user.-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about groups.
When you select **Groups**, the **Usage Analytics** dashboard provides a high-le
The following components make up the **Groups** dashboard:
- - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: Select from a **List** of accounts and **Folders**.
- - **Group type**: Select **All**, **ED**, or **Local**.
- - **Group activity status**: Select **All**, **Active**, or **Inactive**.
- - **Tasks Type**: Select **All**, **High-risk tasks**, or **Delete tasks**
+ - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders**.
+ - **Group Type**: Select **All**, **ED**, or **Local**.
+ - **Group Activity Status**: Select **All**, **Active**, or **Inactive**.
+ - **Tasks Type**: Select **All**, **High Risk Tasks**, or **Delete Tasks**
- **Search**: Enter group name to find specific group. 1. To display the criteria you've selected, select **Apply**.
- - **Reset filter**: Select to discard your changes.
+ - **Reset Filter**: Select to discard your changes.
## View the results of your query
The **Groups** table displays the results of your query:
- **Group Name**: Provides the name of the group. - To view details about the group, select the down arrow. -- A **Group type** icon displays to the left of the group name to describe the type of group (**ED** or **Local**).
+- A **Group Type** icon displays to the left of the group name to describe the type of group (**ED** or **Local**).
- The **Domain/Account** name.-- The **Permission creep index (PCI)**: Provides the following information:
+- The **Permission Creep Index (PCI)**: Provides the following information:
- **Index**: A numeric value assigned to the PCI. - **Since**: How many days the PCI value has been at the displayed level. - **Tasks**: Displays the number of **Granted** and **Executed** tasks.
The **Groups** table displays the results of your query:
## Add a tag to a group 1. Select the ellipses **(...)** and select **Tags**.
-1. From the **Select a tag** dropdown, select a tag.
-1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag select **New Custom Tag**, add a tag name, and then select **Create**.
1. In the **Value (Optional)** box, enter a value.
-1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
-1. To add the tag to the serverless function, select **Add tag**.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add Tag**.
## View detailed information about a group
-1. Select the down arrow to the left of the **Group name**.
+1. Select the down arrow to the left of the **Group Name**.
The list of **Tasks** organized by **Unused** and **Used** displays. 1. Select the arrow to the left of the group name to view details about the task. 1. Select **Information** (**i**) to view when the task was last used.
-1. From the **Tasks** dropdown, select **All tasks**, **High-risk tasks**, and **Delete tasks**.
+1. From the **Tasks** dropdown, select **All Tasks**, **High Risk Tasks**, and **Delete Tasks**.
1. The pane on the right displays a list of **Users**, **Policies** for **AWS** and **Roles** for **GCP or AZURE**, and **Tags**. ## Apply filters to your query
-There are many filter options within the **Groups** screen, including filters by **Authorization system type**, **Authorization system**, **Group type**, **Group activity status**, and **Tasks type**.
+There are many filter options within the **Groups** screen, including filters by **Authorization System Type**, **Authorization System**, **Group Type**, **Group Activity Status**, and **Tasks Type**.
Filters can be applied in one, two, or all three categories depending on the type of information you're looking for. ### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by group type You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Group type** dropdown, select the type of user: **All**, **ED**, or **Local**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group Type** dropdown, select the type of user: **All**, **ED**, or **Local**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by group activity status You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Group activity status** dropdown, select the type of user: **All**, **Active**, or **Inactive**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group Activity Status** dropdown, select the type of user: **All**, **Active**, or **Inactive**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by tasks type You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Tasks type** dropdown, select the type of user: **All**, **High-risk tasks**, or **Delete tasks**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Tasks Type** dropdown, select the type of user: **All**, **High Risk Tasks**, or **Delete Tasks**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## Export the results of your query
active-directory Cloudknox Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-serverless-functions.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) coll
- **Users**: Tracks assigned permissions and usage of various identities. - **Groups**: Tracks assigned permissions and usage of the group and the group members.-- **Active resources**: Tracks active resources (used in the last 90 days).-- **Active tasks**: Tracks active tasks (performed in the last 90 days).-- **Access keys**: Tracks the permission usage of access keys for a given user.-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about serverless functions. ## Create a query to view serverless functions
-When you select **Serverless functions**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+When you select **Serverless Functions**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
-1. On the main **Analytics** dashboard, select **Serverless functions** from the dropdown list at the top of the screen.
+1. On the main **Analytics** dashboard, select **Serverless Functions** from the dropdown list at the top of the screen.
- The following components make up the **Serverless functions** dashboard:
+ The following components make up the **Serverless Functions** dashboard:
- - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders**.
- **Search**: Enter criteria to find specific tasks. 1. Select **Apply** to display the criteria you've selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## View the results of your query
-The **Serverless functions** table displays the results of your query.
+The **Serverless Functions** table displays the results of your query.
-- **Function name**: Provides the name of the serverless function.
+- **Function Name**: Provides the name of the serverless function.
- To view details about a serverless function, select the down arrow to the left of the function name. -- A **Function type** icon displays to the left of the function name to describe the type of serverless function, for example **Lambda function**.-- The **Permission creep index (PCI)**: Provides the following information:
+- A **Function Type** icon displays to the left of the function name to describe the type of serverless function, for example **Lambda function**.
+- The **Permission Creep Index (PCI)**: Provides the following information:
- **Index**: A numeric value assigned to the PCI. - **Since**: How many days the PCI value has been at the displayed level. - **Tasks**: Displays the number of **Granted** and **Executed** tasks. - **Resources**: The number of resources used.-- **Last activity on**: The date the function was last accessed.
+- **Last Activity On**: The date the function was last accessed.
- Select the ellipses **(...)**, and then select **Tags** to add a tag. ## Add a tag to a serverless function 1. Select the ellipses **(...)** and select **Tags**.
-1. From the **Select a tag** dropdown, select a tag.
-1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag select **New Custom Tag**, add a tag name, and then select **Create**.
1. In the **Value (Optional)** box, enter a value.
-1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
-1. To add the tag to the serverless function, select **Add tag**.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add Tag**.
## View detailed information about a serverless function
The **Serverless functions** table displays the results of your query.
1. Select the arrow to the left of the task name to view details about the task. 1. Select **Information** (**i**) to view when the task was last used.
-1. From the **Tasks** dropdown, select **All tasks**, **High-risk tasks**, and **Delete tasks**.
+1. From the **Tasks** dropdown, select **All Tasks**, **High Risk Tasks**, and **Delete Tasks**.
## Apply filters to your query
-You can filter the **Serverless functions** results by **Authorization system type** and **Authorization system**.
+You can filter the **Serverless Functions** results by **Authorization System Type** and **Authorization System**.
### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
active-directory Cloudknox Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-users.md
The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) coll
- **Users**: Tracks assigned permissions and usage of various identities. - **Groups**: Tracks assigned permissions and usage of the group and the group members.-- **Active resources**: Tracks active resources (used in the last 90 days).-- **Active tasks**: Tracks active tasks (performed in the last 90 days).-- **Access keys**: Tracks the permission usage of access keys for a given user.-- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
This article describes how to view usage analytics about users.
When you select **Users**, the **Analytics** dashboard provides a high-level ove
The following components make up the **Users** dashboard:
- - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- - **Authorization system**: Select from a **List** of accounts and **Folders***.
- - **Identity type**: Select **All** identity types, **User**, **Role/App/Service a/c** or **Resource**.
+ - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders***.
+ - **Identity Type**: Select **All** identity types, **User**, **Role/App/Service a/c** or **Resource**.
- **Search**: Enter criteria to find specific tasks. 1. Select **Apply** to display the criteria you've selected.
The **Identities** table displays the results of your query.
- **Name**: Provides the name of the group. - To view details about the group, select the down arrow. - The **Domain/Account** name.-- The **Permission creep index (PCI)**: Provides the following information:
+- The **Permission Creep Index (PCI)**: Provides the following information:
- **Index**: A numeric value assigned to the PCI. - **Since**: How many days the PCI value has been at the displayed level. - **Tasks**: Displays the number of **Granted** and **Executed** tasks. - **Resources**: The number of resources used.-- **User groups**: The number of users who accessed the group.-- **Last activity on**: The date the function was last accessed.
+- **User Groups**: The number of users who accessed the group.
+- **Last Activity On**: The date the function was last accessed.
- The ellipses **(...)**: Select **Tags** to add a tag. If you're using AWS, another selection is available from the ellipses menu: **Auto Remediate**. You can use this option to remediate your results automatically.
The **Identities** table displays the results of your query.
## Add a tag to a user 1. Select the ellipses **(...)** and select **Tags**.
-1. From the **Select a tag** dropdown, select a tag.
-1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag select **New Custom Tag**, add a tag name, and then select **Create**.
1. In the **Value (Optional)** box, enter a value.
-1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
-1. To add the tag to the serverless function, select **Add tag**.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add Tag**.
## Set the auto-remediate option (AWS only)
The **Identities** table displays the results of your query.
## Apply filters to your query
-There are many filter options within the **Users** screen, including filters by **Authorization system**, **Identity type**, and **Identity state**.
+There are many filter options within the **Users** screen, including filters by **Authorization System**, **Identity Type**, and **Identity State**.
Filters can be applied in one, two, or all three categories depending on the type of information you're looking for. ### Apply filters by authorization system type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by authorization system
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
1. Select **Apply** to run your query and display the information you selected. Select **Reset filter** to discard your changes. ### Apply filters by identity type
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Identity type**, select the type of user: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity Type**, select the type of user: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by identity subtype
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Identity subtype**, select the type of user: **All**, **ED**, **Local**, or **Cross-account**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity Subtype**, select the type of user: **All**, **ED**, **Local**, or **Cross Account**.
1. Select **Apply** to run your query and display the information you selected. Select **Reset filter** to discard your changes. ### Apply filters by identity state
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Identity state**, select the type of user: **All**, **Active**, or **Inactive**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity State**, select the type of user: **All**, **Active**, or **Inactive**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by identity filters
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Identity type**, select: **Risky** or **Inc. in PCI calculation only**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity Type**, select: **Risky** or **Incl. in PCI Calculation Only**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
### Apply filters by task type You can filter user details by type of user, user role, app, or service used, or by resource.
-1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
-1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Task type**, select the type of user: **All** or **High-risk tasks**.
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type**, select the type of user: **All** or **High Risk Tasks**.
1. Select **Apply** to run your query and display the information you selected.
- Select **Reset filter** to discard your changes.
+ Select **Reset Filter** to discard your changes.
## Export the results of your query
You can filter user details by type of user, user role, app, or service used, or
- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md). - To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md). - To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).-- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Concept Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-attributes.md
na Previously updated : 02/18/2019 Last updated : 02/25/2021
To view the schema and verify it, follow these steps.
1. Go to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 1. Sign in with your global administrator account. 1. On the left, select **modify permissions** and ensure that **Directory.ReadWrite.All** is *Consented*.
-1. Run the query `https://graph.microsoft.com/beta/serviceprincipals/?$filter=startswith(Displayname,'Active')`. This query returns a filtered list of service principals.
+1. Run the query `https://graph.microsoft.com/beta/serviceprincipals/?$filter=startswith(DisplayName, '{sync config name}')`. This query returns a filtered list of service principals. This can also be acquired via the App Registration node under Azure Active Directory.
1. Locate `"appDisplayName": "Active Directory to Azure Active Directory Provisioning"` and note the value for `"id"`. ``` "value": [
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
description: Use filter for devices in Conditional Access to enhance security po
Previously updated : 12/03/2021 Last updated : 02/28/2022
The following device attributes can be used with the filter for devices conditio
| physicalIds | Contains, NotContains | As an example, all Windows Autopilot devices store ZTDId (a unique value assigned to all imported Windows Autopilot devices) in the device physicalIds property. | (device.devicePhysicalIDs -contains "[ZTDId]:value") | | profileType | Equals, NotEquals | A valid profile type set for a device. Supported values are: RegisteredDevice (default), SecureVM (used for Windows VMs in Azure enabled with Azure AD sign in), Printer (used for printers), Shared (used for shared devices), IoT (used for IoT devices) | (device.profileType -notIn ["Printer", "Shared", "IoT"]) | | systemLabels | Contains, NotContains | List of labels applied to the device by the system. Some of the supported values are: AzureResource (used for Windows VMs in Azure enabled with Azure AD sign in), M365Managed (used for devices managed using Microsoft Managed Desktop), MultiUser (used for shared devices) | (device.systemLabels -contains "M365Managed") |
-| trustType | Equals, NotEquals | A valid registered state for devices. Supported values are: AzureAD (used for Azure AD joined devices), ServerAD (used for Hybrid Azure AD joined devices), Workplace (used for Azure AD registered devices) | (device.trustType -notIn 'ServerAD, Workplace') |
+| trustType | Equals, NotEquals | A valid registered state for devices. Supported values are: AzureAD (used for Azure AD joined devices), ServerAD (used for Hybrid Azure AD joined devices), Workplace (used for Azure AD registered devices) | (device.trustType -ne 'Workplace') |
| extensionAttribute1-15 | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith, Contains, NotContains, In, NotIn | extensionAttributes1-15 are attributes that customers can use for device objects. Customers can update any of the extensionAttributes1 through 15 with custom values and use them in the filter for devices condition in Conditional Access. Any string value can be used. | (device.extensionAttribute1 -eq 'SAW') | > [!NOTE]
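The rule strings shown in the example column can also be supplied when a Conditional Access policy is created through Microsoft Graph. The following C# sketch is illustrative only: the endpoint, the `conditions.devices.deviceFilter` payload shape, the policy name, and the token environment variable are assumptions that should be checked against the Microsoft Graph conditional access policy reference before use.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

class DeviceFilterPolicySketch
{
    static async Task Main()
    {
        // A Graph access token with Policy.ReadWrite.ConditionalAccess, acquired
        // through your usual auth library; read from an environment variable here.
        string accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        // Hypothetical policy: require a compliant device for all users and apps,
        // but exclude Azure AD registered (Workplace) devices with a device filter.
        var policy = new
        {
            displayName = "Example - filter for devices",
            state = "enabledForReportingButNotEnforced",
            conditions = new
            {
                users = new { includeUsers = new[] { "All" } },
                applications = new { includeApplications = new[] { "All" } },
                devices = new
                {
                    deviceFilter = new
                    {
                        mode = "exclude",
                        rule = "device.trustType -eq \"Workplace\""
                    }
                }
            },
            grantControls = new { @operator = "OR", builtInControls = new[] { "compliantDevice" } }
        };

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        HttpResponseMessage response = await client.PostAsJsonAsync(
            "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies", policy);

        Console.WriteLine(response.StatusCode);
    }
}
```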
active-directory Scenario Web Api Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-overview.md
This scenario, in which a protected web API calls other web APIs, builds on [Sce
## Specifics
-The app registration part that's related to API permissions is classical. The app configuration involves using the OAuth 2.0 On-Behalf-Of flow to exchange the JWT bearer token against a token for a downstream API. This token is added to the token cache, where it's available in the web API's controllers, and it can then acquire a token silently to call downstream APIs.
+The app registration part that's related to API permissions is classical. The app configuration involves using the OAuth 2.0 On-Behalf-Of flow to use the JWT bearer token for obtaining a second token for a downstream API. The second token in this case is added to the token cache, where it's available in the web API's controllers. This second token can be used to acquire an access token silently to call downstream APIs whenever required.
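As an illustration of this flow, the following minimal sketch shows a protected ASP.NET Core web API controller that uses the `ITokenAcquisition` service from Microsoft.Identity.Web to obtain the downstream token from the cache, or via On-Behalf-Of when needed. The controller name, downstream scope, and URL are placeholders invented for this example.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class TodoController : ControllerBase
{
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IHttpClientFactory _httpClientFactory;

    public TodoController(ITokenAcquisition tokenAcquisition, IHttpClientFactory httpClientFactory)
    {
        _tokenAcquisition = tokenAcquisition;
        _httpClientFactory = httpClientFactory;
    }

    [HttpGet]
    public async Task<IActionResult> GetAsync()
    {
        // The incoming JWT bearer token was already validated by the middleware.
        // This call returns a cached token for the downstream API, or performs the
        // On-Behalf-Of exchange if no suitable token is cached.
        string accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(
            new[] { "api://downstream-api/.default" }); // hypothetical scope

        HttpClient client = _httpClientFactory.CreateClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        string json = await client.GetStringAsync("https://downstream.example.com/api/items");
        return Ok(json);
    }
}
```

This sketch assumes the web API was configured with `.EnableTokenAcquisitionToCallDownstreamApi()` and a token cache, as described in the related app configuration article.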
## Next steps
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Instead of a client secret, you can provide a client certificate. The following
## Startup.cs
-Your web app will need to acquire a token for the downstream API. You specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApi(Configuration)`. This line exposes the `ITokenAcquisition` service that you can use in your controller and page actions. However, as you'll see in the following two options, it can be done more simply. You'll also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Startup.cs*:
+Your web app will need to acquire a token for the downstream API. You specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApp(Configuration)`. This line exposes the `ITokenAcquisition` service that you can use in your controller and page actions. However, as you'll see in the following two options, it can be done more simply. You'll also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Startup.cs*:
```csharp using Microsoft.Identity.Web;
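// Illustrative continuation (not part of the original article): a minimal sketch of
// how these calls are typically chained in ConfigureServices. The "AzureAd" section
// name and the "user.read" scope are assumptions used only for this example.
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
            // Exposes the ITokenAcquisition service so controllers and page
            // actions can request tokens for the downstream API.
            .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
            // A token cache implementation is required; in-memory is the simplest.
            .AddInMemoryTokenCaches();

        services.AddControllersWithViews();
    }
}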
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported | | Front-channel logout URL | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed, http://localhost fails <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters <br><br> Wildcards aren't supported | | Display name | Maximum length of 120 characters | Maximum length of 120 characters | Maximum length of 90 characters |
-| Tags | Individual tag size must be between 1 and 256 characters (inclusive). No whitespaces or duplicate tags allowed. | Individual tag size must be between 1 and 256 characters (inclusive). No whitespaces or duplicate tags allowed. | Individual tag size must be between 1 and 256 characters (inclusive). No whitespaces or duplicate tags allowed. |
+| Tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags |
\* There's a global limit of about 1000 items across all the collection properties on the app object.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## February 2022
+
++
+
+
+[1776632](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1776632&triage=true&fullScreen=false&_a=edit)
+
+### General Availability - France digital accessibility requirement
+
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** End User Experiences
+
+
+This change provides users who are signing in to Azure Active Directory on iOS, Android, and Web UI flavors information about the accessibility of Microsoft's online services via a link on the sign-in page. This ensures that the France digital accessibility compliance requirements are met. The change will only be available for French language experiences. [Learn more](https://www.microsoft.com/fr-fr/accessibility/accessibilite/accessibility-statement)
+
+
+
+
+[1424495](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1424495&triage=true&fullScreen=false&_a=edit)
+
+### General Availability - Downloadable access review history report
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results. [Learn more](../governance/access-reviews-downloadable-review-history.md)
+
+++++
+
+
+[1309010](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1309010&triage=true&fullScreen=false&_a=edit)
+
+### Public Preview of Identity Protection for Workload Identities
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We are also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
+
++
+
+
+[1213729](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1213729&triage=true&fullScreen=false&_a=edit)
+
+### Public Preview - Cross-tenant access settings for B2B collaboration
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** Collaboration
+**Clouds impacted:** China; Public (Microsoft 365, GCC); US Gov (GCC-H, DoD)
+
+
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multifactor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
+
++
+
+
+[1424498](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1424498&triage=true&fullScreen=false&_a=edit)
+
+### Public preview - Create Azure AD access reviews with multiple stages of reviewers
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+Use multi-stage reviews to create Azure AD access reviews in sequential stages, each with its own set of reviewers and configurations. Multi-stage reviews support scenarios such as independent groups of reviewers reaching quorum, escalations to other reviewers, and reducing burden by allowing later-stage reviewers to see a filtered-down list. For public preview, multi-stage reviews are only supported on reviews of groups and applications. [Learn more](../governance/create-access-review.md)
+
++
+
+
+[1775818](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1775818&triage=true&fullScreen=false&_a=edit)
+
+### New Federated Apps available in Azure AD Application gallery - February 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+
+In February 2022, we added the following 20 new applications in our App gallery with Federation support:
+
+[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/cirros-sl/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md), [Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
+
+You can also find the documentation for all the applications here: [https://aka.ms/AppsTutorial](https://aka.ms/AppsTutorial).
+
+To list your application in the Azure AD app gallery, read the details here: [https://aka.ms/AzureADAppRequest](https://aka.ms/AzureADAppRequest)
++
+
++
+
+
+[1242804](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1242804&triage=true&fullScreen=false&_a=edit)
+
+### Two new MDA detections in Identity Protection
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Identity Protection has added two new detections from Microsoft Defender for Cloud Apps (formerly MCAS). The Mass Access to Sensitive Files detection detects anomalous user activity, and the Unusual Addition of Credentials to an OAuth App detection detects suspicious service principal activity. [Learn more](../identity-protection/concept-identity-protection-risks.md)
+
++
+
+
+[1780796](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1780796&triage=true&fullScreen=false&_a=edit)
+
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+[BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
+[GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
+[Gong](../saas-apps/gong-provisioning-tutorial.md)
+[LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
+[ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+
+
+[1686037](https://identitydivision.visualstudio.com/IAM/IXR/_queries?id=1686037&triage=true&fullScreen=false&_a=edit)
+
+### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements
+
+**Type:** Changed feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+
+We have improved the Privileged Identity Management (PIM) role activation time for SharePoint Online. Now, when activating a role in PIM for SharePoint Online, you should be able to use your permissions right away in SharePoint Online. This change will roll out in stages, so you might not yet see these improvements in your organization. [Learn more](../privileged-identity-management/pim-how-to-activate-role.md)
+
+++
+
+
+ ## January 2022
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
The My Apps portal enables administrators and users to manage the applications u
You can keep the resources for future use, or if you're not going to continue to use the resources created in this tutorial, delete them with the following steps.
-## Delete the application
+### Delete the application
1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to delete. 1. In the **Manage** section of the left menu, select **Properties**. 1. At the top of the **Properties** pane, select **Delete**, and then select **Yes** to confirm you want to delete the application from your Azure AD tenant.
-## Delete the conditional access policy
+### Delete the conditional access policy
1. Select **Enterprise applications**. 1. Under **Security**, select **Conditional Access**. 1. Search for and select **MFA Pilot**. 1. Select **Delete** at the top of the pane.
-## Delete the group
+### Delete the group
1. Select **Azure Active Directory**, and then select **Groups**. 1. From the **Groups - All groups** page, search for and select the **MFA-Test-Group** group.
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
++
+ Title: What is Azure Active Directory recommendations (preview)? | Microsoft Docs
+description: Provides a general overview of Azure Active Directory recommendations.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
+
+ms.devlang: na
+
+ na
++ Last updated : 02/28/2022+++
+# Customer intent: As an Azure AD administrator, I want guidance to so that I can keep my Azure AD tenant in a healthy state.
+++
+# What is Azure Active Directory recommendations (preview)?
+
+This feature is supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Ideally, you want your Azure Active Directory (Azure AD) tenant to be in a secure and healthy state. However, trying to keep your knowledge regarding the management of the various components in your tenant up to date can become overwhelming.
+
+This is where Azure AD recommendations can help you.
+
+The Azure AD recommendations feature provides you with personalized insights and actionable guidance to:
+
+- Help you identify opportunities to implement best practices for Azure AD-related features.
+- Improve the state of your Azure AD tenant.
+
+This article gives you an overview of how you can use Azure AD recommendations.
+++
+## What it is
+
+[Azure Advisor](../../advisor/advisor-overview.md) is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, reliability (formerly called high availability), and security of your Azure resources.
+
+Azure AD recommendations:
+
+- Is the Azure AD specific implementation of Azure Advisor.
+- Supports you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state.
+
+## What is a recommendation object?
+
+Azure AD tracks the status of a recommendation in a related object. This object includes attributes that are used to characterize the recommendation and a body to store the actionable guidance.
++
+Each object is characterized by:
+
+- **Title** - A short summary of what the recommendation is about.
+
+- **Priority** - Possible values are: low, medium, high
+
+- **Status** - Possible values are: Active, Dismissed, Postponed, CompletedByUser, CompletedBySystem.
+
+ - A recommendation is marked as CompletedByUser if you mark the recommendation as complete.
+
+ - A recommendation is marked as CompletedBySystem if a recommendation that once applied is no longer applicable because you've taken the necessary steps.
+
+
+- **Impacted Resources** - A definition of the scope of a recommendation. Possible values are either a list of the impacted resources or **Tenant level**.
+
+- **Updated at** - The timestamp of the last status update.
++
+![Reporting](./media/overview-recommendations/recommendations-object.png)
+++
+The body of a recommendation object contains the actionable guidance:
+
+- **Description** - An explanation of what it is that Azure AD has detected and related background information.
+
+- **Value** - An explanation of why completing the recommendation will benefit you, and the value of the associated feature.
+
+- **Action Plan** - Detailed step-by-step instructions for implementing the recommendation.
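For readers who want to picture the shape of this data, here is a small, purely illustrative C# model of the attributes listed above. The type and property names are assumptions made for this sketch, not an official API contract.

```csharp
using System;
using System.Collections.Generic;

// Illustrative model only; mirrors the attributes described in this article.
public enum RecommendationStatus
{
    Active,
    Dismissed,
    Postponed,
    CompletedByUser,
    CompletedBySystem
}

public class Recommendation
{
    public string Title { get; set; }                   // Short summary of the recommendation
    public string Priority { get; set; }                 // "low", "medium", or "high"
    public RecommendationStatus Status { get; set; }
    public List<string> ImpactedResources { get; set; }  // Impacted resources, or "Tenant level"
    public DateTimeOffset UpdatedAt { get; set; }         // Timestamp of the last status update

    // Actionable guidance carried in the body of the recommendation
    public string Description { get; set; }
    public string Value { get; set; }
    public string ActionPlan { get; set; }
}
```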
+++
+## How it works
+
+On a daily basis, Azure AD analyzes the configuration of your tenant. During an analysis, Azure AD compares the data of the known recommendations with the actual configuration. If a recommendation is flagged as applicable to your tenant, the recommendation status and its corresponding resources are marked as active.
++
+In the recommendations or resource list, you can use the **Status** information to determine your action item.
+
+As an administrator, you should periodically review your tenant's recommendations and their associated resources. For each recommendation or resource, you can take one of the following actions:
+
+- **Dismiss**
+
+- **Mark complete**
+
+- **Postpone**
+
+- **Reactivate**
++
+### Dismiss
+
+If you don't like a recommendation, or if you have another reason for not applying it, you can dismiss it. In this case, Azure AD asks you for a reason for dismissing a recommendation.
+
+![Help us provide better recommendations](./media/overview-recommendations/provide-better-recommendations.png)
++
+### Mark as complete
+
+Use this state to indicate that you have:
+
+- Completed the recommendation.
+- Taken action for an individual resource.
+
+A recommendation or resource that has been marked as complete is again evaluated when Azure AD compares the available recommendations with your current configuration.
++
+### Postpone
+
+Postpone a recommendation or resource to address it in the future. The recommendation or resource is marked as Active again on the date to which you postponed it.
+
+### Reactivate
+If you accidentally dismissed, completed, or postponed a recommendation or resource, mark it as active again to keep it top of mind.
++
+## Common tasks
+
+### Enable recommendations
+
+To enable your Azure AD recommendations:
+
+1. Navigate to the **[Preview features](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/PreviewHub)** page.
+2. Set the **State** to **On**.
+
+ ![Enable Azure AD recommendations](./media/overview-recommendations/enable-azure-ad-recommendations.png)
+++
+### Manage recommendations
+
+To manage your Azure AD recommendations:
+
+1. Navigate to the [Azure AD overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page.
+
+2. On the Azure AD overview page, in the toolbar, click **Recommendations (Preview)**.
+
+ ![Manage Azure AD recommendations](./media/overview-recommendations/manage-azure-ad-recommendations.png)
++++
+### Update the status of a resource
+
+To update the status of a resource, right-click the resource to bring up the edit menu.
+++
+## Next steps
+
+* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md)
+* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Air Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/air-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Air'
+description: Learn how to configure single sign-on between Azure Active Directory and Air.
++++++++ Last updated : 02/14/2022++++
+# Tutorial: Azure AD SSO integration with Air
+
+In this tutorial, you'll learn how to integrate Air with Azure Active Directory (Azure AD). When you integrate Air with Azure AD, you can:
+
+* Control in Azure AD who has access to Air.
+* Enable your users to be automatically signed-in to Air with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Air single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Air supports **SP and IDP** initiated SSO.
+
+## Adding Air from the gallery
+
+To configure the integration of Air into Azure AD, you need to add Air from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Air** in the search box.
+1. Select **Air** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Air
+
+Configure and test Azure AD SSO with Air using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Air.
+
+To configure and test Azure AD SSO with Air, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Air SSO](#configure-air-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Air test user](#create-air-test-user)** - to have a counterpart of B.Simon in Air that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Air** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type the value:
+ `urn:amazon:cognito:sp:us-east-1_hFBg5izBk`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth.air.inc/saml2/idpresponse`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://api.air.inc/integrations/saml/login/<CustomerID>`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Sign-on URL. Contact [Air Client support team](mailto:dev@air.inc) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. (A sketch for inspecting this metadata follows these steps.)
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
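The copied metadata URL points to an XML document that describes Azure AD as the identity provider. A minimal sketch for inspecting it (Python with `requests`; the URL below is a placeholder for the value you copied, not a real endpoint):

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder: paste the App Federation Metadata Url you copied from the portal.
METADATA_URL = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"

ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.fromstring(requests.get(METADATA_URL, timeout=30).content)

# The entity ID Azure AD uses as the SAML issuer.
print("IdP entity ID:", root.attrib.get("entityID"))

# The sign-on endpoints Azure AD advertises.
for sso in root.findall(".//md:IDPSSODescriptor/md:SingleSignOnService", ns):
    print("SSO endpoint:", sso.get("Binding"), sso.get("Location"))

# Confirm at least one signing certificate is published.
certs = root.findall(".//md:IDPSSODescriptor/md:KeyDescriptor/ds:KeyInfo/ds:X509Data/ds:X509Certificate", ns)
print("Signing certificates found:", len(certs))
```

This is only a verification aid; Air reads the same metadata automatically once you paste the URL into its SAML settings (see **Configure Air SSO** below).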
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon. (A scripted alternative using Microsoft Graph is sketched after these steps.)
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
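The portal steps above can also be scripted. A minimal sketch using Microsoft Graph (assumptions: you already hold an access token with the `User.ReadWrite.All` permission, and `contoso.com` stands in for one of your verified domains):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder: acquire a Graph token, for example with MSAL or the Azure CLI
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# The same B.Simon test user the portal steps create.
user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "bsimon",
    "userPrincipalName": "B.Simon@contoso.com",  # replace with your own domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",  # placeholder
    },
}

resp = requests.post(f"{GRAPH}/users", headers=headers, json=user, timeout=30)
resp.raise_for_status()
print("Created user object id:", resp.json()["id"])
```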
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Air. (A Microsoft Graph alternative is sketched after these steps.)
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Air**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
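A Microsoft Graph sketch for the same assignment (assumptions: a token with `AppRoleAssignment.ReadWrite.All`, and placeholder object IDs for the user and the Air service principal, which you can look up under **Enterprise Applications**):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"                     # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

USER_ID = "<B.Simon-object-id>"              # placeholder: the test user's object ID
SP_ID = "<Air-service-principal-object-id>"  # placeholder: the enterprise application's object ID

assignment = {
    "principalId": USER_ID,
    "resourceId": SP_ID,
    # The all-zeros GUID requests the app's default access role.
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

resp = requests.post(
    f"{GRAPH}/servicePrincipals/{SP_ID}/appRoleAssignedTo",
    headers=headers, json=assignment, timeout=30,
)
resp.raise_for_status()
print("Assignment id:", resp.json()["id"])
```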
+## Configure Air SSO
+
+1. Log in to the Air website as an administrator.
+
+1. Click **Workspace** at the top-left corner.
+
+1. Go to the **Settings** -> **SECURITY & IDENTITY** tab and perform the following steps:
+
+    ![Screenshot for Air configuration](./media/air-tutorial/integration.png)
+
+    a. In the **Manage approved email domains** text box, add your organization's email domains to the approved domains list to allow users with these domains to authenticate using SAML SSO.
+
+    b. Copy the **Single sign-on URL** value and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+    c. In the **SAML metadata URL** text box, paste the **App Federation Metadata Url** value that you copied from the Azure portal.
+
+ d. Click **Enable SAML SSO**.
+
+### Create Air test user
+
+Log in to the Air website as an administrator.
+
+1. Click **Workspace** at the top-left corner.
+
+1. Go to the **Settings** -> **MEMBERS** tab and click **Add members**.
+
+1. Enter the email address and click **Invite**.
+
+ ![Screenshot for User creation](./media/air-tutorial/user-new.png)
++
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options. If a sign-in fails, a troubleshooting sketch for decoding the SAML response follows these options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Air Sign-on URL where you can initiate the login flow.
+
+* Go to Air Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Air for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Air tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Air for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
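If a test sign-in fails, it can help to inspect the SAML response Azure AD posts to the Reply URL. A minimal troubleshooting sketch (assumption: you captured the base64-encoded `SAMLResponse` form value from your browser's developer tools during the sign-in attempt):

```python
import base64

# Placeholder: the SAMLResponse value captured from the POST to the Reply URL.
saml_response = "<base64-SAMLResponse>"

# The POST binding uses plain base64, so this prints the raw SAML response XML.
xml = base64.b64decode(saml_response).decode("utf-8", errors="replace")
print(xml)
```

In the decoded XML, check that the `Audience` value matches the Identifier and the `Destination` matches the Reply URL you entered in the **Basic SAML Configuration** section.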
++
+## Next steps
+
+Once you configure Air, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
++
active-directory Cloudmore Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudmore-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Cloudmore | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Cloudmore'
description: Learn how to configure single sign-on between Azure Active Directory and Cloudmore.
Previously updated : 10/23/2019 Last updated : 02/23/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Cloudmore
+# Tutorial: Azure AD SSO integration with Cloudmore
In this tutorial, you'll learn how to integrate Cloudmore with Azure Active Directory (Azure AD). When you integrate Cloudmore with Azure AD, you can:
In this tutorial, you'll learn how to integrate Cloudmore with Azure Active Dire
* Enable your users to be automatically signed-in to Cloudmore with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cloudmore supports **SP and IDP** initiated SSO
+* Cloudmore supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Cloudmore from the gallery
+## Add Cloudmore from the gallery
To configure the integration of Cloudmore into Azure AD, you need to add Cloudmore from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Cloudmore** in the search box. 1. Select **Cloudmore** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Cloudmore
+## Configure and test Azure AD SSO for Cloudmore
Configure and test Azure AD SSO with Cloudmore using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cloudmore.
-To configure and test Azure AD SSO with Cloudmore, complete the following building blocks:
+To configure and test Azure AD SSO with Cloudmore, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Cloudmore, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Cloudmore** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Cloudmore** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
- ![Cloudmore Domain and URLs single sign-on information](common/preintegrated.png)
- 1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://www.cloudmore.com` 1. Click **Save**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Cloudmore**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Cloudmore. Work with [Clou
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Cloudmore Sign-on URL where you can initiate the login flow.
-When you click the Cloudmore tile in the Access Panel, you should be automatically signed in to the Cloudmore for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Go to Cloudmore Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Cloudmore for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cloudmore tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Cloudmore for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Cloudmore with Azure AD](https://aad.portal.azure.com/)
+Once you configure Cloudmore, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cloudsign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudsign-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with CloudSign | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with CloudSign'
description: Learn how to configure single sign-on between Azure Active Directory and CloudSign.
Previously updated : 07/15/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with CloudSign
+# Tutorial: Azure AD SSO integration with CloudSign
In this tutorial, you'll learn how to integrate CloudSign with Azure Active Directory (Azure AD). When you integrate CloudSign with Azure AD, you can:
In this tutorial, you'll learn how to integrate CloudSign with Azure Active Dire
* Enable your users to be automatically signed-in to CloudSign with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* CloudSign supports **SP** initiated SSO
-
-* Once you configure CloudSign you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* CloudSign supports **SP** initiated SSO.
-## Adding CloudSign from the gallery
+## Add CloudSign from the gallery
To configure the integration of CloudSign into Azure AD, you need to add CloudSign from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **CloudSign** in the search box. 1. Select **CloudSign** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for CloudSign Configure and test Azure AD SSO with CloudSign using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CloudSign.
-To configure and test Azure AD SSO with CloudSign, complete the following building blocks:
+To configure and test Azure AD SSO with CloudSign, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with CloudSign, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **CloudSign** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **CloudSign** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type the URL:
- `https://www.cloudsign.jp/login`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a value using the following pattern:
`urn:amazon:cognito:sp:ap-northeast-1_<CUSTOM_ID>`
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://cloudsign-<CUSTOM_ID>.auth.ap-northeast-1.amazoncognito.com/saml2/idpresponse`
+ c. In the **Sign on URL** text box, type the URL:
+ `https://www.cloudsign.jp/login`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Identifier. Contact [CloudSign Client support team](mailto:contact@cloudsign.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [CloudSign Client support team](mailto:contact@cloudsign.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **CloudSign**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in CloudSign. Work with [Clou
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the CloudSign tile in the Access Panel, you should be automatically signed in to the CloudSign for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in Azure portal. This will redirect to CloudSign Sign-on URL where you can initiate the login flow.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Go to CloudSign Sign-on URL directly and initiate the login flow from there.
-- [Try CloudSign with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the CloudSign tile in the My Apps, this will redirect to CloudSign Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect CloudSign with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure CloudSign, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Ezrentout Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ezrentout-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with EZRentOut | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with EZRentOut'
description: Learn how to configure single sign-on between Azure Active Directory and EZRentOut.
Previously updated : 12/04/2019 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with EZRentOut
+# Tutorial: Azure AD SSO integration with EZRentOut
In this tutorial, you'll learn how to integrate EZRentOut with Azure Active Directory (Azure AD). When you integrate EZRentOut with Azure AD, you can:
In this tutorial, you'll learn how to integrate EZRentOut with Azure Active Dire
* Enable your users to be automatically signed-in to EZRentOut with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* EZRentOut supports **SP** initiated SSO
-* EZRentOut supports **Just In Time** user provisioning
+* EZRentOut supports **SP** initiated SSO.
+* EZRentOut supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding EZRentOut from the gallery
+## Add EZRentOut from the gallery
To configure the integration of EZRentOut into Azure AD, you need to add EZRentOut from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **EZRentOut** in the search box. 1. Select **EZRentOut** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for EZRentOut
+## Configure and test Azure AD SSO for EZRentOut
Configure and test Azure AD SSO with EZRentOut using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EZRentOut.
-To configure and test Azure AD SSO with EZRentOut, complete the following building blocks:
+To configure and test Azure AD SSO with EZRentOut, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure EZRentOut SSO](#configure-ezrentout-sso)** - to configure the single sign-on settings on application side.
- * **[Create EZRentOut test user](#create-ezrentout-test-user)** - to have a counterpart of B.Simon in EZRentOut that is linked to the Azure AD representation of user.
+ 1. **[Create EZRentOut test user](#create-ezrentout-test-user)** - to have a counterpart of B.Simon in EZRentOut that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **EZRentOut** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **EZRentOut** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.ezrentout.com/users/sign_in`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **EZRentOut**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in EZRentOut. EZRentOut suppor
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the EZRentOut tile in the Access Panel, you should be automatically signed in to the EZRentOut for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to EZRentOut Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to EZRentOut Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the EZRentOut tile in the My Apps, this will redirect to EZRentOut Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try EZRentOut with Azure AD](https://aad.portal.azure.com/)
+Once you configure EZRentOut, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Foko Retail Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foko-retail-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Foko Retail | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Foko Retail'
description: Learn how to configure single sign-on between Azure Active Directory and Foko Retail.
Previously updated : 11/18/2019 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Foko Retail
+# Tutorial: Azure AD SSO integration with Foko Retail
In this tutorial, you'll learn how to integrate Foko Retail with Azure Active Directory (Azure AD). When you integrate Foko Retail with Azure AD, you can:
In this tutorial, you'll learn how to integrate Foko Retail with Azure Active Di
* Enable your users to be automatically signed-in to Foko Retail with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Foko Retail supports **SP** initiated SSO
+* Foko Retail supports **SP** initiated SSO.
-## Adding Foko Retail from the gallery
+## Add Foko Retail from the gallery
To configure the integration of Foko Retail into Azure AD, you need to add Foko Retail from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Foko Retail** in the search box. 1. Select **Foko Retail** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Foko Retail
+## Configure and test Azure AD SSO for Foko Retail
Configure and test Azure AD SSO with Foko Retail using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Foko Retail.
-To configure and test Azure AD SSO with Foko Retail, complete the following building blocks:
+To configure and test Azure AD SSO with Foko Retail, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Foko Retail SSO](#configure-foko-retail-sso)** - to configure the single sign-on settings on application side.
- * **[Create Foko Retail test user](#create-foko-retail-test-user)** - to have a counterpart of B.Simon in Foko Retail that is linked to the Azure AD representation of user.
+ 1. **[Create Foko Retail test user](#create-foko-retail-test-user)** - to have a counterpart of B.Simon in Foko Retail that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Foko Retail** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Foko Retail** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://api.foko.io/sso/{$CUSTOM_ID}/login`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://api.foko.io/sso/{$CUSTOM_ID}/metadata.xml`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://api.foko.io/sso/{$CUSTOM_ID}/login`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Foko Retail Client support team](mailto:support@fokoretail.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Foko Retail Client support team](mailto:support@fokoretail.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Foko Retail**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Foko Retail. Work with [Fo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Foko Retail tile in the Access Panel, you should be automatically signed in to the Foko Retail for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to Foko Retail Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to Foko Retail Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Foko Retail tile in the My Apps, this will redirect to Foko Retail Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Foko Retail with Azure AD](https://aad.portal.azure.com/)
+Once you configure Foko Retail, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Raketa Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/raketa-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Raketa | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Raketa'
description: Learn how to configure single sign-on between Azure Active Directory and Raketa.
Previously updated : 07/28/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Raketa
+# Tutorial: Azure AD SSO integration with Raketa
In this tutorial, you'll learn how to integrate Raketa with Azure Active Directory (Azure AD). When you integrate Raketa with Azure AD, you can:
In this tutorial, you'll learn how to integrate Raketa with Azure Active Directo
* Enable your users to be automatically signed-in to Raketa with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Raketa supports **SP** initiated SSO.
-* Once you configure Raketa you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
-## Adding Raketa from the gallery
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Raketa from the gallery
To configure the integration of Raketa into Azure AD, you need to add Raketa from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service [1]. ![rkt_1](./media/raketa-tutorial/azure-active-directory.png)
To configure the integration of Raketa into Azure AD, you need to add Raketa fro
1. Select **Raketa** from results panel [7] and then click on **Add** button [8].
- ![rkt_3](./media/raketa-tutorial/add-btn.png)
+ ![rkt_3](./media/raketa-tutorial/results.png)
-
-## Configure and test Azure AD single sign-on for Raketa
+## Configure and test Azure AD SSO for Raketa
Configure and test Azure AD SSO with Raketa using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Raketa.
-To configure and test Azure AD SSO with Raketa, complete the following building blocks:
+To configure and test Azure AD SSO with Raketa, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Raketa, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Raketa** application integration page, find the **Manage** section and select **single sign-on** [9].
+1. In the Azure portal, on the **Raketa** application integration page, find the **Manage** section and select **single sign-on** [9].
- ![rkt_4](./media/raketa-tutorial/manage-sso.png)
+ ![rkt_4](./media/raketa-tutorial/integration.png)
1. On the **Select a single sign-on method** page [9], select **SAML** [10].
- ![rkt_5](./media/raketa-tutorial/saml.png)
+ ![rkt_5](./media/raketa-tutorial/method.png)
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** [11] to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** [11] to edit the settings.
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
1. In the **Identifier (Entity ID)** [12] and **Sign on URL** [14] text boxes, type the URL: `https://raketa.travel/`. 1. In the **Reply URL** text box [13], type a URL using the following pattern: `https://raketa.travel/sso/acs?clientId=<CLIENT_ID>`.
- ![rkt_6](./media/raketa-tutorial/enter-urls.png)
+ ![rkt_6](./media/raketa-tutorial/values.png)
> [!NOTE] > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [Raketa Client support team](mailto:help@raketa.travel) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Logout URL [18] – The web-page URL, which is used to redirect the users after logout.
- ![rkt_7](./media/raketa-tutorial/copy-urls.png)
-
+ ![rkt_7](./media/raketa-tutorial/authentication.png)
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
![rkt_9](./media/raketa-tutorial/create-user.png) - ### Assign the Azure AD test user In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Raketa.
In this section, you create a user called B.Simon in Raketa. Work with [Raketa s
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Raketa tile in the Access Panel, you should be automatically signed in to the Raketa for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in Azure portal. This will redirect to Raketa Sign-on URL where you can initiate the login flow.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Go to Raketa Sign-on URL directly and initiate the login flow from there.
-- [Try Raketa with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Raketa tile in the My Apps, this will redirect to Raketa Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Raketa with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Raketa, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Terraform Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terraform-enterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Terraform Enterprise | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Terraform Enterprise'
description: Learn how to configure single sign-on between Azure Active Directory and Terraform Enterprise.
Previously updated : 04/05/2021 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Terraform Enterprise
+# Tutorial: Azure AD SSO integration with Terraform Enterprise
In this tutorial, you'll learn how to integrate Terraform Enterprise with Azure Active Directory (Azure AD). When you integrate Terraform Enterprise with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<TFE HOSTNAME>/session`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<TFE HOSTNAME>/users/saml/metadata`
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<TFE HOSTNAME>/users/saml/auth`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<TFE HOSTNAME>/`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Terraform Enterprise Client support team](https://support.hashicorp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Terraform Enterprise Client support team](https://support.hashicorp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Terraform Enterprise SSO
-To configure single sign-on on **Terraform Enterprise** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Terraform Enterprise support team](https://support.hashicorp.com). They set this setting to have the SAML SSO connection set properly on both sides.
+Navigate to `https://<TFE_HOSTNAME>/app/admin/saml` and perform the following steps on the **SAML Settings** page (a verification sketch follows these steps):
+
+![Screenshot: Terraform Enterprise SAML Settings](./media/terraform-enterprise-tutorial/sso-aad-saml-tfe-saml-settings.png)
+
+a. Enable the **Enable SAML single sign-on** check box.
+
+b. In the **Single Sign-On URL** textbox, paste the **Login URL** value which you copied from the Azure portal.
+
+c. In the **Single Log-out URL** textbox, paste the **Login URL** value which you copied from the Azure portal.
+
+d. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste its content into the **IDP CERTIFICATE** textbox.
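Once SAML single sign-on is enabled in Terraform Enterprise, you can optionally confirm the values end to end. A minimal verification sketch (assumption: TFE serves its service-provider metadata at the Identifier URL used in the **Basic SAML Configuration** section; `<TFE_HOSTNAME>` is a placeholder):

```python
import requests
import xml.etree.ElementTree as ET

TFE_HOSTNAME = "<TFE_HOSTNAME>"  # placeholder

# The same URL you entered as the Identifier (Entity ID) in Azure AD.
metadata_url = f"https://{TFE_HOSTNAME}/users/saml/metadata"
root = ET.fromstring(requests.get(metadata_url, timeout=30).content)

ns = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
print("SP entity ID:", root.attrib.get("entityID"))

# The assertion consumer service should match the Reply URL you entered in Azure AD.
for acs in root.findall(".//md:SPSSODescriptor/md:AssertionConsumerService", ns):
    print("ACS endpoint:", acs.get("Binding"), acs.get("Location"))
```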
### Create Terraform Enterprise test user
active-directory Ultipro Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ultipro-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with UltiPro | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and UltiPro.
+ Title: 'Tutorial: Azure AD SSO integration with UKG Pro'
+description: Learn how to configure single sign-on between Azure Active Directory and UKG Pro.
Previously updated : 12/24/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory integration with UltiPro
+# Tutorial: Azure AD SSO integration with UKG Pro
-In this tutorial, you'll learn how to integrate UltiPro with Azure Active Directory (Azure AD). When you integrate UltiPro with Azure AD, you can:
+In this tutorial, you'll learn how to integrate UKG Pro with Azure Active Directory (Azure AD). When you integrate UKG Pro with Azure AD, you can:
-* Control in Azure AD who has access to UltiPro.
-* Enable your users to be automatically signed-in to UltiPro with their Azure AD accounts.
+* Control in Azure AD who has access to UKG Pro.
+* Enable your users to be automatically signed-in to UKG Pro with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
In this tutorial, you'll learn how to integrate UltiPro with Azure Active Direct
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* UltiPro single sign-on (SSO) enabled subscription.
+* UKG Pro single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* UltiPro supports **SP** initiated SSO.
+* UKG Pro supports **SP** initiated SSO.
-## Adding UltiPro from the gallery
+## Adding UKG Pro from the gallery
-To configure the integration of UltiPro into Azure AD, you need to add UltiPro from the gallery to your list of managed SaaS apps.
+To configure the integration of UKG Pro into Azure AD, you need to add UKG Pro from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **UltiPro** in the search box.
-1. Select **UltiPro** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **UKG Pro** in the search box.
+1. Select **UKG Pro** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for UltiPro
+## Configure and test Azure AD SSO for UKG Pro
-Configure and test Azure AD SSO with UltiPro using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UltiPro.
+Configure and test Azure AD SSO with UKG Pro using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UKG Pro.
-To configure and test Azure AD SSO with UltiPro, perform the following steps:
+To configure and test Azure AD SSO with UKG Pro, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure UltiPro SSO](#configure-ultipro-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create UltiPro test user](#create-ultipro-test-user)** - to have a counterpart of B.Simon in UltiPro that is linked to the Azure AD representation of user.
+2. **[Configure UKG Pro SSO](#configure-ukg-pro-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create UKG Pro test user](#create-ukg-pro-test-user)** - to have a counterpart of B.Simon in UKG Pro that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **UltiPro** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **UKG Pro** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
| `https://<companyname>.ultipro.ca/<instancename>` | > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [UltiPro Client support team](https://www.ultimatesoftware.com/ContactUs) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [UKG Pro Client support team](https://www.ultimatesoftware.com/ContactUs) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-6. On the **Set up UltiPro** section, copy the appropriate URL(s) as per your requirement.
+6. On the **Set up UKG Pro** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UltiPro.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UKG Pro.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **UltiPro**.
+1. In the applications list, select **UKG Pro**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure UltiPro SSO
+## Configure UKG Pro SSO
-To configure single sign-on on **UltiPro** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [UltiPro support team](https://www.ultimatesoftware.com/ContactUs). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **UKG Pro** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [UKG Pro support team](https://www.ultimatesoftware.com/ContactUs). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create UltiPro test user
+### Create UKG Pro test user
-In this section, you create a user called Britta Simon in UltiPro. Work with [UltiPro support team](https://www.ultimatesoftware.com/ContactUs) to add the users in the UltiPro platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in UKG Pro. Work with [UKG Pro support team](https://www.ultimatesoftware.com/ContactUs) to add the users in the UKG Pro platform. Users must be created and activated before you use single sign-on.
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to UltiPro Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to the UKG Pro Sign-on URL where you can initiate the login flow.
-* Go to UltiPro Sign-on URL directly and initiate the login flow from there.
+* Go to the UKG Pro Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the UltiPro tile in the My Apps, this will redirect to UltiPro Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the UKG Pro tile in My Apps, you'll be redirected to the UKG Pro Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure the UltiPro you can enforce session controls, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session controls extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure UKG Pro, you can enforce session controls, which protect your organization's sensitive data against exfiltration and infiltration in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Zendesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Zendesk | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Zendesk'
description: Learn how to configure single sign-on between Azure Active Directory and Zendesk.
Previously updated : 12/28/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Zendesk
+# Tutorial: Azure AD SSO integration with Zendesk
In this tutorial, you'll learn how to integrate Zendesk with Azure Active Directory (Azure AD). When you integrate Zendesk with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Zendesk supports **SP** initiated SSO
-* Zendesk supports [**Automated** user provisioning](zendesk-provisioning-tutorial.md)
+* Zendesk supports **SP** initiated SSO.
+* Zendesk supports [**Automated** user provisioning](zendesk-provisioning-tutorial.md).
## Adding Zendesk from the gallery
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you want to setup Zendesk manually, open a new web browser window and sign into your Zendesk company site as an administrator and perform the following steps:
-1. In the **Zendesk Admin Center**, click on **Security settings** in the **Security** tab.
+1. In the **Zendesk Admin Center**, go to the **Account -> Security -> Single sign-on** page and click **Configure** in the **SAML** section.
![Screenshot shows the Zendesk Admin Center with Security settings selected.](./media/zendesk-tutorial/settings.png "Security")
-1. Go to the **Single sign-on** page and click on **Edit** in the **SAML**.
-
- ![Screenshot shows the Single sign-on page with Edit selected.](./media/zendesk-tutorial/saml-sso.png "Security")
-
-1. Perform the following steps in the **SSO** page.
+1. Perform the following steps in the **Single sign-on** page.
![Single sign-on](./media/zendesk-tutorial/saml-configuration.png "Single sign-on")
- a. In **SAML SSO URL** textbox, paste the value of **Login URL** which you have copied from Azure portal.
+   a. Select the **Enabled** option.
+
+ b. In **SAML SSO URL** textbox, paste the value of **Login URL** which you have copied from Azure portal.
- b. In **Certificate Fingerprint** textbox, paste the **Thumbprint** value of certificate which you have copied from Azure portal.
+ c. In **Certificate fingerprint** textbox, paste the **Thumbprint** value of certificate which you have copied from Azure portal.
- c. In **Remote Logout URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal.
+ d. In **Remote logout URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal.
- d. Click **Save**.
+ e. Click **Save**.
### Create Zendesk test user
advisor Advisor Alerts Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-portal.md
Title: Create Azure Advisor alerts for new recommendations using Azure portal description: Create Azure Advisor alerts for new recommendation-+ Last updated 09/09/2019
advisor Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-overview.md
Title: Introduction to Azure Advisor description: Use Azure Advisor to optimize your Azure deployments.-+ Last updated 09/27/2020
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Use `az keyvault secret set` to store the standard domain user credential as a s
az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" --value "$Domain\\$DomainUsername:$DomainUserPassword" ```
+> [!NOTE]
+> Use the Fully Qualified Domain Name for the Domain rather than the Partially Qualified Domain Name that may be used on internal networks.
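For example, the stored value might look like the following. This is only a hedged sketch: the domain, user name, and password are placeholders, and the `az keyvault secret set` command is the same one shown above.

```azurecli-interactive
# Placeholders only - substitute your own values.
# Use the fully qualified domain name (for example, contoso.com),
# not the short NetBIOS name (CONTOSO) that may work on internal networks.
Domain="contoso.com"
DomainUsername="gmsa-standard-user"
DomainUserPassword="<standard-domain-user-password>"

az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" --value "$Domain\\$DomainUsername:$DomainUserPassword"
```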
++ ## Optional: Use a custom VNET with custom DNS Your domain controller needs to be configured through DNS so it is reachable by the AKS cluster. You can configure your network and DNS outside of your AKS cluster to allow your cluster to access the domain controller. Alternatively, you can configure a custom VNET with a custom DNS using Azure CNI with your AKS cluster to provide access to your domain controller. For more details, see [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-cni].
credspec:
Sid: $GMSA_SID ``` ++ Create a *gmsa-role.yaml* with the following. ```yml
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth
```xml <quota calls="number" bandwidth="kilobytes" renewal-period="seconds">
- <api name="API name" id="API id" calls="number" renewal-period="seconds">
- <operation name="operation name" id="operation id" calls="number" renewal-period="seconds" />
+ <api name="API name" id="API id" calls="number">
+ <operation name="operation name" id="operation id" calls="number" />
</api> </quota> ```
The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth
| name | The name of the API or operation for which the quota applies. | Yes | N/A | | bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A | | calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The time period in seconds after which the quota resets. When it's set to `0` the period is set to infinite. | Yes | N/A |
+| renewal-period | The time period in seconds after which the quota resets. When it's set to `0` the period is set to infinite.| Yes | N/A |
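For example, a hedged sketch of a policy that allows each subscription 10,000 calls and 40,000 KB of bandwidth per week (604,800 seconds), with tighter limits on one API and operation (the API and operation names and IDs are placeholders):

```xml
<quota calls="10000" bandwidth="40000" renewal-period="604800">
    <api name="echo-api" id="echo-api-id" calls="5000">
        <operation name="create-resource" id="create-resource-id" calls="1000" />
    </api>
</quota>
```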
### Usage
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
API developers face challenges when working with Resource Manager templates:
* API developers often work with the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification) and might not be familiar with Resource Manager schemas. Authoring templates manually might be error-prone.
- A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#Creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
+ A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/APIM_ARMTemplate/README.md#creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
-* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#extractor) in the resource kit can help generate templates by extracting configurations from their API Management instances.
+* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/APIM_ARMTemplate/README.md#creator) in the resource kit can help generate templates by extracting configurations from their API Management instances.
## Workflow
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443.
To operate properly, each self-hosted gateway needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance:
-* The public IP address of the API Management instance in its primary location
-* The hostname of the instance's management endpoint: `<apim-service-name>.management.azure-api.net`
-* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
-* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
-* Public IP addresses from the Storage [service tag](../virtual-network/service-tags-overview.md) corresponding to the primary location of the API Management instance
+- [Gateway v2 requirements](#gateway-v2-requirements)
+- [Gateway v1 requirements](#gateway-v1-requirements)
> [!IMPORTANT] > * DNS hostnames must be resolvable to IP addresses and the corresponding IP addresses must be reachable.
If integrated with your API Management instance, also enable outbound connectivi
* [Application Insights](api-management-howto-app-insights.md) * [External cache](api-management-howto-cache-external.md)
+#### Gateway v2 requirements
+
+The self-hosted gateway v2 requires the following:
+
+* The public IP address of the API Management instance in its primary location
+* The hostname of the instance's configuration endpoint: `<apim-service-name>.configuration.azure-api.net`
+
+Additionally, customers that use the API inspector or quotas in their policies have to ensure that the following dependencies are also accessible:
+
+* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
+* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
+* Public IP addresses from the Storage [service tag](../virtual-network/service-tags-overview.md) corresponding to the primary location of the API Management instance
+
+#### Gateway v1 requirements
+
+The self-hosted gateway v1 requires the following:
+
+* The public IP address of the API Management instance in its primary location
+* The hostname of the instance's management endpoint: `<apim-service-name>.management.azure-api.net`
+* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
+* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
+* Public IP addresses from the Storage [service tag](../virtual-network/service-tags-overview.md) corresponding to the primary location of the API Management instance
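As a quick sanity check from the network that hosts the gateway, you can verify that the hostnames listed above resolve and that port 443 is reachable. This is only a hedged sketch - the service name is a placeholder, and you would substitute the endpoints that apply to your gateway version (for example, the configuration endpoint for v2 or the management endpoint for v1) along with the storage hostnames reported for your instance:

```bash
# Placeholder service name; replace contoso-apim with your API Management instance name.
for host in contoso-apim.configuration.azure-api.net \
            contoso-apim.management.azure-api.net; do
  # Check DNS resolution, then confirm an HTTPS connection on port 443 can be opened.
  if nslookup "$host" > /dev/null && curl --silent --output /dev/null --connect-timeout 5 "https://$host"; then
    echo "$host reachable"
  else
    echo "$host NOT reachable"
  fi
done
```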
+ ### Connectivity failures When connectivity to Azure is lost, the self-hosted gateway is unable to receive configuration updates, report its status, or upload telemetry.
api-management Validate Service Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-service-updates.md
+
+ Title: Validate Azure API Management service updates
+description: Apply the Azure safe deployment approach with your Azure API Management instances to validate service updates and avoid disruptions to your production environments.
+++ Last updated : 02/25/2022+++
+# Validate service updates to avoid disruption to your production API Management instances
+
+*"One of the value propositions of the cloud is that itΓÇÖs continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable." - Mark Russinovich, CTO, Azure*
+
+Microsoft uses a safe deployment practices framework to thoroughly test, monitor, and validate service updates, and then deploy them to Azure regions using a phased approach. Even so, service updates that reach your API Management instances could introduce unanticipated risks to your production workloads and disrupt your API consumers. Learn how you can apply our safe deployment approach to reduce risks by validating the updates before they reach your production API Management environments.
+
+## What is the Azure safe deployment practices framework?
+
+Azure deploys updates for a given service in a series of pre-production and production steps using a [safe deployment practices (SDP) framework](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/). This framework is shown in simplified form in the following image:
++
+Deployment phases include:
+
+* **Development and test** - Azure engineering teams iterate on and validate updates for their services in development and test environments, with strict quality gates.
+
+ Careful monitoring, validation, and extensive testing for regressions during these stages reduce the risk that software changes will negatively affect customers' Azure workloads in production.
+
+* **Production** - Production-ready updates are then introduced to customers' Azure services in a phased production rollout pipeline:
+
+ * **Canary regions** receive updates first. These regions, known formally as **Early Updates Access Programs (EUAP)** regions, are full, production-level environments where scenarios can be validated at scale by Azure engineering teams and by invited customers. Currently, Azure canary regions are **East US 2 EUAP** and **Central US EUAP**.
+
+ > [!NOTE]
+ > While the EUAP regions are production-ready, capacity may be limited, and services can be disrupted from time to time by disaster recovery drills and other testing by Azure engineering teams.
+
+ * A **pilot** region supported for production use with an SLA receives the updates next. Currently, the pilot region is **West Central US**.
+
+ * After an observation period in the pilot region, the service updates are gradually introduced to remaining regions, broadening customers' exposure.
+
+## How do I safely deploy updates to my API Management instances?
+
+As an Azure customer, you're not able to control when to apply service updates to your API Management instances - updates are applied automatically. However, to minimize risk, you can use a strategy to deploy your noncritical instances to regions that receive updates before the regions running your production instances.
+
+* The instance that receives updates first is effectively your canary deployment.
+
+ Use this instance to monitor for any issues caused by the updates against the baseline production instances. With monitoring, identify and mitigate potential regressions before your production services are affected.
+
+ > [!IMPORTANT]
+ > If your canary instance experiences issues associated with the update process, please open an Azure support request as soon as possible.
+
+* After you validate the canary deployment, you have greater confidence in updates that come later to your production instances.
+
+See [example strategies](#canary-deployment-strategies) to create and use a canary deployment of API Management, later in this article.
+
+## Know when your instances are receiving updates
+
+As a first step, ensure that you know about service updates that are expected or are in progress.
+
+* API Management updates are announced on the [API Management GitHub repo](https://github.com/Azure/API-Management/releases). We recommend that you subscribe to receive notifications from this repository to know when update rollouts begin.
+
+* Monitor service updates that are taking place in your API Management instance by using the Azure [Activity log](../azure-monitor/essentials/activity-log.md). The "Scheduled maintenance" event is emitted when an update begins.
+
+ :::image type="content" source="media/validate-service-updates/scheduled-maintenance.png" alt-text="Scheduled maintenance event in Activity log":::
+
+    To receive notifications automatically, [set up an alert](../azure-monitor/alerts/alerts-activity-log.md) on the Activity log, as sketched at the end of this section.
+
+* Updates roll out to regions in the following phases: Azure EUAP regions, followed by West Central US, followed by remaining regions in several later phases. The sequence of regions updated in the later deployment phases differs from service to service. You can expect at least 24 hours between each phase of the production rollout.
+
+* Within a region, API Management instances in the Premium tier receive updates several hours later than those in other service tiers.
+
+> [!TIP]
+> If your API Management instance is deployed to multiple locations (regions), the timing of updates is determined by the instance's **Primary** location.
+
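To automate the Activity log notification mentioned above, you could create an Activity log alert with the Azure CLI. The following is only a hedged sketch: the alert name, resource group, and action group are placeholders, and the condition value (`category=ServiceHealth`) is an assumption - confirm the category and properties of the "Scheduled maintenance" event in your own Activity log before relying on it.

```azurecli-interactive
# Hedged sketch - all names are placeholders, and the condition is an assumption
# to be verified against the actual "Scheduled maintenance" event in your Activity log.
az monitor activity-log alert create \
  --name apim-maintenance-alert \
  --resource-group myResourceGroup \
  --condition category=ServiceHealth \
  --action-group /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup
```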
+## Canary deployment strategies
+
+Here are example strategies to use an API Management instance as a canary deployment that receives updates earlier than your production instances.
+
+* **Deploy in EUAP region** - If you have access to an Azure EUAP region, you can use an instance there to validate updates as soon as they're released to the production pipeline. Learn about the [Azure region access request process](/troubleshoot/azure/general/region-access-request-process).
+
+ > [!NOTE]
+ > Because of capacity constraints in EUAP regions, you might not be able to scale API Management instances as needed.
+
+* **Deploy in pilot region** - Use an instance in the West Central US to simulate your production environment, or use it in production for noncritical API traffic. While this region receives updates after the EUAP regions, a deployment there is more likely to identify regressions that are specific to your service configuration.
+
+* **Deploy duplicate instances in a region** - If your production workload is a Premium tier instance in a specific region, consider deploying a similarly configured instance in a lower tier that receives updates earlier. For example, configure a pre-production instance in the Developer tier to validate updates.
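As a hedged sketch of the last two options, the following Azure CLI command creates a Developer-tier instance in the pilot region; the instance name, resource group, and publisher details are placeholders:

```azurecli-interactive
# Placeholders only - substitute your own names, resource group, and publisher details.
az apim create \
  --name contoso-apim-canary \
  --resource-group myResourceGroup \
  --location westcentralus \
  --sku-name Developer \
  --publisher-email admin@contoso.com \
  --publisher-name Contoso
```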
+
+## Next steps
+
+* Learn [how to monitor](api-management-howto-use-azure-monitor.md) your API Management instance.
+* Learn about other options to [observe](observability.md) your API Management instance.
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Title: Tutorial - Web app accesses Microsoft Graph as the app| Azure
-description: In this tutorial, you learn how to access data in Microsoft Graph by using managed identities.
+ Title: Tutorial - .NET Web app accesses Microsoft Graph as the app| Azure
+description: In this tutorial, you learn how to access data in Microsoft Graph from a .NET web app by using managed identities.
Last updated 01/21/2022
+ms.devlang: csharp
#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
-# Tutorial: Access Microsoft Graph from a secured app as the app
+# Tutorial: Access Microsoft Graph from a secured .NET app as the app
-Learn how to access Microsoft Graph from a web app running on Azure App Service.
--
-You want to call Microsoft Graph for the web app. A safe way to give your web app access to data is to use a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md). A managed identity from Azure Active Directory allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. You don't have to worry about managing secrets or app credentials.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Create a system-assigned managed identity on a web app.
-> * Add Microsoft Graph API permissions to a managed identity.
-> * Call Microsoft Graph from a web app by using managed identities.
--
-## Prerequisites
-
-* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](scenario-secure-app-authentication-app-service.md).
-
-## Enable managed identity on app
-
-If you create and publish your web app through Visual Studio, the managed identity was enabled on your app for you. In your app service, select **Identity** in the left pane and then select **System assigned**. Verify that **Status** is set to **On**. If not, select **Save** and then select **Yes** to enable the system-assigned managed identity. When the managed identity is enabled, the status is set to **On** and the object ID is available.
-
-Take note of the **Object ID** value, which you'll need in the next step.
--
-## Grant access to Microsoft Graph
-
-When accessing the Microsoft Graph, the managed identity needs to have proper permissions for the operation it wants to perform. Currently, there's no option to assign such permissions through the Azure portal. The following script will add the requested Microsoft Graph API permissions to the managed identity service principal object.
-
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-# Install the module. (You need admin on the machine.)
-# Install-Module AzureAD.
-
-# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
-$TenantID="<tenant-id>"
-$resourceGroup = "securewebappresourcegroup"
-$webAppName="SecureWebApp-20201102125811"
-
-# Get the ID of the managed identity for the web app.
-$spID = (Get-AzWebApp -ResourceGroupName $resourceGroup -Name $webAppName).identity.principalid
-
-# Check the Microsoft Graph documentation for the permission you need for the operation.
-$PermissionName = "User.Read.All"
-
-Connect-AzureAD -TenantId $TenantID
-
-# Get the service principal for Microsoft Graph.
-# First result should be AppId 00000003-0000-0000-c000-000000000000
-$GraphServicePrincipal = Get-AzureADServicePrincipal -SearchString "Microsoft Graph" | Select-Object -first 1
-
-# Assign permissions to the managed identity service principal.
-$AppRole = $GraphServicePrincipal.AppRoles | `
-Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}
-
-New-AzureAdServiceAppRoleAssignment -ObjectId $spID -PrincipalId $spID `
--ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az login
-
-webAppName="SecureWebApp-20201106120003"
-
-spId=$(az resource list -n $webAppName --query [*].identity.principalId --out tsv)
-
-graphResourceId=$(az ad sp list --display-name "Microsoft Graph" --query [0].objectId --out tsv)
-
-appRoleId=$(az ad sp list --display-name "Microsoft Graph" --query "[0].appRoles[?value=='User.Read.All' && contains(allowedMemberTypes, 'Application')].id" --output tsv)
-
-uri=https://graph.microsoft.com/v1.0/servicePrincipals/$spId/appRoleAssignments
-
-body="{'principalId':'$spId','resourceId':'$graphResourceId','appRoleId':'$appRoleId'}"
-
-az rest --method post --uri $uri --body $body --headers "Content-Type=application/json"
-```
---
-After executing the script, you can verify in the [Azure portal](https://portal.azure.com) that the requested API permissions are assigned to the managed identity.
-
-Go to **Azure Active Directory**, and then select **Enterprise applications**. This pane displays all the service principals in your tenant. In **Managed Identities**, select the service principal for the managed identity.
-
-If you're following this tutorial, there are two service principals with the same display name (SecureWebApp2020094113531, for example). The service principal that has a **Homepage URL** represents the web app in your tenant. The service principal that appears in **Managed Identities** should *not* have a **Homepage URL** listed and the **Object ID** should match the object ID value of the managed identity in the [previous step](#enable-managed-identity-on-app).
-
-Select the service principal for the managed identity.
--
-In **Overview**, select **Permissions**, and you'll see the added permissions for Microsoft Graph.
- ## Call Microsoft Graph
-# [C#](#tab/programming-language-csharp)
The [ChainedTokenCredential](/dotnet/api/azure.identity.chainedtokencredential), [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential), and [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) classes are used to get a token credential for your code to authorize requests to Microsoft Graph. Create an instance of the [ChainedTokenCredential](/dotnet/api/azure.identity.chainedtokencredential) class, which uses the managed identity in the App Service environment or the development environment variables to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which gets the users in the group.
-To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
+To see this code as part of a sample application, see the following:
+* [Sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
### Install the Microsoft.Identity.Web.MicrosoftGraph client library package
Run the install commands.
Install-Package Microsoft.Identity.Web.MicrosoftGraph ```
-### Example
+### .NET Example
```csharp using System;
public async Task OnGetAsync()
} ```
-# [Node.js](#tab/programming-language-nodejs)
-
-The `DefaultAzureCredential` class from [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Azure Storage. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which gets the users in the group.
-
-To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
-
-### Example
-
-```nodejs
-const graphHelper = require('../utils/graphHelper');
-const { DefaultAzureCredential } = require("@azure/identity");
-
-exports.getUsersPage = async(req, res, next) => {
-
- const defaultAzureCredential = new DefaultAzureCredential();
-
- try {
- const tokenResponse = await defaultAzureCredential.getToken("https://graph.microsoft.com/.default");
-
- const graphClient = graphHelper.getAuthenticatedClient(tokenResponse.token);
-
- const users = await graphClient
- .api('/users')
- .get();
-
- res.render('users', { user: req.session.user, users: users });
- } catch (error) {
- next(error);
- }
-}
-```
-
-To query Microsoft Graph, the sample uses the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/3-WebApp-graphapi-managed-identity/controllers/graphController.js) of the full sample:
-
-```nodejs
-getAuthenticatedClient = (accessToken) => {
- // Initialize Graph client
- const client = graph.Client.init({
- // Use the provided access token to authenticate requests
- authProvider: (done) => {
- done(null, accessToken);
- }
- });
-
- return client;
-}
-```
--
-## Clean up resources
-
-If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](scenario-secure-app-clean-up-resources.md).
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
->
-> * Create a system-assigned managed identity on a web app.
-> * Add Microsoft Graph API permissions to a managed identity.
-> * Call Microsoft Graph from a web app by using managed identities.
-Learn how to connect a [.NET Core app](tutorial-dotnetcore-sqldb-app.md), [Python app](tutorial-python-postgresql-app.md), [Java app](tutorial-java-spring-cosmosdb.md), or [Node.js app](tutorial-nodejs-mongodb-app.md) to a database.
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
Your web app now has the required permissions and also adds Microsoft Graph's cl
# [C#](#tab/programming-language-csharp) Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-identity-web/), the web app gets an access token for authentication with Microsoft Graph. In version 1.2.0 and later, the Microsoft.Identity.Web library integrates with and can run alongside the App Service authentication/authorization module. Microsoft.Identity.Web detects that the web app is hosted in App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed along to authenticated requests with the Microsoft Graph API.
-To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+To see this code as part of a sample application, see the following:
+* [Sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
> [!NOTE] > The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](tutorial-auth-aad.md#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Title: Tutorial - Web app accesses storage by using managed identities | Azure
-description: In this tutorial, you learn how to access Azure Storage for an app by using managed identities.
+ Title: "Tutorial - .NET Web app accesses storage by using managed identities | Azure"
+description: In this tutorial, you learn how to access Azure Storage for a .NET app by using managed identities.
Previously updated : 11/02/2021 Last updated : 02/16/2022
+ms.devlang: csharp, azurecli
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
-# Tutorial: Access Azure Storage from a web app
+# Tutorial: Access Azure services from a .NET web app
-Learn how to access Azure Storage for a web app (not a signed-in user) running on Azure App Service by using managed identities.
-
-You want to add access to the Azure data plane (Azure Storage, Azure SQL Database, Azure Key Vault, or other services) from your web app. You could use a shared key, but then you have to worry about operational security of who can create, deploy, and manage the secret. It's also possible that the key could be checked into GitHub, which hackers know how to scan for. A safer way to give your web app access to data is to use [managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-
-A managed identity from Azure Active Directory (Azure AD) allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. People don't have to worry about managing secrets or app credentials.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Create a system-assigned managed identity on a web app.
-> * Create a storage account and an Azure Blob Storage container.
-> * Access storage from a web app by using managed identities.
--
-## Prerequisites
-
-* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](scenario-secure-app-authentication-app-service.md).
-
-## Enable managed identity on an app
-
-If you create and publish your web app through Visual Studio, the managed identity was enabled on your app for you. In your app service, select **Identity** in the left pane, and then select **System assigned**. Verify that the **Status** is set to **On**. If not, select **Save** and then select **Yes** to enable the system-assigned managed identity. When the managed identity is enabled, the status is set to **On** and the object ID is available.
--
-This step creates a new object ID, different than the app ID created in the **Authentication/Authorization** pane. Copy the object ID of the system-assigned managed identity. You'll need it later.
-
-## Create a storage account and Blob Storage container
-
-Now you're ready to create a storage account and Blob Storage container.
-
-Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group or use an existing resource group. This article shows how to create a new resource group.
-
-A general-purpose v2 storage account provides access to all of the Azure Storage
-
-Blobs in Azure Storage are organized into containers. Before you can upload a blob later in this tutorial, you must first create a container.
-
-# [Portal](#tab/azure-portal)
-
-To create a general-purpose v2 storage account in the Azure portal, follow these steps.
-
-1. On the Azure portal menu, select **All services**. In the list of resources, enter **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
-
-1. In the **Storage Accounts** window that appears, select **Add**.
-
-1. Select the subscription in which to create the storage account.
-
-1. Under the **Resource group** field, select the resource group that contains your web app from the drop-down menu.
-
-1. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
-
-1. Select a location for your storage account, or use the default location.
-
-1. Leave these fields set to their default values:
-
- |Field|Value|
- |--|--|
- |Deployment model|Resource Manager|
- |Performance|Standard|
- |Account kind|StorageV2 (general-purpose v2)|
- |Replication|Read-access geo-redundant storage (RA-GRS)|
- |Access tier|Hot|
-
-1. Select **Review + Create** to review your storage account settings and create the account.
-
-1. Select **Create**.
-
-To create a Blob Storage container in Azure Storage, follow these steps.
-
-1. Go to your new storage account in the Azure portal.
-
-1. In the left menu for the storage account, scroll to the **Data storage** section, and then select **Containers**.
-
-1. Select the **+ Container** button.
-
-1. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
-
-1. Set the level of public access to the container. The default level is **Private (no anonymous access)**.
-
-1. Select **OK** to create the container.
-
-# [PowerShell](#tab/azure-powershell)
-
-To create a general-purpose v2 storage account and Blob Storage container, run the following script. Specify the name of the resource group that contains your web app. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
-
-Specify the location for your storage account. To see a list of locations valid for your subscription, run ```Get-AzLocation | select Location```. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
-
-Remember to replace placeholder values in angle brackets with your own values.
-
-```powershell
-Connect-AzAccount
-
-$resourceGroup = "securewebappresourcegroup"
-$location = "<location>"
-$storageName="securewebappstorage"
-$containerName = "securewebappblobcontainer"
-
-$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
- -Name $storageName `
- -Location $location `
- -SkuName Standard_RAGRS `
- -Kind StorageV2
-
-$ctx = $storageAccount.Context
-
-New-AzStorageContainer -Name $containerName -Context $ctx -Permission blob
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To create a general-purpose v2 storage account and Blob Storage container, run the following script. Specify the name of the resource group that contains your web app. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
-
-Specify the location for your storage account. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
-
-The following example uses your Azure AD account to authorize the operation to create the container. Before you create the container, assign the Storage Blob Data Contributor role to yourself. Even if you're the account owner, you need explicit permissions to perform data operations against the storage account.
-
-Remember to replace placeholder values in angle brackets with your own values.
-
-```azurecli-interactive
-az login
-
-az storage account create \
- --name securewebappstorage \
- --resource-group securewebappresourcegroup \
- --location <location> \
- --sku Standard_ZRS \
- --encryption-services blob
-
-storageId=$(az storage account show -n securewebappstorage -g securewebappresourcegroup --query id --out tsv)
-
-az ad signed-in-user show --query objectId -o tsv | az role assignment create \
- --role "Storage Blob Data Contributor" \
- --assignee @- \
- --scope $storageId
-
-az storage container create \
- --account-name securewebappstorage \
- --name securewebappblobcontainer \
- --auth-mode login
-```
---
-## Grant access to the storage account
-
-You need to grant your web app access to the storage account before you can create, read, or delete blobs. In a previous step, you configured the web app running on App Service with a managed identity. Using Azure RBAC, you can give the managed identity access to another resource, just like any security principal. The Storage Blob Data Contributor role gives the web app (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data.
-
-# [Portal](#tab/azure-portal)
-
-In the [Azure portal](https://portal.azure.com), go into your storage account to grant your web app access. Select **Access control (IAM)** in the left pane, and then select **Role assignments**. You'll see a list of who has access to the storage account. Now you want to add a role assignment to a robot, the app service that needs access to the storage account. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-
-Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-Your web app now has access to your storage account.
-
-# [PowerShell](#tab/azure-powershell)
-
-Run the following script to assign your web app (represented by a system-assigned managed identity) the Storage Blob Data Contributor role on your storage account.
-
-```powershell
-$resourceGroup = "securewebappresourcegroup"
-$webAppName="SecureWebApp20201102125811"
-$storageName="securewebappstorage"
-
-$spID = (Get-AzWebApp -ResourceGroupName $resourceGroup -Name $webAppName).identity.principalid
-$storageId= (Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageName).Id
-New-AzRoleAssignment -ObjectId $spID -RoleDefinitionName "Storage Blob Data Contributor" -Scope $storageId
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-Run the following script to assign your web app (represented by a system-assigned managed identity) the Storage Blob Data Contributor role on your storage account.
-
-```azurecli-interactive
-spID=$(az resource list -n SecureWebApp20201102125811 --query [*].identity.principalId --out tsv)
-
-storageId=$(az storage account show -n securewebappstorage -g securewebappresourcegroup --query id --out tsv)
-
-az role assignment create --assignee $spID --role 'Storage Blob Data Contributor' --scope $storageId
-```
+## Access Blob Storage
-
-## Access Blob Storage
-# [C#](#tab/programming-language-csharp)
The [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class is used to get a token credential for your code to authorize requests to Azure Storage. Create an instance of the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class, which uses the managed identity to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which uploads a new blob. To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/1-WebApp-storage-managed-identity).
Install the [Blob Storage NuGet package](https://www.nuget.org/packages/Azure.St
#### .NET Core command-line
-Open a command line, and switch to the directory that contains your project file.
+1. Open a command line, and switch to the directory that contains your project file.
-Run the install commands.
+1. Run the install commands.
-```dotnetcli
-dotnet add package Azure.Storage.Blobs
-
-dotnet add package Azure.Identity
-```
+ ```dotnetcli
+ dotnet add package Azure.Storage.Blobs
+
+ dotnet add package Azure.Identity
+ ```
#### Package Manager Console
-Open the project or solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
+1. Open the project or solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
-Run the install commands.
-```powershell
-Install-Package Azure.Storage.Blobs
+1. Run the install commands.
+ ```powershell
+ Install-Package Azure.Storage.Blobs
+
+ Install-Package Azure.Identity
+ ```
-Install-Package Azure.Identity
-```
-
-### Example
+## .NET example
```csharp using System;
static public async Task UploadBlob(string accountName, string containerName, st
} ```
-# [Node.js](#tab/programming-language-nodejs)
-The `DefaultAzureCredential` class from [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Azure Storage. The `BlobServiceClient` class from [@azure/storage-blob](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) package is used to upload a new blob to storage. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the blob service client. The following code example gets the authenticated token credential and uses it to create a service client object, which uploads a new blob.
-
-To see this code as part of a sample application, see *StorageHelper.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/1-WebApp-storage-managed-identity).
-
-### Example
-
-```nodejs
-const { DefaultAzureCredential } = require("@azure/identity");
-const { BlobServiceClient } = require("@azure/storage-blob");
-const defaultAzureCredential = new DefaultAzureCredential();
-
-// Some code omitted for brevity.
-
-async function uploadBlob(accountName, containerName, blobName, blobContents) {
- const blobServiceClient = new BlobServiceClient(
- `https://${accountName}.blob.core.windows.net`,
- defaultAzureCredential
- );
-
- const containerClient = blobServiceClient.getContainerClient(containerName);
-
- try {
- await containerClient.createIfNotExists();
- const blockBlobClient = containerClient.getBlockBlobClient(blobName);
- const uploadBlobResponse = await blockBlobClient.upload(blobContents, blobContents.length);
- console.log(`Upload block blob ${blobName} successfully`, uploadBlobResponse.requestId);
- } catch (error) {
- console.log(error);
- }
-}
-```
--
-## Clean up resources
-
-If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](scenario-secure-app-clean-up-resources.md).
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
->
-> * Create a system-assigned managed identity.
-> * Create a storage account and Blob Storage container.
-> * Access storage from a web app by using managed identities.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
-
-> [!div class="nextstepaction"]
-> [App Service accesses Microsoft Graph on behalf of the user](scenario-secure-app-access-microsoft-graph-as-user.md)
-> [!div class="nextstepaction"]
-> [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md)
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
+
+ Title: Tutorial - JavaScript Web app accesses Microsoft Graph as the app| Azure
+description: In this tutorial, you learn how to access data in Microsoft Graph from a JavaScript web app by using managed identities.
+++++++ Last updated : 01/21/2022++
+ms.devlang: javascript
+
+#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
++
+# Tutorial: Access Microsoft Graph from a secured JavaScript app as the app
++
+## Call Microsoft Graph
+
+The `DefaultAzureCredential` class from the [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Microsoft Graph. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which gets the users in the group.
+
+To see this code as part of a sample application, see the following: * [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
+
+```nodejs
+const graphHelper = require('../utils/graphHelper');
+const { DefaultAzureCredential } = require("@azure/identity");
+
+exports.getUsersPage = async(req, res, next) => {
+
+ const defaultAzureCredential = new DefaultAzureCredential();
+
+ try {
+ const tokenResponse = await defaultAzureCredential.getToken("https://graph.microsoft.com/.default");
+
+ const graphClient = graphHelper.getAuthenticatedClient(tokenResponse.token);
+
+ const users = await graphClient
+ .api('/users')
+ .get();
+
+ res.render('users', { user: req.session.user, users: users });
+ } catch (error) {
+ next(error);
+ }
+}
+```
+
+To query Microsoft Graph, the sample uses the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/3-WebApp-graphapi-managed-identity/controllers/graphController.js) of the full sample:
+
+```nodejs
+getAuthenticatedClient = (accessToken) => {
+ // Initialize Graph client
+ const client = graph.Client.init({
+ // Use the provided access token to authenticate requests
+ authProvider: (done) => {
+ done(null, accessToken);
+ }
+ });
+
+ return client;
+}
+```
+++
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
+
+ Title: Tutorial - JavaScript Web app accesses storage by using managed identities | Azure
+description: In this tutorial, you learn how to access Azure Storage for a JavaScript app by using managed identities.
++++++ Last updated : 02/16/2022++
+ms.devlang: javascript, azurecli
+
+#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
++
+# Tutorial: Access Azure services from a JavaScript web app
++
+## Access Blob Storage
+The `DefaultAzureCredential` class from [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Azure Storage. The `BlobServiceClient` class from [@azure/storage-blob](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) package is used to upload a new blob to storage. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the blob service client. The following code example gets the authenticated token credential and uses it to create a service client object, which uploads a new blob.
+
+To see this code as part of a sample application, see *StorageHelper.js* in the following:
+* [Sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/1-WebApp-storage-managed-identity).
+
+## JavaScript example
+
+```nodejs
+const { DefaultAzureCredential } = require("@azure/identity");
+const { BlobServiceClient } = require("@azure/storage-blob");
+const defaultAzureCredential = new DefaultAzureCredential();
+
+// Some code omitted for brevity.
+
+async function uploadBlob(accountName, containerName, blobName, blobContents) {
+ const blobServiceClient = new BlobServiceClient(
+ `https://${accountName}.blob.core.windows.net`,
+ defaultAzureCredential
+ );
+
+ const containerClient = blobServiceClient.getContainerClient(containerName);
+
+ try {
+ await containerClient.createIfNotExists();
+ const blockBlobClient = containerClient.getBlockBlobClient(blobName);
+ const uploadBlobResponse = await blockBlobClient.upload(blobContents, blobContents.length);
+ console.log(`Upload block blob ${blobName} successfully`, uploadBlobResponse.requestId);
+ } catch (error) {
+ console.log(error);
+ }
+}
+```
++
app-service Tutorial Connect Msi Key Vault Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-php.md
+
+ Title: 'Tutorial: PHP connect to Azure services securely with Key Vault'
+description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a PHP web app
+ms.devlang: csharp, azurecli
+ Last updated : 10/26/2021+++++
+# Tutorial: Secure Cognitive Service connection from PHP App Service using Key Vault
+++
+## Configure PHP app
+
+Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name.
+
+```azurecli-interactive
+# Clone and prepare sample application
+git clone https://github.com/Azure-Samples/app-service-language-detector.git
+cd app-service-language-detector/php
+zip default.zip index.php
+
+# Save app name as variable for convenience
+appName=<app-name>
+
+az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region
+az webapp create --resource-group $groupName --plan $appName --name $appName
+az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip
+```
+
+## Configure secrets as app settings
+
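A hedged sketch of what this could look like: the app setting name, key vault name, and secret name below are placeholders, and the value uses the App Service Key Vault reference syntax so the app reads the secret from Key Vault instead of storing it directly:

```azurecli-interactive
# Placeholders only - the setting name, vault name, and secret name are illustrative.
az webapp config appsettings set --resource-group $groupName --name $appName \
  --settings "CS_ACCOUNT_KEY=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
```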
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
Title: 'Tutorial: Connect to Azure services securely with Key Vault'
-description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively.
+ Title: 'Tutorial: .NET connect to Azure services securely with Key Vault'
+description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a .NET web app.
+ms.devlang: csharp, azurecli
Last updated 10/26/2021
-# Tutorial: Secure Cognitive Service connection from App Service using Key Vault
+# Tutorial: Secure Cognitive Service connection from .NET App Service using Key Vault
-[Azure App Service](overview.md) can use [managed identities](overview-managed-identity.md) to connect to back-end services without a connection string, which eliminates connection secrets to manage and keeps your back-end connectivity secure in a production environment. For back-end services that don't support managed identities and still requires connection secrets, you can use Key Vault to manage connection secrets. This tutorial uses Cognitive Services as an example to show you how it's done in practice. When you're finished, you have an app that makes programmatic calls to Cognitive Services, without storing any connection secrets inside App Service.
-> [!TIP]
-> Azure Cognitive Services do [support authentication through managed identities](../cognitive-services/authentication.md#authorize-access-to-managed-identities), but this tutorial uses the [subscription key authentication](../cognitive-services/authentication.md#authenticate-with-a-single-service-subscription-key) to demonstrate how you could connect to an Azure service that doesn't support managed identities from App Services.
-![Architecture diagram for tutorial scenario.](./media/tutorial-connect-msi-key-vault/architecture.png)
+## Configure .NET app
-With this architecture:
+Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name.
-- Connectivity to Key Vault is secured by managed identities
-- App Service accesses the secrets using [Key Vault references](app-service-key-vault-references.md) as app settings.
-- Access to the key vault is restricted to the app. App contributors, such as administrators, may have complete control of the App Service resources, and at the same time have no access to the Key Vault secrets.
-- If your application code already accesses connection secrets with app settings, no change is required.
-
-What you will learn:
-
-> [!div class="checklist"]
-> * Enable managed identities
-> * Use managed identities to connect to Key Vault
-> * Use Key Vault references
-> * Access Cognitive Services
-
-## Prerequisites
-
-Prepare your environment for the Azure CLI.
--
-## Create app with connectivity to Cognitive Services
-
-1. Create a resource group to contain all of your resources:
-
- ```azurecli-interactive
- # Save resource group name as variable for convenience
- groupName=myKVResourceGroup
- region=westeurope
-
- az group create --name $groupName --location $region
- ```
-
-1. Create a Cognitive Services resource. Replace *\<cs-resource-name>* with a unique name of your choice.
-
- ```azurecli-interactive
- # Save resource name as variable for convenience.
- csResourceName=<cs-resource-name>
-
- az cognitiveservices account create --resource-group $groupName --name $csResourceName --location $region --kind TextAnalytics --sku F0 --custom-domain $csResourceName
- ```
-
- > [!NOTE]
- > `--sku F0` creates a free tier Cognitive Services resource. Each subscription is limited to a quota of one free-tier `TextAnalytics` resource. If you're already over the quota, use `--sku S` instead.
-
-1. Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name.
-
- ### [.NET 5](#tab/dotnet)
-
- ```azurecli-interactive
- # Save app name as variable for convenience
- appName=<app-name>
-
- # Clone sample application
- git clone https://github.com/Azure-Samples/app-service-language-detector.git
- cd app-service-language-detector/dotnet
-
- az webapp up --sku F1 --resource-group $groupName --name $appName --plan $appName --location $region
- ```
-
- ### [PHP](#tab/php)
-
- ```azurecli-interactive
- # Clone and prepare sample application
- git clone https://github.com/Azure-Samples/app-service-language-detector.git
- cd app-service-language-detector/php
- zip default.zip index.php
-
- # Save app name as variable for convenience
- appName=<app-name>
-
- az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region
- az webapp create --resource-group $groupName --plan $appName --name $appName
- az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip
- ```
-
- --
-
-1. Configure the Cognitive Services secrets as app settings `CS_ACCOUNT_NAME` and `CS_ACCOUNT_KEY`.
-
- ```azurecli-interactive
- # Get subscription key for Cognitive Services resource
- csKey1=$(az cognitiveservices account keys list --resource-group $groupName --name $csResourceName --query key1 --output tsv)
-
- az webapp config appsettings set --resource-group $groupName --name $appName --settings CS_ACCOUNT_NAME="$csResourceName" CS_ACCOUNT_KEY="$csKey1"
- ````
-
-1. In the browser, navigate to your deploy app at `<app-name>.azurewebsites.net` and try out the language detector with strings in various languages.
-
- ![Screenshot that shows deployed language detector app in App Service.](./media/tutorial-connect-msi-key-vault/deployed-app.png)
-
- If you look at the application code, you may notice the debug output for the detection results in the same font color as the background. You can see it by trying to highlight the white space directly below the result.
-
-## Secure back-end connectivity
-
-At the moment, connection secrets are stored as app settings in your App Service app. This approach is already securing connection secrets from your application codebase. However, any contributor who can manage your app can also see the app settings. In this step, you move the connection secrets to a key vault, and lock down access so that only you can manage it and only the App Service app can read it using its managed identity.
-
-1. Create a key vault. Replace *\<vault-name>* with a unique name.
-
- ```azurecli-interactive
- # Save app name as variable for convenience
- vaultName=<vault-name>
-
- az keyvault create --resource-group $groupName --name $vaultName --location $region --sku standard --enable-rbac-authorization
- ```
-
- The `--enable-rbac-authorization` parameter [sets Azure role-based access control (RBAC) as the permission model](../key-vault/general/rbac-guide.md#using-azure-rbac-secret-key-and-certificate-permissions-with-key-vault). This setting by default invalidates all access policies permissions.
-
-1. Give yourself the *Key Vault Secrets Officer* RBAC role for the vault.
-
- ```azurecli-interactive
- vaultResourceId=$(az keyvault show --name $vaultName --query id --output tsv)
- myId=$(az ad signed-in-user show --query objectId --output tsv)
- az role assignment create --role "Key Vault Secrets Officer" --assignee-object-id $myId --assignee-principal-type User --scope $vaultResourceId
- ```
-
-1. Enable the system-assigned managed identity for your app, and give it the *Key Vault Secrets User* RBAC role for the vault.
-
- ```azurecli-interactive
- az webapp identity assign --resource-group $groupName --name $appName --scope $vaultResourceId --role "Key Vault Secrets User"
- ```
-
-1. Add the Cognitive Services resource name and subscription key as secrets to the vault, and save their IDs as environment variables for the next step.
-
- ```azurecli-interactive
- csResourceKVUri=$(az keyvault secret set --vault-name $vaultName --name csresource --value $csResourceName --query id --output tsv)
- csKeyKVUri=$(az keyvault secret set --vault-name $vaultName --name cskey --value $csKey1 --query id --output tsv)
- ```
-
-1. Previously, you set the secrets as app settings `CS_ACCOUNT_NAME` and `CS_ACCOUNT_KEY` in your app. Now, set them as [key vault references](app-service-key-vault-references.md) instead.
-
- ```azurecli-interactive
- az webapp config appsettings set --resource-group $groupName --name $appName --settings CS_ACCOUNT_NAME="@Microsoft.KeyVault(SecretUri=$csResourceKVUri)" CS_ACCOUNT_KEY="@Microsoft.KeyVault(SecretUri=$csKeyKVUri)"
- ```
-
-1. In the browser, navigate to `<app-name>.azurewebsites.net` again. If you get detection results back, then you're connecting to the Cognitive Services endpoint with key vault references.
-
-Congratulations, your app is now connecting to Cognitive Services using secrets kept in your key vault, without any changes to your application code.
-
-## Clean up resources
+```azurecli-interactive
+# Save app name as variable for convenience
+appName=<app-name>
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+# Clone sample application
+git clone https://github.com/Azure-Samples/app-service-language-detector.git
+cd app-service-language-detector/dotnet
-```azurecli-interactive
-az group delete --name $groupName
+az webapp up --sku F1 --resource-group $groupName --name $appName --plan $appName --location $region
```
-This command may take a minute to run.
-
-## Next steps
+## Configure secrets as app settings
-- [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
-- [Integrate your app with an Azure virtual network](overview-vnet-integration.md)
-- [App Service networking features](networking-features.md)
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Title: 'Tutorial: Access data with managed identity'
-description: Learn how to make database connectivity more secure by using a managed identity, and also how to apply it to other Azure services.
+description: Secure database connectivity with a managed identity from a .NET web app, and learn how to apply it to other Azure services.
ms.devlang: csharp Previously updated : 01/27/2022 Last updated : 02/16/2022
-# Tutorial: Connect to SQL Database from App Service without secrets using a managed identity
+# Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity
[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you'll add managed identity to the sample web app you built in one of the following tutorials:
app-service Tutorial Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md
Title: 'Securely connect to Azure resources'
description: Your app service may need to connect to other Azure services such as a database, storage, or another app. This overview recommends the more secure method for connecting. Previously updated : 01/26/2022 Last updated : 02/16/2022 # Securely connect to Azure services and databases from Azure App Service
Your app service may need to connect to other Azure services such as a database,
|Connection method|When to use| |--|--|
-|[Direct connection from App Service managed identity](#connect-to-azure-services-with-managed-identity)|Dependent service [supports managed identity](../active-directory/managed-identities-azure-resources/managed-identities-status.md)<br><br>* Best for enterprise-level security<br>* Connection to dependent service is secured with managed identity<br>* Large team or automated connection string and secret management<br>* Don't manage credentials manually.<br>* Credentials aren't accessible to you.|
-|[Connect using Key Vault secrets from App Service managed identity](#connect-to-key-vault-with-managed-identity)|Dependent service doesn't support managed identity<br><br>* Best for enterprise-level security<br>* Connection includes non-Azure services such as GitHub, Twitter, Facebook, Google<br>* Large team or automated connection string and secret management<br>* Don't manage credentials manually.<br>* Credentials aren't accessible to you.<br>* Manage connection information with environment variables.|
-|[Connect with app settings](#connect-with-app-settings)|* Best for small team or individual owner of Azure resources.<br>* Stage 1 of multi-stage migration to Azure<br>* Temporary or proof-of-concept applications<br>* Manually manage connection information with environment variables|
+|[Direct connection from App Service managed identity](#connect-to-azure-services-with-managed-identity)|Dependent service [supports managed identity](../active-directory/managed-identities-azure-resources/managed-identities-status.md)<br><br>* Best for enterprise-level security.<br>* Connection to dependent service is secured with managed identity.<br>* Large team or automated connection string and secret management.<br>* Don't manage credentials manually.<br>* Credentials aren't accessible to you.<br>* An Azure Active Directory identity is required to access services such as Microsoft Graph or Azure management SDKs.|
+|[Connect using Key Vault secrets from App Service managed identity](#connect-to-key-vault-with-managed-identity)|Dependent service doesn't support managed identity.<br><br>* Best for enterprise-level security.<br>* Connection includes non-Azure services such as GitHub, Twitter, Facebook, and Google.<br>* Large team or automated connection string and secret management.<br>* Don't manage credentials manually.<br>* Credentials aren't accessible to you.<br>* Manage connection information with environment variables.|
+|[Connect with app settings](#connect-with-app-settings)|* Best for small team or individual owner of Azure resources.<br>* Stage 1 of multi-stage migration to Azure.<br>* Temporary or proof-of-concept applications.<br>* Manually manage connection information with environment variables.|
## Connect to Azure services with managed identity
attestation Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-portal.md
description: In this quickstart, you'll learn how to set up and configure an att
-+ Last updated 08/31/2020
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-template.md
Title: Create an Azure Attestation certificate by using Azure Resource Manager t
description: Learn how to create an Azure Attestation certificate by using Azure Resource Manager template. -+
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-webhooks.md
param
[object] $WebhookData )
-if ($WebhookData.RequestBody) {
- $names = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
+write-output "start"
+write-output ("object type: {0}" -f $WebhookData.gettype())
+write-output $WebhookData
+#write-warning (Test-Json -Json $WebhookData)
+$Payload = $WebhookData | ConvertFrom-Json
+write-output "`n`n"
+write-output $Payload.WebhookName
+write-output $Payload.RequestBody
+write-output $Payload.RequestHeader
+write-output "end"
+
+if ($Payload.RequestBody) {
+ $names = (ConvertFrom-Json -InputObject $Payload.RequestBody)
foreach ($x in $names) {
Automation webhooks can also be created using [Azure Resource Manager](../azure-
## Next steps
-* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
+* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
To quickly create a Kubernetes cluster, use Azure Kubernetes Services (AKS).
1. Under **Basics**, 1. Specify your **Subscription**. 1. Create a resource group, or specify an existing resource group.
- 1. Specify a cluster name
- 1. Specify a region
- 1. Under **Availability zones**, remove all selected zones. You should not specify any zones.
- 1. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md).
- 1. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md).
- 1. For **Scale method**, select **Manual**.
+ 2. For **Cluster preset configuration**, review the available options and select for your workload. For a development/test proof of concept, use **Dev/Test**. Select a configuration with at least 4 vCPUs.
+ 3. Specify a cluster name.
+ 4. Specify a region.
+ 5. Under **Availability zones**, remove all selected zones. You should not specify any zones.
+ 6. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md).
+ 7. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md).
+ 8. For **Scale method**, select **Manual**.
1. Click **Review + create**. 1. Click **Create**.
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
This document walks you through the steps for installing Azure SQL Managed Insta
- Enter and confirm a password for the SQL Server instance - Select the storage class as appropriate for data - Select the storage class as appropriate for logs
+ - Select the storage class as appropriate for backups
+
+ > [!NOTE]
+ > Starting with the February release, a ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
+ > If no storage class is specified for backups, the default storage class in Kubernetes is used. If that class isn't RWX capable, the Arc SQL Managed Instance installation may not succeed.
- Click the **Deploy** button
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
az sql mi-arc create --help
To create a SQL Managed Instance, use `az sql mi-arc create`. See the following examples for different connectivity modes:
+> [!NOTE]
+> Starting with the February release, a ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
+> If no storage class is specified for backups, the default storage class in Kubernetes is used. If that class isn't RWX capable, the Arc SQL Managed Instance installation may not succeed.
+++ ### [Indirectly connected mode](#tab/indirectly) ```azurecli
-az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s
+az sql mi-arc create -n <instanceName> --storage-class-backups <RWX capable storageclass> --k8s-namespace <namespace> --use-k8s
``` Example: ```azurecli
-az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s
+az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s
``` ### [Directly connected mode](#tab/directly) ```azurecli
-az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location>
+az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass>
``` Example: ```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location
+az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --storage-class-backups mybackups
```
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI
- Ensure you have met all the common prerequisites for cluster extensions listed [here](extensions.md#prerequisites). - Use az k8s-extension CLI version >= v0.4.0
-## Basic Installation of Azure Arc-enabled OSM on an Azure Arc-enabled Kubernetes Cluster
+## Basic installation of Azure Arc-enabled OSM
The following steps assume that you already have a cluster with a supported Kubernetes distribution connected to Azure Arc. Ensure that your KUBECONFIG environment variable points to the kubeconfig of the Arc-enabled Kubernetes cluster.
You should see output similar to the output shown below. It may take 3-5 minutes
} ```
-## Custom Installations of Azure Arc-enabled OSM
+## Custom installations of Azure Arc-enabled OSM
The following sections describe certain custom installations of Azure Arc-enabled OSM. Custom installations require setting OSM values in a JSON file and passing them into the `k8s-extension create` CLI command as described below.
It may take 3-5 minutes for the actual OSM helm chart to get deployed to the clu
To ensure that the privileged init container setting is not reverted to the default, pass in the "osm.osm.enablePrivilegedInitContainer" : "true" configuration setting to all subsequent az k8s-extension create commands.
+### Enable High Availability features on installation
+OSM's control plane components are built with High Availability and Fault Tolerance in mind. This section describes how to
+enable Horizontal Pod Autoscaling (HPA) and Pod Disruption Budget (PDB) during installation. Read more on the design
+considerations of High Availability on OSM [here](https://openservicemesh.io/docs/guides/ha_scale/high_availability/).
+
+#### Horizontal Pod Autoscaling (HPA)
+HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target
+memory utilization (%) defined by the user. To enable HPA and set applicable values on OSM control plane pods during installation, create or
+append to your existing JSON settings file as below, repeating the key/value pairs for each control plane pod
+(`osmController`, `injector`) that you want to enable HPA on.
+
+```json
+{
+ "osm.osm.<control_plane_pod>.autoScale.enable" : "true",
+ "osm.osm.<control_plane_pod>.autoScale.minReplicas" : "<allowed values: 1-10>",
+ "osm.osm.<control_plane_pod>.autoScale.maxReplicas" : "<allowed values: 1-10>",
+ "osm.osm.<control_plane_pod>.autoScale.cpu.targetAverageUtilization" : "<allowed values 0-100>",
+ "osm.osm.<control_plane_pod>.autoScale.memory.targetAverageUtilization" : "<allowed values 0-100>"
+}
+```
+
+Now, [install OSM with custom values](#setting-values-during-osm-installation).
+
+#### Pod Disruption Budget (PDB)
+In order to prevent disruptions during planned outages, control plane pods `osm-controller` and `osm-injector` have a PDB
+that ensures there is always at least 1 pod corresponding to each control plane application.
+
+To enable PDB, create or append to your existing JSON settings file as follows for each desired control plane pod
+(`osmController`, `injector`):
+```json
+{
+ "osm.osm.<control_plane_pod>.enablePodDisruptionBudget" : "true"
+}
+```
+
+Now, [install OSM with custom values](#setting-values-during-osm-installation).
+ ### Install OSM with cert-manager for Certificate Management [cert-manager](https://cert-manager.io/) is a provider that can be used for issuing signed certificates to OSM without the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://release-v0-11.docs.openservicemesh.io/docs/guides/certificates/)
and [demo](https://docs.openservicemesh.io/docs/demos/cert-manager_integration/)
> [!NOTE] > Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system`.
-To install OSM with cert-manager as the certificate provider, create a JSON file with the `certificateProvider.kind` value set to
-cert-manager as shown below. If you would like to change from default cert-manager values specified in OSM documentation,
+To install OSM with cert-manager as the certificate provider, create or append to your existing JSON settings file the `certificateProvider.kind`
+value set to cert-manager as shown below. If you would like to change from default cert-manager values specified in OSM documentation,
also include and update the subsequent `certmanager.issuer` lines. ```json
and [demo](https://docs.openservicemesh.io/docs/demos/ingress_contour/) to learn
> [!NOTE] > Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system`.
-To set required values for configuring Contour during OSM installation, create the following JSON file:
+To set required values for configuring Contour during OSM installation, append the following to your JSON settings file:
```json { "osm.osm.osmNamespace" : "arc-osm-system",
To set required values for configuring Contour during OSM installation, create t
Now, [install OSM with custom values](#setting-values-during-osm-installation). ### Setting values during OSM installation
-Values that need to be set during OSM installation need to be saved to a JSON file and passed in through the Azure CLI
+Any values that need to be set during OSM installation need to be saved to a single JSON file and passed in through the Azure CLI
install command. Once you have created a JSON file with applicable values as described in above custom installation sections, set the
Run the `az k8s-extension create` command to create the OSM extension, passing i
## Install Azure Arc-enabled OSM using ARM template
-After connecting your cluster to Azure Arc, create a json file with the following format, making sure to update the \<cluster-name\> and \<osm-arc-version\> values:
+After connecting your cluster to Azure Arc, create a JSON file with the following format, making sure to update the \<cluster-name\> and \<osm-arc-version\> values:
```json {
azure-fluid-relay Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/overview/overview.md
Last updated 08/19/2021-+
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
A function app on Azure manages the execution of your functions in your hosting
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0 ```
- In the [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az_functionapp_config_container_show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az_functionapp_config_container_set) command to deploy from a different image.
+ In the [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az_functionapp_config_container_show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az_functionapp_config_container_set) command to deploy from a different image. If you're using a custom container registry, the *deployment-container-image-name* parameter refers to the registry URL.
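+
+ As a sketch only, the `az functionapp create` command shown earlier might then look like the following, where `myregistry.azurecr.io` is a placeholder for your registry:
+
+ ```azurecli
+ az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name myregistry.azurecr.io/azurefunctionsimage:v1.0.0
+ ```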
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
A function app on Azure manages the execution of your functions in your hosting
```
- In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also replace `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your DockerHub ID.
+ In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also replace `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your DockerHub ID. When deploying from a custom container registry, use the `deployment-container-image-name` parameter to indicate the URL of the registry.
> [!TIP] > You can use the [`DisableColor` setting](functions-host-json.md#console) in the host.json file to prevent ANSI control characters from being written to the container logs.
az group delete --name AzureFunctionsContainer-rg
+ [Scale and hosting options](functions-scale.md) + [Kubernetes-based serverless hosting](functions-kubernetes-keda.md)
-[authorization keys]: functions-bindings-http-webhook-trigger.md#authorization-keys
+[authorization keys]: functions-bindings-http-webhook-trigger.md#authorization-keys
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
description: Understand how to use C# to develop and publish code as class libra
ms.devlang: csharp Previously updated : 07/24/2021 Last updated : 02/08/2022 # Develop C# class library functions using Azure Functions
namespace ServiceBusCancellationToken
} ```
-As in the previous example, you commonly iterate through an array using a `foreach` loop. Within this loop and before processing the message, you should check the value of `cancellationToken.IsCancellationRequested` to see if cancellation is pending. In the case where `IsCancellationRequested` is `true`, you might need to take some actions to prepare for a graceful shutdown. For example, you might want to log the status of your code before the shutdown, or perhaps write to a persisted store the portion of the message batch which hasn't yet been processed. If you write this kind of information to a persisted store, your startup code needs to check the store for any unprocessed message batches that were written during shutdown. What your code needs to do during graceful shutdown depends on your specific scenario.
-
-Azure Event Hubs is an other trigger that supports batch processing messages. The following example is a function method definition for an Event Hubs trigger with a cancellation token that accepts an incoming batch as an array of [EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata) objects:
-
-```csharp
-public async Task Run([EventHubTrigger("csharpguitar", Connection = "EH_CONN")]
- EventData[] events, CancellationToken cancellationToken, ILogger log)
-```
-
-The pattern to process a batch of Event Hubs events is similar to the previous example of processing a batch of Service Bus messages. In each case, you should check the cancellation token for a cancellation state before processing each item in the array. When a pending shutdown is detected in the middle of the batch, handle it gracefully based on your business requirements.
- ## Logging In your function code, you can write output to logs that appear as traces in Application Insights. The recommended way to write to the logs is to include a parameter of type [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger), which is typically named `log`. Version 1.x of the Functions runtime used `TraceWriter`, which also writes to Application Insights, but doesn't support structured logging. Don't use `Console.Write` to write your logs, since this data isn't captured by Application Insights.
Don't call `TrackRequest` or `StartOperation<RequestTelemetry>` because you'll s
Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`, `TrackMetric()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.
+## Testing functions in C# in Visual Studio
+
+The following example describes how to create a C# Function app in Visual Studio and run tests with [xUnit](https://github.com/xunit/xunit).
+
+![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
+
+### Setup
+
+To set up your environment, create a Function and test app. The following steps help you create the apps and functions required to support the tests:
+
+1. [Create a new Functions app](functions-get-started.md) and name it **Functions**
+2. [Create an HTTP function from the template](functions-get-started.md) and name it **MyHttpTrigger**.
+3. [Create a timer function from the template](functions-create-scheduled-function.md) and name it **MyTimerTrigger**.
+4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**.
+5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/)
+6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
+
+### Create test classes
+
+Now that the projects are created, you can create the classes used to run the automated tests.
+
+Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
+
+You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
+
+Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
+
+```csharp
+using System;
+
+namespace Functions.Tests
+{
+ public class NullScope : IDisposable
+ {
+ public static NullScope Instance { get; } = new NullScope();
+
+ private NullScope() { }
+
+ public void Dispose() { }
+ }
+}
+```
+
+Next, create a new class in *Functions.Tests* project named **ListLogger.cs** and enter the following code:
+
+```csharp
+using Microsoft.Extensions.Logging;
+using System;
+using System.Collections.Generic;
+using System.Text;
+
+namespace Functions.Tests
+{
+ public class ListLogger : ILogger
+ {
+ public IList<string> Logs;
+
+ public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;
+
+ public bool IsEnabled(LogLevel logLevel) => false;
+
+ public ListLogger()
+ {
+ this.Logs = new List<string>();
+ }
+
+ public void Log<TState>(LogLevel logLevel,
+ EventId eventId,
+ TState state,
+ Exception exception,
+ Func<TState, Exception, string> formatter)
+ {
+ string message = formatter(state, exception);
+ this.Logs.Add(message);
+ }
+ }
+}
+```
+
+The `ListLogger` class implements the following members as contracted by the `ILogger` interface:
+
+- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the `NullScope` class to allow the test to function.
+
+- **IsEnabled**: A default value of `false` is provided.
+
+- **Log**: This method uses the provided `formatter` function to format the message and then adds the resulting text to the `Logs` collection.
+
+The `Logs` collection is an instance of `List<string>` and is initialized in the constructor.
+
+Next, create a new file in *Functions.Tests* project named **LoggerTypes.cs** and enter the following code:
+
+```csharp
+namespace Functions.Tests
+{
+ public enum LoggerTypes
+ {
+ Null,
+ List
+ }
+}
+```
+
+This enumeration specifies the type of logger used by the tests.
+
+Now create a new class in *Functions.Tests* project named **TestFactory.cs** and enter the following code:
+
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Http.Internal;
+using Microsoft.Extensions.Logging;
+using Microsoft.Extensions.Logging.Abstractions;
+using Microsoft.Extensions.Primitives;
+using System.Collections.Generic;
+
+namespace Functions.Tests
+{
+ public class TestFactory
+ {
+ public static IEnumerable<object[]> Data()
+ {
+ return new List<object[]>
+ {
+ new object[] { "name", "Bill" },
+ new object[] { "name", "Paul" },
+ new object[] { "name", "Steve" }
+
+ };
+ }
+
+ private static Dictionary<string, StringValues> CreateDictionary(string key, string value)
+ {
+ var qs = new Dictionary<string, StringValues>
+ {
+ { key, value }
+ };
+ return qs;
+ }
+
+ public static HttpRequest CreateHttpRequest(string queryStringKey, string queryStringValue)
+ {
+ var context = new DefaultHttpContext();
+ var request = context.Request;
+ request.Query = new QueryCollection(CreateDictionary(queryStringKey, queryStringValue));
+ return request;
+ }
+
+ public static ILogger CreateLogger(LoggerTypes type = LoggerTypes.Null)
+ {
+ ILogger logger;
+
+ if (type == LoggerTypes.List)
+ {
+ logger = new ListLogger();
+ }
+ else
+ {
+ logger = NullLoggerFactory.Instance.CreateLogger("Null Logger");
+ }
+
+ return logger;
+ }
+ }
+}
+```
+
+The `TestFactory` class implements the following members:
+
+- **Data**: This property returns an [IEnumerable](/dotnet/api/system.collections.ienumerable) collection of sample data. The key value pairs represent values that are passed into a query string.
+
+- **CreateDictionary**: This method accepts a key/value pair as arguments and returns a new `Dictionary` used to create `QueryCollection` to represent query string values.
+
+- **CreateHttpRequest**: This method creates an HTTP request initialized with the given query string parameters.
+
+- **CreateLogger**: Based on the logger type, this method returns a logger class used for testing. The `ListLogger` keeps track of logged messages available for evaluation in tests.
+
+Finally, create a new class in *Functions.Tests* project named **FunctionsTests.cs** and enter the following code:
+
+```csharp
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Xunit;
+
+namespace Functions.Tests
+{
+ public class FunctionsTests
+ {
+ private readonly ILogger logger = TestFactory.CreateLogger();
+
+ [Fact]
+ public async void Http_trigger_should_return_known_string()
+ {
+ var request = TestFactory.CreateHttpRequest("name", "Bill");
+ var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
+ Assert.Equal("Hello, Bill. This HTTP triggered function executed successfully.", response.Value);
+ }
+
+ [Theory]
+ [MemberData(nameof(TestFactory.Data), MemberType = typeof(TestFactory))]
+ public async void Http_trigger_should_return_known_string_from_member_data(string queryStringKey, string queryStringValue)
+ {
+ var request = TestFactory.CreateHttpRequest(queryStringKey, queryStringValue);
+ var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
+ Assert.Equal($"Hello, {queryStringValue}. This HTTP triggered function executed successfully.", response.Value);
+ }
+
+ [Fact]
+ public void Timer_should_log_message()
+ {
+ var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
+ MyTimerTrigger.Run(null, logger);
+ var msg = logger.Logs[0];
+ Assert.Contains("C# Timer trigger function executed at", msg);
+ }
+ }
+}
+```
+
+The members implemented in this class are:
+
+- **Http_trigger_should_return_known_string**: This test creates a request with the query string values of `name=Bill` to an HTTP function and checks that the expected response is returned.
+
+- **Http_trigger_should_return_known_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function.
+
+- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function has run, the log is checked to ensure the expected message is present.
+
+If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function.
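+
+As a minimal sketch (assuming the test project references the Microsoft.Extensions.Configuration package; `SettingsTests` and `MySetting` are hypothetical names), you can build an `IConfiguration` from in-memory values and hand it to the code under test:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Extensions.Configuration;
+using Xunit;
+
+namespace Functions.Tests
+{
+    public class SettingsTests
+    {
+        [Fact]
+        public void Configuration_should_supply_mocked_setting()
+        {
+            // Build an IConfiguration backed by in-memory values instead of real app settings.
+            IConfiguration config = new ConfigurationBuilder()
+                .AddInMemoryCollection(new Dictionary<string, string>
+                {
+                    { "MySetting", "test-value" }
+                })
+                .Build();
+
+            // Pass this configuration to the function or class under test that accepts IConfiguration.
+            Assert.Equal("test-value", config["MySetting"]);
+        }
+    }
+}
+```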
+
+### Run tests
+
+To run the tests, navigate to the **Test Explorer** and click **Run all**.
+
+![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
+
+### Debug tests
+
+To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
+ ## Environment variables To get an environment variable or an app setting value, use `System.Environment.GetEnvironmentVariable`, as shown in the following code example:
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
Title: Continuously update function app code using Azure Pipelines
description: Learn how to set up an Azure DevOps pipeline that targets Azure Functions. Previously updated : 12/08/2021 Last updated : 02/25/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Continuous delivery with Azure Pipelines
-Automatically deploy to Azure Functions with [Azure Pipelines](/azure/devops/pipelines/). Azure Pipelines lets you automate your software development and continuously test, build, and deploy your code.
+Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy to Azure Functions. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/).
-YAML pipelines aren't available for Azure DevOps 2019 and earlier.
+YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (pre-packaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts).
+YAML pipelines aren't available for Azure DevOps 2019 and earlier.
## Prerequisites * A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com).
-* An Azure DevOps organization. If you don't have one, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up). (An Azure DevOps organization is different from your GitHub organization. You can give your DevOps organization and your GitHub organization the same name if you want alignment between them.)
-
- If your team already has one, then make sure you're an administrator of the Azure DevOps project that you want to use.
+* An Azure DevOps organization. If you don't have one, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up). If your team already has one, then make sure you're an administrator of the Azure DevOps project that you want to use.
-* An ability to run pipelines on Microsoft-hosted agents. You can either purchase a [parallel job](/azure/devops/pipelines/licensing/concurrent-jobs) or you can request a free tier. To request a free tier, follow the instructions in [this article](/azure/devops/pipelines/licensing/concurrent-jobs). Note that it may take us 2-3 business days to grant access to the free tier.
+* An ability to run pipelines on Microsoft-hosted agents. You can either purchase a [parallel job](/azure/devops/pipelines/licensing/concurrent-jobs) or you can request a free tier.
## Create your function app
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
description: Understand how to develop functions by using JavaScript.
ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Previously updated : 11/18/2021 Last updated : 02/24/2022 ms.devlang: javascript
This guide contains detailed information to help you succeed developing Azure Functions using JavaScript.
-As an Express.js, Node.js, or JavaScript developer, if you are new to Azure Functions, please consider first reading one of the following articles:
+As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Functions, please consider first reading one of the following articles:
| Getting started | Concepts| Guided learning | | -- | -- | -- |
Your exported function is passed a number of arguments on execution. The first a
# [2.x+](#tab/v2-v3-v4-export)
-When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise) in version 2.x, 3.x, or 4.x of the Functions runtime, you do not need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes.
+When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise) in version 2.x, 3.x, or 4.x of the Functions runtime, you don't need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes.
The following example is a simple function that logs that it was triggered and immediately completes execution.
In JavaScript, [bindings](functions-triggers-bindings.md) are configured and def
### Inputs Input are divided into two categories in Azure Functions: one is the trigger input and the other is the additional input. Trigger and other input bindings (bindings of `direction === "in"`) can be read by a function in three ways:
+ - **_[Recommended]_ As parameters passed to your function.** They're passed to the function in the same order that they're defined in *function.json*. The `name` property defined in *function.json* doesn't need to match the name of your parameter, although it should.
```javascript module.exports = async function(context, myTrigger, myInput, myOtherInput) { ... };
Returns a named object that contains trigger metadata and function invocation da
# [2.x](#tab/v2-v3-v4-done)
-In 2.x, 3.x, and 4.x, the function should be marked as async even if there is no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function.
+In 2.x, 3.x, and 4.x, the function should be marked as async even if there's no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function.
```javascript //you don't need an awaited function call inside to use async
module.exports = async function (context, req) {
``` # [1.x](#tab/v1-done)
-The **context.done** method is used by 1.x synchronous functions. In 2.x, 3.x, and 4.x, the function should be marked as async even if there is no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function.
+The **context.done** method is used by 1.x synchronous functions. In 2.x, 3.x, and 4.x, the function should be marked as async even if there's no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function.
```javascript module.exports = function (context, req) {
When you work with HTTP triggers, you can access the HTTP request and response o
```
-Note that request and response keys are in lowercase.
+Request and response keys are in lowercase.
## Scaling and concurrency
module.exports = async function(context) {
> [!NOTE] > You should define a `package.json` file at the root of your Function App. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a `package.json` file in the folder of a specific function.
-When deploying Function Apps from source control, any `package.json` file present in your repo, will trigger an `npm install` in its folder during deployment. But when deploying via the Portal or CLI, you will have to manually install the packages.
+When deploying Function Apps from source control, any `package.json` file present in your repo will trigger an `npm install` in its folder during deployment. But when deploying via the Portal or CLI, you'll have to manually install the packages.
There are two ways to install packages on your Function App:
There are two ways to install packages on your Function App:
### <a name="using-kudu"></a>Using Kudu (Windows only) 1. Go to `https://<function_app_name>.scm.azurewebsites.net`.
-2. Click **Debug Console** > **CMD**.
+2. Select **Debug Console** > **CMD**.
3. Go to `D:\home\site\wwwroot`, and then drag your package.json file to the **wwwroot** folder at the top half of the page. You can upload files to your function app in other ways also. For more information, see [How to update function app files](functions-reference.md#fileupdate).
const myObj = new MyObj();
module.exports = myObj; ```
-In this example, it is important to note that although an object is being exported, there are no guarantees for preserving state between executions.
+In this example, it's important to note that although an object is being exported, there are no guarantees for preserving state between executions.
-## Local Debugging
+## Local debugging
When started with the `--inspect` parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions 2.x or higher, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting `languageWorkers:node:arguments = <args>`.
To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under
When debugging using VS Code, the `--inspect` parameter is automatically added using the `port` value in the project's launch.json file.
-In version 1.x, setting `languageWorkers:node:arguments` will not work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools.
+In version 1.x, setting `languageWorkers:node:arguments` won't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools.
> [!NOTE] > You can only configure `languageWorkers:node:arguments` when running the function app locally.
+## Testing
+
+Testing your functions includes:
+
+* **HTTP end-to-end**: To test a function from its HTTP endpoint, you can use any tool that can make an HTTP request such as cURL, Postman, or JavaScript's fetch method.
+* **Integration testing**: Integration testing includes the function app layer. This means you need to control the parameters passed into the function, including the request and the context. The context is unique to each kind of trigger, which means you need to know the incoming and outgoing bindings for that [trigger type](functions-triggers-bindings.md?tabs=javascript#supported-bindings).
+
+ Learn more about integration testing and mocking the context layer with an experimental GitHub repo, [https://github.com/anthonychu/azure-functions-test-utils](https://github.com/anthonychu/azure-functions-test-utils).
+
+* **Unit testing**: Unit testing is performed within the function app. You can use any tool that can test JavaScript, such as Jest or Mocha.
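+
+As a sketch of that last approach (Jest shown here; the require path and response shape are assumptions based on the default HTTP template, so adjust them to your project), a unit test can pass a fake `context` and `request` straight to the exported function:
+
+```javascript
+// Hypothetical path to an HTTP-triggered function's entry point.
+const httpFunction = require('../HttpTrigger/index');
+
+test('http trigger includes the name in the response body', async () => {
+  const context = { log: jest.fn(), res: {} };
+  const request = { query: { name: 'Bill' } };
+
+  await httpFunction(context, request);
+
+  expect(context.res.body).toContain('Bill');
+});
+```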
+ ## TypeScript When you target version 2.x or higher of the Functions runtime, both [Azure Functions for Visual Studio Code](./create-first-function-cli-typescript.md) and the [Azure Functions Core Tools](functions-run-local.md) let you create function apps using a template that supports TypeScript function app projects. The template generates `package.json` and `tsconfig.json` project files that make it easier to transpile, run, and publish JavaScript functions from TypeScript code with these tools. A generated `.funcignore` file is used to indicate which files are excluded when a project is published to Azure.
-TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The output location is set by the template by using `outDir` parameter in the `tsconfig.json` file. If you change this setting or the name of the folder, the runtime is not able to find the code to run.
+TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The output location is set by the template by using the `outDir` parameter in the `tsconfig.json` file. If you change this setting or the name of the folder, the runtime isn't able to find the code to run.
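+
+For example, a function named `HttpTrigger` (a name used here only for illustration) would typically have a `function.json` that points at the transpiled output, as in this sketch:
+
+```json
+{
+  "scriptFile": "../dist/HttpTrigger/index.js",
+  "bindings": [
+    {
+      "authLevel": "function",
+      "type": "httpTrigger",
+      "direction": "in",
+      "name": "req",
+      "methods": [ "get", "post" ]
+    },
+    {
+      "type": "http",
+      "direction": "out",
+      "name": "res"
+    }
+  ]
+}
+```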
The way that you locally develop and deploy from a TypeScript project depends on your development tool.
When you work with JavaScript functions, be aware of the considerations in the f
### Choose single-vCPU App Service plans
-When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs JavaScript functions more efficiently on single-vCPU VMs, and using larger VMs does not produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json).
+When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs JavaScript functions more efficiently on single-vCPU VMs, and using larger VMs doesn't produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json).
### Cold Start
-When developing Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the fact that when your function app starts for the first time after a period of inactivity, it takes longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use the run from package model by default, but if you're experiencing large cold starts and are not running this way, this change can offer a significant improvement.
+When developing Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the fact that when your function app starts for the first time after a period of inactivity, it takes longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use the run from package model by default, but if you're experiencing large cold starts and aren't running this way, this change can offer a significant improvement.
### Connection Limits
When you use a service-specific client in an Azure Functions application, don't
When writing Azure Functions in JavaScript, you should write code using the `async` and `await` keywords. Writing code using `async` and `await` instead of callbacks or `.then` and `.catch` with Promises helps avoid two common problems: - Throwing uncaught exceptions that [crash the Node.js process](https://nodejs.org/api/process.html#process_warning_using_uncaughtexception_correctly), potentially affecting the execution of other functions.
+ - Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that aren't properly awaited.
-In the example below, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues mentioned above. An exception that is not explicitly caught in the correct scope crashed the entire process (issue #1). Calling the 1.x `context.done()` outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x `context.done()` too early results in missing log entries starting with `Data from file:`.
+In the example below, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues mentioned above. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Calling the 1.x `context.done()` outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x `context.done()` too early results in missing log entries starting with `Data from file:`.
```javascript // NOT RECOMMENDED PATTERN
azure-functions Functions Test A Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-test-a-function.md
- Title: Testing Azure Functions
-description: Create automated tests for a C# Function in Visual Studio and JavaScript Function in VS Code
---- Previously updated : 03/25/2019---
-# Strategies for testing your code in Azure Functions
-
-This article demonstrates how to create automated tests for Azure Functions.
-
-Testing all code is recommended; however, you may get the best results by wrapping up a Function's logic and creating tests outside the Function. Abstracting logic away limits a Function's lines of code and allows the Function to be solely responsible for calling other classes or modules. This article, however, demonstrates how to create automated tests against HTTP-triggered and timer-triggered functions.
-
-The content that follows is split into two different sections meant to target different languages and environments. You can learn to build tests in:
--- [C# in Visual Studio with xUnit](#c-in-visual-studio)-- [JavaScript in VS Code with Jest](#javascript-in-vs-code)-- [Python using pytest](./functions-reference-python.md?tabs=application-level#unit-testing)-
-The sample repository is available on [GitHub](https://github.com/Azure-Samples/azure-functions-tests).
-
-## C# in Visual Studio
-
-The following example describes how to create a C# Function app in Visual Studio and run tests with [xUnit](https://github.com/xunit/xunit).
-
-![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
-
-### Setup
-
-To set up your environment, create a Function and test app. The following steps help you create the apps and functions required to support the tests:
-
-1. [Create a new Functions app](./functions-get-started.md) and name it **Functions**
-2. [Create an HTTP function from the template](./functions-get-started.md) and name it **MyHttpTrigger**.
-3. [Create a timer function from the template](./functions-create-scheduled-function.md) and name it **MyTimerTrigger**.
-4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**.
-5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/)
-6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
-
-### Create test classes
-
-Now that the projects are created, you can create the classes used to run the automated tests.
-
-Each function takes an instance of [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) to handle message logging. Some tests either don't log messages or have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine whether a test is passing.
-
-You'll create a new class named `ListLogger`, which holds an internal list of messages to evaluate during testing. To implement the required `ILogger` interface, the class needs a scope. The following class mocks a scope for the test cases to pass to the `ListLogger` class.
-
-Create a new class in *Functions.Tests* project named **NullScope.cs** and enter the following code:
-
-```csharp
-using System;
-
-namespace Functions.Tests
-{
- public class NullScope : IDisposable
- {
- public static NullScope Instance { get; } = new NullScope();
-
- private NullScope() { }
-
- public void Dispose() { }
- }
-}
-```
-
-Next, create a new class in *Functions.Tests* project named **ListLogger.cs** and enter the following code:
-
-```csharp
-using Microsoft.Extensions.Logging;
-using System;
-using System.Collections.Generic;
-using System.Text;
-
-namespace Functions.Tests
-{
- public class ListLogger : ILogger
- {
- public IList<string> Logs;
-
- public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;
-
- public bool IsEnabled(LogLevel logLevel) => false;
-
- public ListLogger()
- {
- this.Logs = new List<string>();
- }
-
- public void Log<TState>(LogLevel logLevel,
- EventId eventId,
- TState state,
- Exception exception,
- Func<TState, Exception, string> formatter)
- {
- string message = formatter(state, exception);
- this.Logs.Add(message);
- }
- }
-}
-```
-
-The `ListLogger` class implements the following members as contracted by the `ILogger` interface:
--- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the `NullScope` class to allow the test to function.--- **IsEnabled**: A default value of `false` is provided.--- **Log**: This method uses the provided `formatter` function to format the message and then adds the resulting text to the `Logs` collection.-
-The `Logs` collection is an instance of `List<string>` and is initialized in the constructor.
-
-Next, create a new file in *Functions.Tests* project named **LoggerTypes.cs** and enter the following code:
-
-```csharp
-namespace Functions.Tests
-{
- public enum LoggerTypes
- {
- Null,
- List
- }
-}
-```
-
-This enumeration specifies the type of logger used by the tests.
-
-Now create a new class in *Functions.Tests* project named **TestFactory.cs** and enter the following code:
-
-```csharp
-using Microsoft.AspNetCore.Http;
-using Microsoft.AspNetCore.Http.Internal;
-using Microsoft.Extensions.Logging;
-using Microsoft.Extensions.Logging.Abstractions;
-using Microsoft.Extensions.Primitives;
-using System.Collections.Generic;
-
-namespace Functions.Tests
-{
- public class TestFactory
- {
- public static IEnumerable<object[]> Data()
- {
- return new List<object[]>
- {
- new object[] { "name", "Bill" },
- new object[] { "name", "Paul" },
- new object[] { "name", "Steve" }
-
- };
- }
-
- private static Dictionary<string, StringValues> CreateDictionary(string key, string value)
- {
- var qs = new Dictionary<string, StringValues>
- {
- { key, value }
- };
- return qs;
- }
-
- public static HttpRequest CreateHttpRequest(string queryStringKey, string queryStringValue)
- {
- var context = new DefaultHttpContext();
- var request = context.Request;
- request.Query = new QueryCollection(CreateDictionary(queryStringKey, queryStringValue));
- return request;
- }
-
- public static ILogger CreateLogger(LoggerTypes type = LoggerTypes.Null)
- {
- ILogger logger;
-
- if (type == LoggerTypes.List)
- {
- logger = new ListLogger();
- }
- else
- {
- logger = NullLoggerFactory.Instance.CreateLogger("Null Logger");
- }
-
- return logger;
- }
- }
-}
-```
-
-The `TestFactory` class implements the following members:
--- **Data**: This property returns an [IEnumerable](/dotnet/api/system.collections.ienumerable) collection of sample data. The key value pairs represent values that are passed into a query string.--- **CreateDictionary**: This method accepts a key/value pair as arguments and returns a new `Dictionary` used to create `QueryCollection` to represent query string values.--- **CreateHttpRequest**: This method creates an HTTP request initialized with the given query string parameters.--- **CreateLogger**: Based on the logger type, this method returns a logger class used for testing. The `ListLogger` keeps track of logged messages available for evaluation in tests.-
-Finally, create a new class in *Functions.Tests* project named **FunctionsTests.cs** and enter the following code:
-
-```csharp
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Logging;
-using Xunit;
-
-namespace Functions.Tests
-{
- public class FunctionsTests
- {
- private readonly ILogger logger = TestFactory.CreateLogger();
-
- [Fact]
- public async void Http_trigger_should_return_known_string()
- {
- var request = TestFactory.CreateHttpRequest("name", "Bill");
- var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
- Assert.Equal("Hello, Bill. This HTTP triggered function executed successfully.", response.Value);
- }
-
- [Theory]
- [MemberData(nameof(TestFactory.Data), MemberType = typeof(TestFactory))]
- public async void Http_trigger_should_return_known_string_from_member_data(string queryStringKey, string queryStringValue)
- {
- var request = TestFactory.CreateHttpRequest(queryStringKey, queryStringValue);
- var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
- Assert.Equal($"Hello, {queryStringValue}. This HTTP triggered function executed successfully.", response.Value);
- }
-
- [Fact]
- public void Timer_should_log_message()
- {
- var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
- MyTimerTrigger.Run(null, logger);
- var msg = logger.Logs[0];
- Assert.Contains("C# Timer trigger function executed at", msg);
- }
- }
-}
-```
-
-The members implemented in this class are:
--- **Http_trigger_should_return_known_string**: This test creates a request with the query string values of `name=Bill` to an HTTP function and checks that the expected response is returned.--- **Http_trigger_should_return_known_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function.--- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function is run, the log is checked to ensure the expected message is present.-
-If you want to access application settings in your tests, you can [inject](./functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function.
-
-### Run tests
-
-To run the tests, navigate to the **Test Explorer** and click **Run all**.
-
-![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
-
-### Debug tests
-
-To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and click **Run > Debug Last Run**.
-
-## JavaScript in VS Code
-
-The following example describes how to create a JavaScript Function app in VS Code and run tests with [Jest](https://jestjs.io). This procedure uses the [VS Code Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to create Azure Functions.
-
-![Testing Azure Functions with JavaScript in VS Code](./media/functions-test-a-function/azure-functions-test-vs-code-jest.png)
-
-### Setup
-
-To set up your environment, initialize a new Node.js app in an empty folder by running `npm init`.
-
-```bash
-npm init -y
-```
-
-Next, install Jest by running the following command:
-
-```bash
-npm i jest
-```
-
-Now update _package.json_ to replace the existing test command with the following command:
-
-```json
-"scripts": {
- "test": "jest"
-}
-```
-
-### Create test modules
-
-With the project initialized, you can create the modules used to run the automated tests. Begin by creating a new folder named *testing* to hold the support modules.
-
-In the *testing* folder add a new file, name it **defaultContext.js**, and add the following code:
-
-```javascript
-module.exports = {
- log: jest.fn()
-};
-```
-
-This module mocks the *log* function to represent the default execution context.
-
-Next, add a new file, name it **defaultTimer.js**, and add the following code:
-
-```javascript
-module.exports = {
- IsPastDue: false
-};
-```
-
-This module implements the `IsPastDue` property to stand in as a fake timer instance. Timer configurations like NCRONTAB expressions are not required here, as the test harness simply calls the function directly to test the outcome.
-
-Next, use the VS Code Functions extension to [create a new JavaScript HTTP Function](/azure/developer/javascript/tutorial-vscode-serverless-node-01) and name it *HttpTrigger*. Once the function is created, add a new file in the same folder named **index.test.js**, and add the following code:
-
-```javascript
-const httpFunction = require('./index');
-const context = require('../testing/defaultContext')
-
-test('Http trigger should return known text', async () => {
-
- const request = {
- query: { name: 'Bill' }
- };
-
- await httpFunction(context, request);
-
- expect(context.log.mock.calls.length).toBe(1);
- expect(context.res.body).toEqual('Hello Bill');
-});
-```
-
-The HTTP function from the template returns a string of "Hello" concatenated with the name provided in the query string. This test creates a fake instance of a request and passes it to the HTTP function. The test checks that the *log* method is called once and the returned text equals "Hello Bill".
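For reference, a function along the following lines would satisfy this test. This is a minimal sketch based on the behavior described above ("Hello" concatenated with the query-string name), not the exact template code:

```javascript
// HttpTrigger/index.js - minimal sketch compatible with the test above (illustrative).
module.exports = async function (context, req) {
    const name = (req.query && req.query.name) || (req.body && req.body.name);

    // The test asserts that context.log is called exactly once.
    context.log('JavaScript HTTP trigger function processed a request.');

    // The test asserts that the response body equals "Hello Bill".
    context.res = {
        body: 'Hello ' + name
    };
};
```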
-
-Next, use the VS Code Functions extension to create a new JavaScript Timer Function and name it *TimerTrigger*. Once the function is created, add a new file in the same folder named **index.test.js**, and add the following code:
-
-```javascript
-const timerFunction = require('./index');
-const context = require('../testing/defaultContext');
-const timer = require('../testing/defaultTimer');
-
-test('Timer trigger should log message', () => {
- timerFunction(context, timer);
- expect(context.log.mock.calls.length).toBe(1);
-});
-```
-
-The timer function from the template logs a message at the end of the body of the function. This test ensures the *log* function is called once.
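Again for reference, a timer function along these lines would pass the test. It's a minimal sketch that assumes the template's single log call at the end of the function body:

```javascript
// TimerTrigger/index.js - minimal sketch compatible with the test above (illustrative).
module.exports = async function (context, myTimer) {
    if (myTimer.IsPastDue) {
        // The fake timer in defaultTimer.js sets IsPastDue to false,
        // so no extra work happens here during the test.
    }

    // The single log call that the test checks for.
    context.log('JavaScript timer trigger function ran!');
};
```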
-
-### Run tests
-
-To run the tests, press **CTRL + ~** to open the command window, and run `npm test`:
-
-```bash
-npm test
-```
-
-![Testing Azure Functions with JavaScript in VS Code](./media/functions-test-a-function/azure-functions-test-vs-code-jest.png)
-
-### Debug tests
-
-To debug your tests, add the following configuration to your *launch.json* file:
-
-```json
-{
- "type": "node",
- "request": "launch",
- "name": "Jest Tests",
- "disableOptimisticBPs": true,
- "program": "${workspaceRoot}/node_modules/jest/bin/jest.js",
- "args": [
- "-i"
- ],
- "internalConsoleOptions": "openOnSessionStart"
-}
-```
-
-Next, set a breakpoint in your test and press **F5**.
-
-## Next steps
-
-Now that you've learned how to write automated tests for your functions, continue with these resources:
--- [Manually run a non HTTP-triggered function](./functions-manually-run-non-http.md)-- [Azure Functions error handling](./functions-bindings-error-pages.md)-- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
The following features have known limitations in Azure Government:
- Limitations with B2B Collaboration in supported Azure US Government tenants: - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md). - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- - Microsoft 365 Groups are not supported for B2B users and can't be enabled.
- Limitations with multifactor authentication: - Hardware OATH tokens are not available in Azure Government.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | | [Microsoft Graph](/graph/) | &#x2705; | &#x2705; | | [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; |
-| [Microsoft Sentinel](../../sentinel/index.yml) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; |
+| [Microsoft Sentinel](../../sentinel/index.yml) | &#x2705; | &#x2705; |
| [Microsoft Stream](/stream/) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-aspnetcore-linux.md
After you complete this walkthrough, your app can collect Profiler traces like t
## Prerequisites The following instructions apply to all Windows, Linux, and Mac development environments:
-* Install the [.NET Core SDK 2.1.2 or later](https://dotnet.microsoft.com/download/archives).
+* Install the [.NET Core SDK 3.1 or later](https://dotnet.microsoft.com/download/dotnet).
* Install Git by following the instructions at [Getting Started - Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). ## Set up the project locally
The following instructions apply to all Windows, Linux, and Mac development envi
{ services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights. services.AddServiceProfiler(); // Add this line of code to Enable Profiler
- services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
+ services.AddControllersWithViews();
} ```
The following instructions apply to all Windows, Linux, and Mac development envi
1. Create the web app environment by using App Service on Linux:
- ![Create the Linux web app](./media/profiler-aspnetcore-linux/create-linux-appservice.png)
+ :::image type="content" source="./media/profiler-aspnetcore-linux/create-linux-appservice.png" alt-text="Create the Linux web app":::
2. Create the deployment credentials:
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Once the data collection rule has been created, the application needs to be give
## Send sample data The following PowerShell code sends data to the endpoint using HTTP REST fundamentals.
+1. Run the following PowerShell command which adds a required assembly for the script.
+
+ ```powershell
+ Add-Type -AssemblyName System.Web
+ ```
+ 1. Replace the parameters in the *step 0* section with values from the resources that you just created. You may also want to replace the sample data in the *step 2* section with your own. ```powershell
The following PowerShell code sends data to the endpoint using HTTP REST fundame
$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
- ### If the above line throws an 'Unable to find type [System.Web.HttpUtility].' error, execute the line below separately from the rest of the code
- # Add-Type -AssemblyName System.Web
################## ### Step 2: Load up some sample data.
API limits have been exceeded. The limits are currently set to 500MB of data/min
### Script returns error code 503 Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
-### Script returns error `Unable to find type [System.Web.HttpUtility]`
-Run the last line in section 1 of the script for a fix and execute it directly. Executing it uncommented as part of the script will not resolve the issue. The command must be executed separately.
- ### You don't receive an error, but data doesn't appear in the workspace The data may take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
## Generate sample data The following PowerShell script both generates sample data to configure the custom table and sends sample data to the custom logs API to test the configuration.
-1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value** and then save with the file name *LogGenerator.ps1*.
+1. Run the following PowerShell command which adds a required assembly for the script.
+
+ ```powershell
+ Add-Type -AssemblyName System.Web
+ ```
+
+2. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value** and then save with the file name *LogGenerator.ps1*.
``` PowerShell param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table)
The following PowerShell script both generates sample data to configure the cust
$headers = @{"Content-Type" = "application/x-www-form-urlencoded" }; $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
- ## If the above line throws an 'Unable to find type [System.Web.HttpUtility].' error, execute the line below separately from the rest of the code
- # Add-Type -AssemblyName System.Web
## Generate and send some data foreach ($line in $file_data) {
The following PowerShell script both generates sample data to configure the cust
} ```
-2. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called *sample_access.log*. Run the script using the following command to read this data and create a JSON file called *data_sample.json* that you can send to the custom logs API.
+3. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called *sample_access.log*.
```PowerShell .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json" ```
-3. Run the script using the following command to read this data and create a JSON file called *data_sample.json* that you can send to the custom logs API.
+4. Run the script using the following command to read this data and create a JSON file called *data_sample.json* that you can send to the custom logs API.
- :::image type="content" source="media/tutorial-custom-logs/new-custom-log.png" lightbox="media/tutorial-custom-logs/new-custom-log.png" alt-text="Screenshot showing new DCR-based custom log.":::
- ## Add custom log table Before you can send data to the workspace, you need to create the custom table that the data will be sent to.
Allow at least 30 minutes for the configuration to take effect. You may also exp
1. Run the following command providing the values that you collected for your data collection rule and data collection endpoint. The script will start ingesting data by placing calls to the API at pace of approximately 1 record per second. ```PowerShell
-.\LogGenerator.ps1 -Log "sample_access.log" -Type "API" -Table "ApacheAccess_CL" -DcrImmutableId <immutable ID> -DceUrl <data collection endpoint URL>
+.\LogGenerator.ps1 -Log "sample_access.log" -Type "API" -Table "ApacheAccess_CL" -DcrImmutableId <immutable ID> -DceUri <data collection endpoint URL>
``` 2. From Log Analytics, query your newly created table to verify that data arrived and if it is transformed properly.
API limits have been exceeded. The limits are currently set to 500MB of data/min
### Script returns error code 503 Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
-### Script returns error `Unable to find type [System.Web.HttpUtility]`
-Run the last line in section 1 of the script for a fix and execute it directly. Executing it uncommented as part of the script will not resolve the issue. The command must be executed separately.
- ### You don't receive an error, but data doesn't appear in the workspace The data may take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 01/26/2022 Last updated : 02/28/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
Azure NetApp Files standard network features are supported for the following regions:
+* France Central
* North Central US * South Central US
-* West US 3
-* West Europe
+* West Europe
+* West US 3
## Considerations
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Title: Use Azure portal to deploy service catalog app
description: Shows consumers of Managed Applications how to deploy a service catalog app through the Azure portal. -+ Last updated 10/04/2018
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Publish service catalog managed app
description: Shows how to create an Azure managed application that is intended for members of your organization. -+ Last updated 08/16/2021
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 09/14/2021 Last updated : 02/28/2022
The resources providers that are marked with **- registered** are registered by
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | | Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | | Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
-| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
+| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | | Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) | | Microsoft.ObjectStore | Object Store |
azure-sql Accelerated Database Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/accelerated-database-recovery.md
The following types of workloads benefit most from ADR:
- Many DDLs are executed in one transaction. For example, in one transaction, rapidly creating and dropping temp tables.
- - A table has very large number of partitions/indexes that are modified. For example, a DROP TABLE operation on such table would require a large reservation of SLOG memory, which would delay truncation of the transaction log and delay undo/redo operations. The workaround can be drop the indexes individually and gradually, then drop the table. For more information on the SLOG, see [ADR recovery components](/sql/relational-databases/accelerated-database-recovery-conceptsadr-recovery-components).
+ - A table has a very large number of partitions/indexes that are modified. For example, a DROP TABLE operation on such a table would require a large reservation of SLOG memory, which would delay truncation of the transaction log and delay undo/redo operations. The workaround can be to drop the indexes individually and gradually, then drop the table. For more information on the SLOG, see [ADR recovery components](/sql/relational-databases/accelerated-database-recovery-concepts).
- Prevent or reduce unnecessary aborted situations. A high abort rate will put pressure on the PVS cleaner and lower ADR performance. The aborts may come from a high rate of deadlocks, duplicate keys, or other constraint violations.
The following types of workloads benefit most from ADR:
- To activate the PVS cleanup process manually between workloads or during maintenance windows, use `sys.sp_persistent_version_cleanup`. For more information, see [sys.sp_persistent_version_cleanup](/sql/relational-databases/system-stored-procedures/sys-sp-persistent-version-cleanup-transact-sql). -- If you observe issues either with storage usage, high abort transaction and other factors, see [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshooting).
+- If you observe issues with storage usage, a high transaction abort rate, or other factors, see [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshoot).
## Next steps - [Accelerated database recovery](/sql/relational-databases/accelerated-database-recovery-concepts)-- [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshooting).
+- [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshoot).
azure-sql Transparent Data Encryption Byok Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-key-rotation.md
Last updated 12/15/2021
This article describes key rotation for a [server](logical-servers.md) using a TDE protector from Azure Key Vault. Rotating the logical TDE Protector for a server means switching to a new asymmetric key that protects the databases on the server. Key rotation is an online operation and should only take a few seconds to complete, because this only decrypts and re-encrypts the database's data encryption key, not the entire database.
-This guide discusses two options to rotate the TDE protector on the server.
+## Important considerations when rotating the TDE Protector
+- When the TDE protector is changed/rotated, old backups of the database, including backed-up log files, are not updated to use the latest TDE protector. To restore a backup encrypted with a TDE protector from Key Vault, make sure that the key material is available to the target server. Therefore, we recommend that you keep all the old versions of the TDE protector in Azure Key Vault (AKV), so database backups can be restored.
+- Even when switching from customer managed key (CMK) to service-managed key, keep all previously used keys in AKV. This ensures database backups, including backed-up log files, can be restored with the TDE protectors stored in AKV.
+- Apart from old backups, transaction log files might also require access to the older TDE Protector. To determine if there are any remaining logs that still require the older key, after performing key rotation, use the [sys.dm_db_log_info](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-info-transact-sql) dynamic management view (DMV). This DMV returns information on the virtual log files (VLFs) of the transaction log, along with the encryption key thumbprint of each VLF.
+- Older keys need to be kept in AKV and available to the server based on the backup retention period configured as part of the backup retention policies on the database. This helps ensure any Long Term Retention (LTR) backups on the server can still be restored using the older keys.
+ > [!NOTE] > A paused dedicated SQL pool in Azure Synapse Analytics must be resumed before key rotations. > [!IMPORTANT]
-> Do not delete previous versions of the key after a rollover. When keys are rolled over, some data is still encrypted with the previous keys, such as older database backups.
+> Do not delete previous versions of the key after a rollover. When keys are rolled over, some data is still encrypted with the previous keys, such as older database backups, backed-up log files and transaction log files.
> [!NOTE] > This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
The following examples use [az sql server tde-key set](/powershell/module/az.sql
- In case of a security risk, learn how to remove a potentially compromised TDE protector: [Remove a potentially compromised key](transparent-data-encryption-byok-remove-tde-protector.md). -- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault using PowerShell](transparent-data-encryption-byok-configure.md).
+- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault using PowerShell](transparent-data-encryption-byok-configure.md).
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/get-started-detect-motion-emit-events.md
description: This quickstart walks you through the steps to get started with Azu
Last updated 11/04/2021-+ # Quickstart: Get started with Azure Video Analyzer
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
Use the following commands to create these items.
```bash az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> ```
+ > [!NOTE]
+ > If you're running a function version other than v3.0, please check the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime-version` parameter to a supported value.
# [C#](#tab/csharp) ```bash az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> ```
- > [!NOTE]
- > If you're running the function version other than v3.0, please check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime` parameter to supported value.
1. Deploy the function project to Azure:
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
Use the following commands to create these item.
```bash az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> ```
+ > [!NOTE]
+ > If you're running a function version other than v3.0, please check the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime-version` parameter to a supported value.
# [C#](#tab/csharp) ```bash az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> ```
- > [!NOTE]
- > If you're running the function version other than v3.0, please check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime` parameter to supported value.
1. Deploy the function project to Azure:
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive Tier overview description: Learn about Archive Tier Support for Azure Backup. Previously updated : 10/23/2021 Last updated : 02/28/2022
You can view the archive tier pricing from our [pricing page](azure-backup-prici
| Workloads | Preview | Generally available | | | | |
-| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, UK South, Central US, East US 2, West US, West US 2, West Central US, East US, South Central US, North Central US, West Europe, US Gov Virginia, US Gov Texas, US Gov Arizona. |
-| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South. | None |
+| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, UK South, Central US, East US 2, West US, West US 2, West Central US, East US, South Central US, North Central US, West Europe, US Gov Virginia, US Gov Texas, US Gov Arizona, UAE North, Germany West Central, China East 2, China North 2, Norway East. |
+| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South, UAE North, Germany West Central, Norway East. | None |
## Frequently asked questions
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
description: Provides a summary of support settings and limitations when backing
Last updated 5/07/2020 +++ # Support matrix for Azure file share backup
You can use the [Azure Backup service](./backup-overview.md) to back up Azure fi
## Supported regions
-### GA regions for Azure file shares backup
-
-Azure file shares backup is available in all regions **except** for: Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, US Gov Iowa
+Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, and US Gov Iowa.
## Supported storage accounts
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
This article explains how to use _Enhanced policy_ to configure _Multiple Backups Per Day_ and back up [Trusted Launch VMs](../virtual-machines/trusted-launch.md) with Azure Backup service. _Enhanced policy_ for VM backup is in preview.
-Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only. To enroll your subscription for backup of Trusted Launch VM, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
+Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only.
>[!Important] >The existing [default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) won't support protecting newer Azure offerings, such as Trusted Launch VM, UltraSSD, Shared disk, and Confidential Azure VMs.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rbac-rs-vault.md
Title: Manage Backups with Azure role-based access control
description: Use Azure role-based access control to manage access to backup management operations in Recovery Services vault. Previously updated : 01/27/2022 Last updated : 02/28/2022
The following table captures the Backup management actions and corresponding min
### Minimum role requirements for the Azure File share backup
-The following table captures the Backup management actions and corresponding role required to perform Azure File share operation.
+The following table captures the Backup management actions and corresponding Azure role required to perform that operation.
| Management Operation | Role Required | Resources | | | | |
-| Enable backup of Azure File shares | Backup Contributor |Recovery Services vault |
-| | Storage Account Backup Contributor | Storage account resource |
+| Enable backup from Recovery Services vault | Backup Contributor | Recovery Services vault |
+| | Storage account Contributor | Storage account resource |
+| Enable backup from file share blade | Backup Contributor | Recovery Services vault |
+| | Storage account Contributor | Storage account Resource |
+| | Contributor | Subscription |
| On-demand backup of VM | Backup Operator | Recovery Services vault | | Restore File share | Backup Operator | Recovery Services vault | | | Storage Account Backup Contributor | Storage account resources where restore source and Target file shares are present |
The following table captures the Backup management actions and corresponding rol
| Unregister storage account from vault |Backup Contributor | Recovery Services vault | | |Storage Account Contributor | Storage account resource|
+>[!Note]
+>If you have Contributor access at the resource group level and want to configure backup from the file share blade, make sure you have the *microsoft.recoveryservices/Locations/backupStatus/action* permission at the subscription level. To do so, create a [*custom role*](../role-based-access-control/custom-roles-portal.md#start-from-scratch) and assign this permission.
+ ## Next steps * [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md): Get started with Azure RBAC in the Azure portal.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup only through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm). <br><br> **Feature details** <br> <ul><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Currently, you can restore as [Create VM](./backup-azure-arm-restore-vms.md#create-a-vm), or [Restore disk](./backup-azure-arm-restore-vms.md#restore-disks) only. </li><li> [vTPM state](../virtual-machines/trusted-launch.md#vtpm) doesn't persist while you restore a VM from a recovery point. Therefore, scenarios that require vTPM persistence may not work across backup and restore operations. </li></ul>
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Currently, you can restore as [Create VM](./backup-azure-arm-restore-vms.md#create-a-vm), or [Restore disk](./backup-azure-arm-restore-vms.md#restore-disks) only. </li><li> Backup is supported in all regions where Trusted Launch VM is available. </li></ul>
## VM storage support
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
At this time, IPv6 is not supported. Azure Bastion supports IPv4 only. This mean
Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select does not overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following in the name: * core.windows.net * azure.com
+* vault.azure.net
Note that if you are using a Private endpoint integrated Azure Private DNS Zone, the [recommended DNS zone name](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion is *not* supported with these setups.
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
You can configure this setting using the following methods:
An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
-Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instances depends on what actions you are taking when connected to the client VM. For example, if you are doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.
+Each instance can support 25 concurrent RDP connections and 50 concurrent SSH connections for medium workloads (see [Azure subscription limits and quotas](../azure-resource-manager/management/azure-subscription-service-limits.md) for more information). The number of connections per instance depends on what actions you are taking when connected to the client VM. For example, if you are doing something data-intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.
Instances are created in the AzureBastionSubnet. To allow for host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Title: 'Quickstart: Deploy Bastion from VM settings'
+ Title: 'Quickstart: Deploy Bastion with default settings'
-description: Learn how to create an Azure Bastion host from virtual machine settings and connect to the VM securely through your browser via private IP address.
+description: Learn how to deploy Bastion with default settings from the Azure portal.
#Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
-# Quickstart: Deploy Azure Bastion from VM settings
+# Quickstart: Deploy Azure Bastion with default settings
-This quickstart article shows you how to deploy Azure Bastion to your virtual network from the Azure portal based on settings from an existing virtual machine. After you deploy Bastion, the RDP/SSH experience is available to all of the virtual machines in the virtual network. Azure Bastion is a PaaS service that is maintained for you. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+In this quickstart, you'll learn how to deploy Azure Bastion with default settings to your virtual network using the Azure portal. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
-In this quickstart, after you deploy Bastion, you connect to your VM via private IP address using the Azure portal. When you connect to the VM, it doesn't need a public IP address, client software, agent, or a special configuration. If your VM has a public IP address that you don't need for anything else, you can remove it.
+In this quickstart, you deploy Bastion from your VM resource using the Azure portal. Bastion is deployed using default settings that are based on the virtual network in which your VM is located. You then connect to your VM using RDP/SSH connectivity and the VM's private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
+
+Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
## <a name="prereq"></a>Prerequisites * **An Azure account with an active subscription**. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* **A VM in a VNet**. This quickstart lets you quickly deploy Bastion to a VNet using settings from the virtual machine to which you want to connect. Bastion pulls the required values from the VM and deploys to the VNet based on these values. The virtual machine itself doesn't become a bastion host.
+* **A VM in a VNet**.
+
+ When you deploy Bastion using default values, the values are pulled from the VNet in which your VM resides. This VM doesn't become a part of the Bastion deployment itself, but you do connect to it later in the exercise.
- * If you don't already have a VM in a VNet, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
- * If you need example values, see the provided [Example values](#values).
- * If you already have a virtual network, make sure to select it on the Networking tab when you create your VM.
- * If you don't already have a virtual network, you can create one at the same time you create your VM.
- * You don't need to have a public IP address for this VM in order to connect via Azure Bastion.
+ * If you don't already have a VM in a VNet, create one using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md), or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md).
+ * If you need example values, see the [Example values](#values) section.
+ * If you already have a virtual network, make sure it's selected on the Networking tab when you create your VM.
+ * If you don't have a virtual network, you can create one at the same time you create your VM.
* **Required VM roles:**
You can use the following example values when creating this configuration, or yo
**Bastion values:**
-When you deploy from VM settings, Bastion is automatically configured with default values. You don't need to specify any additional values for this exercise. However, once Bastion deploys, you can later modify [configuration settings](configuration-settings.md). For example, the SKU that is automatically configured is the Basic SKU. To support more Bastion features, you can easily [upgrade the SKU](upgrade-sku.md) after the deployment completes.
+When you deploy from VM settings, Bastion is automatically configured with default values.
-After completing this configuration, you'll have an Azure Bastion deployment with the values listed in the following table:
+ You can't modify or specify additional values for a default deployment. However, once Bastion deploys, you can later modify [settings](configuration-settings.md). For example, the default SKU is the Basic SKU. You can later upgrade to the Standard SKU to support more features.
-|**Name** | **Value** |
+|**Name** | **Default value** |
|---|---|
|AzureBastionSubnet | This subnet is created within the VNet as a /26 |
|SKU | Basic |
| Name | Based on the virtual network name |
| Public IP address name | Based on the virtual network name |
-## <a name="createvmset"></a>Deploy Bastion to a VNet
+## <a name="createvmset"></a>Deploy Bastion
-There are a few different ways to deploy Bastion to a virtual network. In this quickstart, you deploy Bastion from your virtual machine settings in the Azure portal (you don't sign in and deploy from your VM directly).
+In this quickstart, you deploy Bastion from your virtual machine settings in the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion from your VM directly.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal, navigate to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
+1. In the portal, navigate to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion**. :::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png":::
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you created a bastion host for your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following step if you want to connect to a virtual machine scale set.
+In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following step if you want to connect to a virtual machine scale set.
> [!div class="nextstepaction"]
-> [Connect to a virtual machine scale set using Azure Bastion](bastion-connect-vm-scale-set.md)
+> [Connect to a virtual machine scale set using Azure Bastion](bastion-connect-vm-scale-set.md)
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Title: Create and run a chaos experiment using Azure Chaos Studio
description: Understand the steps to create and run a Chaos Studio experiment in 10mins -+ Last updated 11/10/2021
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md
Type `exit` to terminate the session.
[customex]: ../virtual-machines/extensions/custom-script-windows.md [profile]: /powershell/module/microsoft.powershell.core/about/about_profiles [azmount]: ../storage/files/storage-how-to-use-files-windows.md
-[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
+[githubtoken]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/whats-new.md
description: This article is regularly updated with news about the Azure Cogniti
Previously updated : 11/08/2021 Last updated : 02/24/2022 # What's new in Language Understanding
Learn what's new in the service. These items include release notes, videos, blog
## Release notes
+### February 2022
+* LUIS containers can be used in [disconnected environments](../containers/disconnected-containers.md?context=/azure/cognitive-services/luis/context/context).
+ ### January 2022 * [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.2) to v1.8.2 * Added [English (UK)](luis-language-support.md) to supported languages.
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 01/12/2022 Last updated : 02/24/2022 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability | |--|--|--|--|
-| [LUIS][lu-containers] | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available |
+| [LUIS][lu-containers] | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the detection. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. This <br> container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Language service][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Generally available | | [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview - [request access](https://aka.ms/csgate-translator). <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Containers enable you to run Cognitive Services APIs in your own environment, an
* [Speech to Text (Standard)](../speech-service/speech-container-howto.md?tabs=stt) * [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer)
+* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md)
* Azure Cognitive Service for Language * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md) * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
Title: Create a Microsoft Azure confidential ledger by using Azure Resource Man
description: Learn how to create a Microsoft Azure confidential ledger by using an Azure Resource Manager template. -+
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Last updated 11/02/2021 -+ ms.devlang: azurecli
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
+
+ Title: Configure Container Group Egress with NAT Gateway
+description: Configure NAT gateway for Azure Container Instances workloads that use the NAT gateway's public IP address for static egress
+++++ Last updated : 02/28/2022++
+# Configure a NAT gateway for static IP address for outbound traffic from a container group
+
+Setting up a [container group](container-instances-container-groups.md) with an external-facing IP address allows external clients to use the IP address to access a container in the group. For example, a browser can access a web app running in a container. However, currently a container group uses a different IP address for outbound traffic. This egress IP address isn't exposed programmatically, which makes container group monitoring and configuration of client firewall rules more complex.
+
+This article provides steps to configure a container group in a [virtual network](container-instances-virtual-network-concepts.md) integrated with a [Network Address Translation (NAT) gateway](../virtual-network/nat-gateway/nat-overview.md). By configuring a NAT gateway to SNAT a subnet address range delegated to Azure Container Instances (ACI), you can identify outbound traffic from your container groups. The container group egress traffic will use the public IP address of the NAT gateway. A single NAT gateway can be used by multiple container groups deployed in the virtual network's subnet delegated to ACI.
+
+In this article you use the Azure CLI to create the resources for this scenario:
+
+* Container groups deployed on a delegated subnet [in the virtual network](container-instances-vnet.md)
+* A NAT gateway deployed in the network with a static public IP address
+
+You then validate egress from example container groups through the NAT gateway.
+
+> [!NOTE]
+> The ACI service recommends integrating with a NAT gateway for containerized workloads that have static egress but not static ingress requirements. For an ACI architecture that supports both static ingress and egress, see the following tutorial: [Use Azure Firewall for ingress and egress](container-instances-egress-ip-address.md).
+## Before you begin
+You must satisfy the following requirements to complete this tutorial:
+
+**Azure CLI**: You must have the Azure CLI installed on your local computer. If you need to install or upgrade, see [Install the Azure CLI][azure-cli-install].
+
+**Azure resource group**: If you don't have an Azure resource group already, create a resource group with the [az group create][az-group-create] command. Below is an example.
+```azurecli
+az group create --name myResourceGroup --location eastus
+```
+## Deploy ACI in a virtual network
+
+In a typical case, you might already have an Azure virtual network in which to deploy a container group. For demonstration purposes, the following commands create a virtual network and subnet when the container group is created. The subnet is delegated to Azure Container Instances.
+
+The container group runs a small web app from the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
+
+> [!TIP]
+> To simplify the following command examples, use an environment variable for the resource group's name:
+> ```console
+> export RESOURCE_GROUP_NAME=myResourceGroup
+> ```
+> This tutorial will make use of the environment variable going forward.
+Create the container group with the [az container create][az-container-create] command:
+
+```azurecli
+az container create \
+ --name appcontainer \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --image mcr.microsoft.com/azuredocs/aci-helloworld \
+ --vnet aci-vnet \
+ --vnet-address-prefix 10.0.0.0/16 \
+ --subnet aci-subnet \
+ --subnet-address-prefix 10.0.0.0/24
+```
+
+> [!NOTE]
+> Adjust the value of `--subnet-address-prefix` for the IP address space you need in your subnet. The smallest supported subnet is /29, which provides eight IP addresses. Some IP addresses are reserved for use by Azure; you can read more about them [here](../virtual-network/ip-services/private-ip-addresses.md).
+## Create a public IP address
+
+In the following sections, use the Azure CLI to deploy an Azure NAT gateway in the virtual network. For background, see [Tutorial: Create a NAT gateway using Azure CLI](../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
+
+First, use the [az network public-ip create][az-network-public-ip-create] command to create a public IP address for the NAT gateway. The gateway uses this address to access the Internet. You will receive a warning about an upcoming breaking change where Standard SKU IP addresses will be availability zone aware by default. You can learn more about the use of availability zones and public IP addresses [here](../virtual-network/ip-services/virtual-network-network-interface-addresses.md).
+
+```azurecli
+az network public-ip create \
+ --name myPublicIP \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --sku standard \
+ --allocation static
+```
+
+Store the public IP address in a variable. We will use this later during the validation step.
+
+```azurecli
+NG_PUBLIC_IP="$(az network public-ip show \
+ --name myPublicIP \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --query ipAddress --output tsv)"
+```
+
+## Deploy a NAT gateway into a virtual network
+
+Use the following [az network nat gateway create][az-network-nat-gateway-create] command to create a NAT gateway that uses the public IP address you created in the previous step.
+
+```azurecli
+az network nat gateway create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name myNATgateway \
+ --public-ip-addresses myPublicIP \
+ --idle-timeout 10
+```
+## Configure NAT service for source subnet
+
+We'll configure the source subnet **aci-subnet** to use a specific NAT gateway resource **myNATgateway** with [az network vnet subnet update][az-network-vnet-subnet-update]. This command will activate the NAT service on the specified subnet.
+
+```azurecli
+az network vnet subnet update \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --vnet-name aci-vnet \
+ --name aci-subnet \
+ --nat-gateway myNATgateway
+```
+
+## Test egress from a container group
+
+To validate egress from the virtual network, deploy a test container group into the delegated subnet and check which source IP address it presents to the internet. Previously, you stored the NAT gateway's public IP address in the variable `$NG_PUBLIC_IP`.
+
+Deploy the following sample container into the virtual network. When it runs, it sends a single HTTP request to `http://checkip.dyndns.org`, which displays the IP address of the sender (the egress IP address). If the NAT gateway is configured properly, the NAT gateway's public IP address is returned.
+
+```azurecli
+az container create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name testegress \
+ --image mcr.microsoft.com/azuredocs/aci-tutorial-sidecar \
+ --command-line "curl -s http://checkip.dyndns.org" \
+ --restart-policy OnFailure \
+ --vnet aci-vnet \
+ --subnet aci-subnet
+```
+
+View the container logs to confirm the IP address is the same as the public IP address we created in the first step of the tutorial.
+
+```azurecli
+az container logs \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name testegress
+```
+
+Output is similar to:
+
+```console
+<html><head><title>Current IP Check</title></head><body>Current IP Address: 52.142.18.133</body></html>
+```
+This IP address should match the public IP address created in the first step of the tutorial.
+
+```Bash
+echo $NG_PUBLIC_IP
+```
+
+## Next steps
+
+In this article, you set up container groups in a virtual network behind an Azure NAT gateway. By using this configuration, you set up a single, static IP address egress from Azure Container Instances container groups.
+
+For troubleshooting assistance, see [Troubleshoot Azure Virtual Network NAT connectivity](../virtual-network/nat-gateway/troubleshoot-nat.md).
+
+[az-group-create]: /cli/azure/group#az_group_create
+[az-container-create]: /cli/azure/container#az_container_create
+[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
+[az-network-public-ip-create]: /cli/azure/network/public-ip/#az_network_public_ip_create
+[az-network-public-ip-show]: /cli/azure/network/public-ip/#az_network_public_ip_show
+[az-network-nat-gateway-create]: /cli/azure/network/nat/gateway/#az_network_nat_gateway_create
+[az-network-vnet-subnet-update]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_update
+[az-container-exec]: /cli/azure/container#az_container_exec
+[azure-cli-install]: /cli/azure/install-azure-cli
container-instances Container Instances Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-template.md
description: In this quickstart, you use an Azure Resource Manager template to q
Last updated 04/30/2020 -+
container-instances Container Instances Tutorial Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-app.md
The Dockerfile in the sample application shows how the container is built. It st
```Dockerfile FROM node:8.9.3-alpine RUN mkdir -p /usr/src/app
-COPY ./app/ /usr/src/app/
+COPY ./app/* /usr/src/app/
WORKDIR /usr/src/app RUN npm install CMD node /usr/src/app/index.js
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* **Azure Load Balancer** - Placing an Azure Load Balancer in front of container instances in a networked container group is not supported * **Global virtual network peering** - Global peering (connecting virtual networks across Azure regions) is not supported * **Public IP or DNS label** - Container groups deployed to a virtual network don't currently support exposing containers directly to the internet with a public IP address or a fully qualified domain name
-* **Virtual Network NAT** - Container groups deployed to a virtual network don't currently support using a NAT gateway resource for outbound internet connectivity.
## Other limitations
In the following diagram, several container groups have been deployed to a subne
<!-- LINKS - Internal --> [az-container-create]: /cli/azure/container#az_container_create
-[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
+[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
container-registry Container Registry Get Started Geo Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md
Last updated 10/06/2020 -+
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
description: Learn how to isolate and restrict the restore permissions for conti
Previously updated : 02/16/2022 Last updated : 02/28/2022
az role assignment create --role "CosmosRestoreOperator" --assignee <email> --sc
``` ### Assign capability to restore from a specific account-
-* Assign a user write action on the specific resource group. This action is required to create a new account in the resource group.
-
-* Assign the *CosmosRestoreOperator* built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the *RestorableDatabaseAccount* is retrieved from the `ID` property in the output of `az cosmosdb restorable-database-account list` (if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell).
-
- ```azurecli-interactive
- az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope <RestorableDatabaseAccount>
- ```
+This operation is currently not supported.
### Assign capability to restore from any source account in a resource group. This operation is currently not supported.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
const servicePrincipal = new ClientSecretCredential(
"<client-application-id>", "<client-application-secret>"); const client = new CosmosClient({
- "<account-endpoint>",
+ endpoint: "<account-endpoint>",
aadCredentials: servicePrincipal }); ```
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
ms.devlang: java Previously updated : 02/15/2022 Last updated : 03/01/2022
> * [Node.js](create-sql-api-nodejs.md) > * [Python](create-sql-api-python.md)
-This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector is based on Spark 3.1.x.
+This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
-Throughout this quick tutorial, we rely on [Azure Databricks Runtime 8.0 with Spark 3.1.1](/azure/databricks/release-notes/runtime/8.0) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector.
+Throughout this quick tutorial, we rely on [Azure Databricks Runtime 8.0 with Spark 3.1.1](/azure/databricks/release-notes/runtime/8.0) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector, but you can also use [Azure Databricks Runtime 10.3 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.3).
-You can use any other Spark 3.1.1 spark offering as well, also you should be able to use any language supported by Spark (PySpark, Scala, Java, etc.), or any Spark interface you are familiar with (Jupyter Notebook, Livy, etc.).
+You can use any other Spark 3.1.1 or 3.2.1 offering as well. You should also be able to use any language supported by Spark (PySpark, Scala, Java, and so on), or any Spark interface you're familiar with (Jupyter Notebook, Livy, and so on).
## Prerequisites * An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
-* [Azure Databricks](/azure/databricks/release-notes/runtime/8.0) runtime 8.0 with Spark 3.1.1.
+* [Azure Databricks](/azure/databricks/release-notes/runtime/8.0) runtime 8.0 with Spark 3.1.1 or [Azure Databricks](/azure/databricks/release-notes/runtime/10.3) runtime 10.3 with Spark 3.2.1.
* (Optional) [SLF4J binding](https://www.slf4j.org/manual.html) is used to associate a specific logging framework with SLF4J. SLF4J is only needed if you plan to use logging; in that case, also download an SLF4J binding, which links the SLF4J API with the logging implementation of your choice. See the [SLF4J user manual](https://www.slf4j.org/manual.html) for more information.
-Install Cosmos DB Spark Connector, in your spark Cluster [azure-cosmos-spark_3-1_2-12-4.3.1.jar](https://search.maven.org/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-1_2-12/4.3.1/jar)
+Install the Cosmos DB Spark Connector in your Spark cluster [using the latest version for Spark 3.1.x](https://aka.ms/azure-cosmos-spark-3-1-download) or [using the latest version for Spark 3.2.x](https://aka.ms/azure-cosmos-spark-3-2-download).
The getting started guide is based on PySpark; however, you can use the equivalent Scala version as well. You can run the following code snippet in an Azure Databricks PySpark notebook.
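
As a rough illustration only (not the article's own snippet), a read and write against a container with the Spark 3 connector generally looks like the following PySpark sketch. The account endpoint, key, database, and container names are placeholders, and the container's partition key is assumed to be `/id`:

```python
# Minimal sketch for the Azure Cosmos DB Spark 3 connector; all account values are placeholders.
# Assumes the azure-cosmos-spark jar is installed on the cluster and `spark` is the SparkSession
# provided by the Databricks notebook.
cfg = {
    "spark.cosmos.accountEndpoint": "https://<your-account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<your-account-key>",
    "spark.cosmos.database": "<your-database>",
    "spark.cosmos.container": "<your-container>",
}

# Write a small DataFrame to the container (the container's partition key is assumed to be /id).
spark.createDataFrame([("id-1", "Alice"), ("id-2", "Bob")], ["id", "name"]) \
    .write.format("cosmos.oltp").options(**cfg).mode("APPEND").save()

# Read the data back, letting the connector infer the schema from sampled items.
df = (spark.read.format("cosmos.oltp")
      .options(**cfg)
      .option("spark.cosmos.read.inferSchema.enabled", "true")
      .load())
df.show()
```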
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
If you have any feedback or ideas on how to improve your experience create an is
* [Release notes for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-changelog) ## Download
-* [Download of Cosmos DB Spark connectro for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-changelog)
-* [Download of Cosmos DB Spark connectro for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-download)
+* [Download of Cosmos DB Spark connector for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-download)
+* [Download of Cosmos DB Spark connector for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-download)
Azure Cosmos DB Spark connector is available on [Maven Central Repo](https://search.maven.org/search?q=g:com.azure.cosmos.spark).
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
description: Learn how to diagnose and fix request rate too large exceptions.
Previously updated : 08/25/2021 Last updated : 02/28/2022
Here are some examples of partitioning strategies that lead to hot partitions:
#### How to identify the hot partition
-To verify if there is a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
+To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
-Each PartitionKeyRangeId maps to one physical partition. If there is one PartitionKeyRangeId that has significantly higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
+Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has significantly higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
:::image type="content" source="media/troubleshoot-request-rate-too-large/split-norm-utilization-by-pkrange-hot-partition.png" alt-text="Normalized RU Consumption by PartitionKeyRangeId chart with a hot partition.":::
This sample output shows that in a particular minute, the logical partition key
#### Recommended solution Review the guidance on [how to choose a good partition key](../partitioning-overview.md#choose-partitionkey).
-If there is high percent of rate limited requests and no hot partition:
+If there's a high percentage of rate limited requests and no hot partition:
- You can [increase the RU/s](../set-throughput.md) on the database or container using the client SDKs, Azure portal, PowerShell, CLI or ARM template. Follow [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md) to determine the right RU/s to set.
-If there is high percent of rate limited requests and there is an underlying hot partition:
-- Long-term, for best cost and performance, consider **changing the partition key**. The partition key cannot be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.-- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This is not recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
+If there's a high percentage of rate limited requests and there's an underlying hot partition:
+- Long-term, for best cost and performance, consider **changing the partition key**. The partition key can't be updated in place, so this requires migrating the data to a new container with a different partition key. Azure Cosmos DB supports a [live data migration tool](https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/) for this purpose.
+- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost.
> [!TIP] > When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation.
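
As a minimal sketch of how the tip's arithmetic combines with a programmatic scale-up, the following assumes the `azure-cosmos` Python SDK and placeholder account, database, and container names (none of these come from this article):

```python
# Sketch only: estimate the highest RU/s that can be applied in an instantaneous scale-up,
# then apply it. All names and values below are placeholders.
from azure.cosmos import CosmosClient

physical_partitions = 5                            # number of distinct PartitionKeyRangeIds observed
max_instant_rus = physical_partitions * 10_000     # 5 * 10,000 = 50,000 RU/s

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("<your-database>").get_container_client("<your-container>")

# Scaling above max_instant_rus triggers the asynchronous operation that adds physical partitions.
container.replace_throughput(max_instant_rus)
```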
For example, this sample output shows that each minute, 30% of Create Document r
:::image type="content" source="media/troubleshoot-request-rate-too-large/throttled-requests-diagnostic-logs-results.png" alt-text="Requests with 429 in Diagnostic Logs."::: #### Recommended solution
+##### Use the Azure Cosmos DB capacity planner
+You can use the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to understand the best provisioned throughput for your workload, based on the volume and type of operations and the size of documents. You can further customize the calculations by providing sample data to get a more accurate estimate.
##### 429s on create, replace, or upsert document requests
- By default, all properties are indexed in the SQL API. Tune the [indexing policy](../index-policy.md) to only index the properties needed. This lowers the Request Units required per create document operation, which reduces the likelihood of seeing 429s or allows you to achieve higher operations per second for the same amount of provisioned RU/s.
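
A minimal sketch of creating a container with a narrowed indexing policy, assuming the `azure-cosmos` Python SDK; the database, container, and property paths are illustrative placeholders, not values from this article:

```python
# Illustrative only: index a single property and exclude the rest to reduce the RU cost of writes.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.create_database_if_not_exists("<your-database>")

narrow_indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/category/?"}],    # only the property your queries filter on
    "excludedPaths": [{"path": "/*"}],             # skip indexing every other property
}

container = database.create_container_if_not_exists(
    id="<your-container>",
    partition_key=PartitionKey(path="/category"),
    indexing_policy=narrow_indexing_policy,
)
```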
This will lower the Request Units required per create document operation, which
- Follow the guidance to [troubleshoot queries with high RU charge](troubleshoot-query-performance.md#querys-ru-charge-is-too-high) ##### 429s on execute stored procedures-- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It is not recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client-side, using the Cosmos SDKs.
+- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It isn't recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client-side, using the Cosmos SDKs.
## Rate limiting on metadata requests
Metadata rate limiting can occur when you are performing a high volume of metada
- List databases or containers in a Cosmos account - Query for offers to see the current provisioned throughput
-There is a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and is not recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
+There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits).
#### How to investigate Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Fi
## Rate limiting due to transient service error
-This 429 error is returned when the request encounters a transient service error. Increasing the RU/s on the database or container will have no impact and is not recommended.
+This 429 error is returned when the request encounters a transient service error. Increasing the RU/s on the database or container will have no impact and isn't recommended.
#### Recommended solution Retry the request. If the error persists for several minutes, file a support ticket from the [Azure portal](https://portal.azure.com/).
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 02/21/2022 Last updated : 02/28/2022
baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=10000
*Step 1*: Input `sysparm_offset={offset}` either in **Base URL** or **Relative URL** as shown in the following screenshots: or *Step 2*: Set **Pagination rules** as either option 1 or option 2:
BaseUrl/api/now/table/t100
*Step 1*: Input `{id}` either in **Base URL** in the linked service configuration page or **Relative URL** in the dataset connection pane. or *Step 2*: Set **Pagination rules** as **"AbsoluteUrl.{id}" :"RANGE:1:100:1"**.
Response 2:
``` Set the end condition rule as **"EndCondition:$.data": "Empty"** to end the pagination when the value of the specific node in response is empty.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-1.png" alt-text="Screenshot showing the EndCondition setting for Example 4.1.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-1.png" alt-text="Screenshot showing the End Condition setting for Example 4.1.":::
- **Example 4.2: The pagination ends when the value of the specific node in response does not exist**
Response 2:
``` Set the end condition rule as **"EndCondition:$.data": "NonExist"** to end the pagination when the value of the specific node in response dose not exist.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-2.png" alt-text="Screenshot showing the EndCondition setting for Example 4.2.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-2.png" alt-text="Screenshot showing the End Condition setting for Example 4.2.":::
- **Example 4.3: The pagination ends when the value of the specific node in response exists**
Response 2:
``` Set the end condition rule as **"EndCondition:$.Complete": "Exist"** to end the pagination when the value of the specific node in response exists.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-3.png" alt-text="Screenshot showing the EndCondition setting for Example 4.3.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-3.png" alt-text="Screenshot showing the End Condition setting for Example 4.3.":::
- **Example 4.4: The pagination ends when the value of the specific node in response is a user-defined const value**
Response 2:
``` Set the end condition rule as **"EndCondition:$.Complete": "Const:true"** to end the pagination when the value of the specific node in response is a user-defined const value.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-4.png" alt-text="Screenshot showing the EndCondition setting for Example 4.4.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-4.png" alt-text="Screenshot showing the End Condition setting for Example 4.4.":::
- **Example 4.5: The pagination ends when the value of the header key in response equals to user-defined const value**
Response 2:
Set the end condition rule as **"EndCondition:headers.Complete": "Const:1"** to end the pagination when the value of the header key in response is equal to user-defined const value.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-5.png" alt-text="Screenshot showing the EndCondition setting for Example 4.5.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-5.png" alt-text="Screenshot showing the End Condition setting for Example 4.5.":::
- **Example 4.6: The pagination ends when the key exists in the response header**
Response 2:
Set the end condition rule as **"EndCondition:headers.CompleteTime": "Exist"** to end the pagination when the key exists in the response header.
- :::image type="content" source="media/connector-rest/pagination-rule-example-4-6.png" alt-text="Screenshot showing the EndCondition setting for Example 4.6.":::
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-6.png" alt-text="Screenshot showing the End Condition setting for Example 4.6.":::
#### Example 5: Set end condition to avoid endless requests when range rule is not defined
The last response is:
Set **MaxRequestNumber** to avoid endless requests as shown in the following screenshot:

#### Example 7: The RFC 5988 pagination rule is supported by default

The backend will automatically get the next URL based on the RFC 5988 style links in the header.

> [!TIP]
> If you don't want to enable this default pagination rule, you can set `supportRFC5988` to `false` or just delete it in the script.
>
-> :::image type="content" source="media/connector-rest/pagination-rule-example-7-disable-rfc5988.png" alt-text="Screenshot showing how to disable RFC 5988 setting for Example 7.":::
+> :::image type="content" source="media/connector-rest/pagination-rule-example-7-disable-rfc5988.png" alt-text="Screenshot showing how to disable R F C 5988 setting for Example 7.":::
#### Example 8: The next request URL is from the response body when using pagination in mapping data flows
But if the value of **@odata.nextLink** in the last response body is equal to th
This example shows how to set the pagination rule in mapping data flows when the response format is XML and the next request URL is from the response body. As shown in the following screenshot, the first URL is *https://\<user\>.dfs.core.windows.net/bugfix/test/movie_1.xml*. The response schema is shown below:
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
documentationcenter: na -+ na Last updated 09/9/2020
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
documentationcenter: na -+ na Last updated 09/28/2020 -+ # Quickstart: Create and configure Azure DDoS Protection Standard using Azure PowerShell
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
tags: azure-resource-manager
ms.assetid: -+ na + Last updated 05/17/2019
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/overview.md
tags: azure-resource-manager -+ na
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
-+ ms.devlang: azurecli Last updated 01/06/2021
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 02/10/2022 Last updated : 02/28/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low | | **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium | | **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Medium |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium | | **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB (Preview)
-[Further details and notes](other-threat-protections.md#cosmos-db)
+[Further details and notes](concept-defender-for-cosmos.md)
| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |--|--|:-:|--|
-| **PREVIEW - Access from a Tor exit node** | This Cosmos DB account was successfully accessed from an IP address known to be an active exit node of Tor, an anonymizing proxy. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. | Initial Access | High/Medium |
-| **PREVIEW - Access from a suspicious IP** | This Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. | Initial Access | Medium |
-| **PREVIEW - Access from an unusual location** | This Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low |
-| **PREVIEW - Unusual volume of data extracted** | An unusually large volume of data has been extracted from this Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
-| **PREVIEW - Extraction of Cosmos DB accounts keys via a potentially malicious script** | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
-| **PREVIEW - SQL injection: potential data exfiltration** | A suspicious SQL statement was used to query a container in this Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isnΓÇÖt authorized to access. <br><br> Due to the structure and capabilities of Cosmos DB queries, many known SQL injection attacks on Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
-| **PREVIEW - SQL injection: fuzzing attempt** | A suspicious SQL statement was used to query a container in this Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack wonΓÇÖt succeed in compromising the Cosmos DB account. <br><br> Nevertheless, itΓÇÖs an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
+| **PREVIEW - Access from a Tor exit node** | This Azure Cosmos DB account was successfully accessed from an IP address known to be an active exit node of Tor, an anonymizing proxy. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. | Initial Access | High/Medium |
+| **PREVIEW - Access from a suspicious IP** | This Azure Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. | Initial Access | Medium |
+| **PREVIEW - Access from an unusual location** | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low |
+| **PREVIEW - Unusual volume of data extracted** | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
+| **PREVIEW - Extraction of Azure Cosmos DB accounts keys via a potentially malicious script** | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
+| **PREVIEW - SQL injection: potential data exfiltration** | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
+| **PREVIEW - SQL injection: fuzzing attempt** | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
| | | | |
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
+
+ Title: Overview of Defender for Azure Cosmos DB
+description: Learn about the benefits and features of Microsoft Defender for Azure Cosmos DB.
++ Last updated : 02/28/2022++
+# Introduction to Microsoft Defender for Azure Cosmos DB
+
+APPLIES TO: :::image type="icon" source="media/icons/yes-icon.png" border="false"::: SQL/Core API
+
+Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities or malicious insiders.
+
+Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+
+You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-defender-for-cosmos.md) at either the subscription level, or the resource level.
+
+Microsoft Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+
+Microsoft Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance.
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
+|Protected Azure Cosmos DB API | :::image type="icon" source="./media/icons/yes-icon.png"::: SQL/Core API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Cassandra API <br> :::image type="icon" source="./media/icons/no-icon.png"::: MongoDB API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Table API <br> :::image type="icon" source="./media/icons/no-icon.png"::: Gremlin API |
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet |
+
+## What are the benefits of Microsoft Defender for Azure Cosmos DB
+
+Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Microsoft Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration.
+
+This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats.
+You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts.
+
+Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md).
+
+> [!TIP]
+> For a comprehensive list of all Defender for Azure Cosmos DB alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know which threats can be detected, and it helps SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
+
+## Alert types
+
+Threat intelligence security alerts are triggered for:
+
+- **Potential SQL injection attacks**: <br>
+  Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Microsoft Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats (a parameterized-query sketch follows this list).
+
+- **Anomalous database access patterns**: <br>
+ For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
+
+- **Suspicious database activity**: <br>
+ For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
+
+## Next steps
+
+In this article, you learned about Microsoft Defender for Azure Cosmos DB.
+
+> [!div class="nextstepaction"]
+> [Enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-defender-for-cosmos.md)
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: Microsoft Defender for Cloud - an introduction
description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multi-cloud resources and workloads. Previously updated : 12/12/2021 Last updated : 02/28/2022 # What is Microsoft Defender for Cloud?
The **Defender plans** page of Microsoft Defender for Cloud offers the following
- [Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md) - [Microsoft Defender for DNS](defender-for-dns-introduction.md) - [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)
+- [Microsoft Defender for Azure Cosmos DB (Preview)](concept-defender-for-cosmos.md)
Use the advanced protection tiles in the [workload protections dashboard](workload-protections-dashboard.md) to monitor and configure each of these protections.
defender-for-cloud Features Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/features-paas.md
Title: Microsoft Defender for Cloud features for supported Azure PaaS resources. description: This page shows the availability of Microsoft Defender for Cloud features for the supported Azure PaaS resources. Previously updated : 11/09/2021 Last updated : 02/27/2022 # Feature coverage for Azure PaaS services <a name="paas-services"></a>
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/24/2022 Last updated : 02/28/2022 # Protect your Kubernetes workloads
You can manually configure the Kubernetes workload add-on, or extension protecti
| Container CPU and memory limits should be enforced | Protect applications against DDoS attack | **Yes** | | Container images should be deployed only from trusted registries | Remediate vulnerabilities | **Yes** | | Least privileged Linux capabilities should be enforced for containers | Manage access and permissions | **Yes** |
- | Containers should only use allowed AppArmor profiles | Remediate security configurations | **Yes** |
+ | Containers should only use allowed AppArmor profiles | Remediate security configurations | **Yes** |
| Services should listen on allowed ports only | Restrict unauthorized network access | **Yes** | | Usage of host networking and ports should be restricted | Restrict unauthorized network access | **Yes** | | Usage of pod HostPath volume mounts should be restricted to a known list | Manage access and permissions | **Yes** |
You can manually configure the Kubernetes workload add-on, or extension protecti
| Kubernetes clusters should be accessible only over HTTPS | Encrypt data in transit | No | | Kubernetes clusters should disable automounting API credentials | Manage access and permissions | No | | Kubernetes clusters should not use the default namespace | Implement security best practices | No |
+ | Kubernetes clusters should not grant CAPSYSADMIN security capabilities | Manage access and permissions | No |
| Privileged containers should be avoided | Manage access and permissions | No | | Running containers as root user should be avoided | Manage access and permissions | No | ||||
defender-for-cloud Quickstart Enable Database Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-database-protections.md
+
+ Title: Enable database protection for your subscription
+description: Learn how to enable Microsoft Defender for Cloud for all of your database types for your entire subscription.
++ Last updated : 02/28/2022++
+# Quickstart: Microsoft Defender for Cloud database protection
+
+This article explains how to enable Microsoft Defender for Cloud's database (DB) protection for all database types that exist on your subscription.
+
+Workload protections are provided through the Microsoft Defender plans that are specific to the types of resources in your subscriptions.
+
+Microsoft Defender for Cloud database security lets you protect your entire database estate by detecting common attacks, supporting enablement, and providing threat response for the most popular database types in Azure.
+
+The types of protected databases are:
+
+- Azure SQL Databases
+- SQL servers on machines
+- Open-source relational databases (OSS RDB)
+- Azure Cosmos DB
+
+Database protection covers engines and data types that have different attack surfaces and security risks. Security detections are tailored to the specific attack surface of each DB type.
+
+Defender for Cloud's database protection detects unusual and potentially harmful attempts to access or exploit your databases. Advanced threat detection capabilities and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data are used to provide contextual security alerts. Those alerts include steps to mitigate the detected threats and prevent future attacks.
+
+You can enable database protection on your subscription, or exclude specific database resource types.
+
+## Prerequisites
+
+- You must have Subscription Owner access.
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+## Enable database protection on your subscription
+
+**To enable database protection on your subscription**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant subscription.
+
+1. If you want to enable specific plans, set the plans toggle to **On**.
+
+1. (Optional) Select **Select types** to enable specific resource types.
+
+ :::image type="content" source="media/quickstart-enable-database-protections/select-type.png" alt-text="Screenshot showing the toggles to enable specific resource types.":::
+
+ 1. Toggle each desired resource type to **On**.
+
+ :::image type="content" source="media/quickstart-enable-database-protections/resource-type.png" alt-text="Screenshot showing the types of resources available.":::
+
+ 1. Select **Continue**.
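+
+If you prefer to script this step, the subscription-level database plans can also be enabled with the Az.Security PowerShell module. The following is a minimal sketch, not taken from this article: the plan names used in the loop (for example, `CosmosDbs` and `OpenSourceRelationalDatabases`) are assumptions, so list the names available on your subscription with `Get-AzSecurityPricing` before relying on them.
+
+```powershell
+# Sketch: enable the database-related Defender plans on the current subscription.
+# The plan names below are assumptions - verify them with Get-AzSecurityPricing first.
+Connect-AzAccount
+Set-AzContext -Subscription "<Your subscription ID>"
+
+# Inspect the current plan names and pricing tiers
+Get-AzSecurityPricing | Select-Object Name, PricingTier
+
+# Enable each database plan by setting its tier to Standard
+foreach ($plan in @("SqlServers", "SqlServerVirtualMachines", "OpenSourceRelationalDatabases", "CosmosDbs")) {
+    Set-AzSecurityPricing -Name $plan -PricingTier "Standard"
+}
+```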
+
+## Next steps
+
+In this article, you learned how to enable Microsoft Defender for Cloud for all database types on your subscription. Next, read more about each of the resource types.
+
+- [Microsoft Defender for Azure SQL databases](defender-for-sql-introduction.md)
+- [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)
+- [Microsoft Defender for Azure Cosmos DB (Preview)](concept-defender-for-cosmos.md)
+- [Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)
defender-for-cloud Quickstart Enable Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-defender-for-cosmos.md
+
+ Title: Enable Microsoft Defender for Azure Cosmos DB
+description: Learn how to enable Microsoft Defender for Azure Cosmos DB's enhanced security features.
++ Last updated : 02/28/2022++
+# Quickstart: Enable Microsoft Defender for Azure Cosmos DB
+
+ Microsoft Defender for Azure Cosmos DB protection is available at both the [subscription level](#enable-database-protection-at-the-subscription-level) and the resource level. You can enable Microsoft Defender for Cloud on your subscription to protect all database types in your subscription, including Microsoft Defender for Azure Cosmos DB (recommended). You can also choose to enable Microsoft Defender for Azure Cosmos DB at the [resource level](#enable-microsoft-defender-for-azure-cosmos-db-at-the-resource-level) to protect a specific Azure Cosmos DB account.
+
+## Prerequisites
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+## Enable database protection at the subscription level
+
+Subscription-level enablement turns on Microsoft Defender for Cloud protection for all database types in your subscription (recommended).
+
+You can enable Microsoft Defender for Cloud protection on your subscription in order to protect all database types, for example, Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and OSS RDBs. You can also select specific resource types to protect when you configure your plan.
+
+When you enable Microsoft Defender for Cloud's enhanced security features on your subscription, Microsoft Defender for Azure Cosmos DB is automatically enabled for all of your Azure Cosmos DB accounts.
+
+**To enable database protection at the subscription level**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant subscription.
+
+1. Locate **Databases** and toggle the switch to **On**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/protection-type.png" alt-text="Screenshot showing the available protections you can enable." lightbox="media/quickstart-enable-defender-for-cosmos/protection-type-expanded.png":::
+
+1. Select **Save**.
+
+**To select specific resource types to protect when you configure your plan**:
+
+1. Follow steps 1 - 4 above.
+
+1. Select **Select types**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/select-type.png" alt-text="Screenshot showing where the option to select the type is located.":::
+
+1. Toggle the desired resource type switches to **On**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/resource-type.png" alt-text="Screenshot showing the available resources you can enable.":::
+
+1. Select **Confirm**.
+
+## Enable Microsoft Defender for Azure Cosmos DB at the resource level
+
+You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB account through the Azure portal, PowerShell, or the Azure CLI.
+
+**To enable Microsoft Defender for Cloud for a specific Azure Cosmos DB account**:
+
+### [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **your Azure Cosmos DB account** > **Settings**.
+
+1. Select **Microsoft Defender for Cloud**.
+
+1. Select **Enable Microsoft Defender for Azure Cosmos DB**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/enable-storage.png" alt-text="Screenshot of the option to enable Microsoft Defender for Azure Cosmos DB on your specified Azure Cosmos DB account.":::
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Install the [Az.Security](https://www.powershellgallery.com/packages/Az.Security/1.1.1) module.
+
+1. Call the [Enable-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection) command.
+
+ ```powershell
+ Enable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/"
+ ```
+
+1. Verify the Microsoft Defender for Azure Cosmos DB setting for your Azure Cosmos DB account with the [Get-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection) command.
+
+ ```powershell
+ Get-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/"
+ ```
+
+### [ARM template](#tab/arm-template)
+
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
+++
+## Simulate security alerts from Microsoft Defender for Azure Cosmos DB
+
+A full list of [supported alerts](alerts-reference.md) is available in the reference table of all Defender for Cloud security alerts.
+
+You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate their value and capabilities. Sample alerts also validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications).
+
+**To create sample alerts from Microsoft Defender for Azure Cosmos DB**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a Subscription Contributor user.
+
+1. Navigate to the Alerts page.
+
+1. Select **Create sample alerts**.
+
+1. Select the subscription.
+
+1. Select the relevant Microsoft Defender plan(s).
+
+1. Select **Create sample alerts**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/sample-alerts.png" alt-text="Screenshot showing the order needed to create an alert.":::
+
+After a few minutes, the alerts appear in the security alerts page. They also appear anywhere that you've configured to receive your Microsoft Defender for Cloud security alerts, for example, connected SIEMs and email notifications.
+
+## Next steps
+
+In this article, you learned how to enable Microsoft Defender for Azure Cosmos DB, and how to simulate security alerts.
+
+> [!div class="nextstepaction"]
+> [Automate responses to Microsoft Defender for Cloud triggers](workflow-automation.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/22/2022 Last updated : 02/28/2022 # What's new in Microsoft Defender for Cloud?
Updates in February include:
- [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters) - [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances)
+- [Microsoft Defender for Azure Cosmos DB plan released for preview](#microsoft-defender-for-azure-cosmos-db-plan-released-for-preview)
### Kubernetes workload protection for Arc enabled K8s clusters
-Defender for Containers for Kubernetes workloads previously only protected AKS. We have now extended the protective coverage to include Azure Arc enabled Kubernetes clusters.
+Defender for Containers for Kubernetes workloads previously only protected AKS. We've now extended the protective coverage to include Azure Arc enabled Kubernetes clusters.
Learn how to [set up your Kubernetes workload protection](kubernetes-workload-protections.md#set-up-your-workload-protection) for AKS and Azure Arc enabled Kubernetes clusters.
Learn how to [set up your Kubernetes workload protection](kubernetes-workload-pr
The new automated onboarding of GCP environments allows you to protect GCP workloads with Microsoft Defender for Cloud. Defender for Cloud protects your resources with the following plans: -- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations which are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources will be assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your resources across Azure, AWS, and GCP.
+- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations, which are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources will be assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your resources across Azure, AWS, and GCP.
- **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
The new automated onboarding of GCP environments allows you to protect GCP workl
Learn how to protect, and [connect your GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+### Microsoft Defender for Azure Cosmos DB plan released for preview
+
+We have extended Microsoft Defender for Cloud's database coverage. You can now enable protection for your Azure Cosmos DB databases.
+
+Microsoft Defender for Azure Cosmos DB is an Azure-native layer of security that detects any attempt to exploit databases in your Azure Cosmos DB accounts. It detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities or malicious insiders.
+
+It continuously analyzes the customer data stream generated by the Azure Cosmos DB services.
+
+When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with details of the suspicious activity, the relevant investigation steps, remediation actions, and security recommendations.
+
+There's no impact on database performance when enabling the service, because Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data.
+
+Learn more at [Introduction to Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
+
+We're also introducing a new enablement experience for database security. You can now enable Microsoft Defender for Cloud protection on your subscription to protect all database types, such as Azure Cosmos DB, Azure SQL Database, SQL servers on machines, and open-source relational databases, through one enablement process. Specific resource types can be included or excluded by configuring your plan.
+
+Learn how to [enable your database security at the subscription level](quickstart-enable-defender-for-cosmos.md#enable-database-protection-at-the-subscription-level).
+ ## January 2022 Updates in January include:
defender-for-cloud Supported Machines Endpoint Solutions Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds.md
Title: Microsoft Defender for Cloud's features according to OS, machine type, and cloud description: Learn about the availability of Microsoft Defender for Cloud features according to OS, machine type, and cloud deployment. Previously updated : 02/27/2022 Last updated : 02/28/2022
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for Key Vault](./defender-for-key-vault-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender for Resource Manager](./defender-for-resource-manager-introduction.md) | GA | GA | GA | | - [Microsoft Defender for Storage](./defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA | Not Available |
-| - [Threat protection for Cosmos DB](./other-threat-protections.md#threat-protection-for-azure-cosmos-db-preview) | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available |
| - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA | | - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available | | **Microsoft Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering is now in Public preview.
+FastPath support for virtual network peering is now in public preview, and both IPv4 and IPv6 scenarios are supported. IPv4 FastPath with VNet peering can be enabled on connections associated with both ExpressRoute Direct and ExpressRoute partner circuits. IPv6 FastPath with VNet peering support is limited to connections associated with ExpressRoute Direct.
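+
+For reference, FastPath itself is enabled on an existing connection by setting the gateway bypass property, as the truncated `Set-AzVirtualNetworkGatewayConnection` line above suggests. Here's a minimal sketch with placeholder connection and resource group names; enrolling in the preview features described in this section may still require a separate sign-up.
+
+```powershell
+# Sketch: enable FastPath (ExpressRouteGatewayBypass) on an existing connection.
+# The connection and resource group names are placeholders.
+$connection = Get-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName "ER-RG"
+$connection.ExpressRouteGatewayBypass = $true
+Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection
+```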
### FastPath and virtual network peering
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Azure Firewall Standard has the following known issues:
|Issue |Description |Mitigation | ||||
-|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/load-balancer-overview.md). We're exploring options to support this scenario in a future release.|
+|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/outbound-rules.md#limitations). We're exploring options to support this scenario in a future release.|
|Missing PowerShell and CLI support for ICMP|Azure PowerShell and CLI don't support ICMP as a valid protocol in network rules.|It's still possible to use ICMP as a protocol via the portal and the REST API. We're working to add ICMP in PowerShell and CLI soon.| |FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.| |Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.|
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy.md
Previously updated : 01/31/2022 Last updated : 02/28/2022
You can use `curl` to control various HTTP headers and simulate malicious traffi
1. On the WorkerVM virtual machine, open an administrator command prompt window. 2. Type the following command at the command prompt:
- `curl -A "BlackSun" <your web server address>`
+ `curl -A "HaxerMen" <your web server address>`
3. You'll see your Web server response. 4. Go to the Firewall Network rule logs on the Azure portal to find an alert similar to the following message:
- :::image type="content" source="media/premium-deploy/alert-message.png" alt-text="Alert message":::
+ ```
+    { "msg" : "TCP request from 10.0.100.5:16036 to 10.0.20.10:80. Action: Alert. Rule: 2032081. IDS:
+    USER_AGENTS Suspicious User Agent (HaxerMen). Priority: 1. Classification: A Network Trojan was
+    detected"}
+ ```
> [!NOTE] > It can take some time for the data to begin showing in the logs. Give it at least a couple minutes to allow for the logs to begin showing the data.
-5. Add a signature rule for signature 2008983:
+5. Add a signature rule for signature 2032081:
1. Select the **DemoFirewallPolicy** and under **Settings** select **IDPS**. 1. Select the **Signature rules** tab.
- 1. Under **Signature ID**, in the open text box type *2008983*.
+ 1. Under **Signature ID**, in the open text box type *2032081*.
1. Under **Mode**, select **Deny**. 1. Select **Save**. 1. Wait for the deployment to complete before proceeding.
You can use `curl` to control various HTTP headers and simulate malicious traffi
6. On WorkerVM, run the `curl` command again:
- `curl -A "BlackSun" <your web server address>`
+ `curl -A "HaxerMen" <your web server address>`
Since the HTTP request is now blocked by the firewall, you'll see the following output after the connection timeout expires:
You can use `curl` to control various HTTP headers and simulate malicious traffi
1. On the **IDPS (preview)** page, select the **Bypass list** tab. 2. Edit **MyRule** and set **Destination** to *10.0.20.10*, which is the ServerVM private IP address. 3. Select **Save**.
-1. Run the test again: `curl -A "BlackSun" http://server.2020-private-preview.com` and now you should get the `Hello World` response and no log alert. >
+1. Run the test again: `curl -A "HaxerMen" http://server.2020-private-preview.com` and now you should get the `Hello World` response and no log alert. >
#### To test IDPS for HTTPS traffic Repeat these curl tests using HTTPS instead of HTTP. For example:
-`curl --ssl-no-revoke -A "BlackSun" <your web server address>`
+`curl --ssl-no-revoke -A "HaxerMen" <your web server address>`
You should see the same results that you had with the HTTP tests.
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal-policy.md
The resource group contains all the resources for the tutorial.
### Create a VNet
-This VNet will have three subnets.
+This VNet will have two subnets.
> [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
You can keep your firewall resources for the next tutorial, or if no longer need
## Next steps > [!div class="nextstepaction"]
-> [Deploy and configure Azure Firewall Premium](premium-deploy.md)
+> [Deploy and configure Azure Firewall Premium](premium-deploy.md)
governance Assign Policy Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-azurecli.md
Title: "Quickstart: New policy assignment with Azure CLI"
description: In this quickstart, you use Azure CLI to create an Azure Policy assignment to identify non-compliant resources. Last updated 08/17/2021 -+ # Quickstart: Create a policy assignment to identify non-compliant resources with Azure CLI
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
Title: "Quickstart: New policy assignment with templates"
description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a policy assignment to identify non-compliant resources. Last updated 08/17/2021 -+ # Quickstart: Create a policy assignment to identify non-compliant resources by using an ARM template
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
A Windows device with the following minimum requirements:
* Minimum Free Memory: 1 GB * Minimum Free Disk Space: 10 GB
-## Supported versions
-Azure IoT Edge for Linux on Windows supports the following versions:
-- 1.1 LTS using [Azure IoT Edge 1.1 LTS](./version-history.md)-- Continuous Release (CR) using [Azure IoT Edge 1.2](./version-history.md) currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Platform support Azure IoT Edge for Linux on Windows supports the following architectures:
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-Try out Azure IoT Edge in this quickstart by deploying containerized code to a Linux on Windows IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using your own device to see how easy it is to use Azure IoT Edge for Linux on Windows.
+Try out Azure IoT Edge in this quickstart by deploying containerized code to a Linux on Windows IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using your own Windows client device to see how easy it is to use Azure IoT Edge for Linux on Windows. If you want to use Windows Server or an Azure VM to create your deployment, follow the steps in the how-to guide on [installing and provisioning Azure IoT Edge for Linux on a Windows device](how-to-provision-single-device-linux-on-windows-symmetric.md).
In this quickstart, you'll learn how to:
Make sure your IoT Edge device meets the following requirements:
* System Requirements * Windows 10<sup>1</sup>/11 (Pro, Enterprise, IoT Enterprise)
- * Windows Server 2019<sup>1</sup>/2022
- <sub><sup>1</sup> Windows 10 and Windows Server 2019 minimum build 17763 with all current cumulative updates installed.</sub>
+ <sub><sup>1</sup> Windows 10 minimum build 17763 with all current cumulative updates installed.</sub>
* Hardware requirements * Minimum Free Memory: 1 GB
Install IoT Edge for Linux on Windows on your device, and configure it with the
Run the following PowerShell commands on the target device where you want to deploy Azure IoT Edge for Linux on Windows. To deploy to a remote target device using PowerShell, use [Remote PowerShell](/powershell/module/microsoft.powershell.core/about/about_remote) to establish a connection to a remote device and run these commands remotely on that device.
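+
+For example, here's a minimal sketch of opening a remote session before running the deployment commands below. The computer name is a placeholder, and your target device must already be configured for PowerShell remoting.
+
+```powershell
+# Sketch: connect to a remote target device before running the EFLOW deployment commands.
+# "CONTOSO-EDGE-01" is a placeholder device name.
+$credential = Get-Credential
+Enter-PSSession -ComputerName "CONTOSO-EDGE-01" -Credential $credential
+```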
+1. In an elevated PowerShell session, run the following command to enable Hyper-V. For more information, see [Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v).
+
+ ```powershell
+ Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
+ ```
+ 1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows. <!-- 1.1 -->
You can confirm that the resource group is removed by using this command to view
az group list ```
-### Remove Azure IoT Edge for Linux on Windows
-
-<!-- 1.1 -->
-Use the dashboard extension in Windows Admin Center to uninstall Azure IoT Edge for Linux on Windows.
-
-1. Connect to the IoT Edge device in Windows Admin Center. The Azure dashboard tool extension loads.
-
-1. Select **Uninstall**. After Azure IoT Edge is removed, Windows Admin Center removes the Azure IoT Edge device connection entry from the **Start** page.
-
->[!Note]
->Another way to remove Azure IoT Edge from your Windows system is to select **Start** > **Settings** > **Apps** > **Azure IoT Edge LTS** > **Uninstall** on your IoT Edge device. This method removes Azure IoT Edge from your IoT Edge device, but leaves the connection behind in Windows Admin Center. To complete the removal, uninstall Windows Admin Center from the **Settings** menu as well.
-
-<!-- end 1.1 -->
-
-<!-- 1.2 -->
-1. On the Windows host OS, select **Start** > **Settings** > **Apps** > **Apps & features**.
-
-1. Then select **Azure IoT Edge** > **Uninstall**
-<!-- end 1.2 -->
+<!-- Uninstall IoT Edge for Linux on Windows H2 and content -->
## Next steps
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Set-EflowVM** command updates the virtual machine configuration with the r
For more information, use the command `Get-Help Set-EflowVM -full`. +
+## Set-EflowVmDNSServers
+
+The **Set-EflowVmDNSServers** command configures the DNS servers for the EFLOW virtual machine.
+
+| Parameter | Accepted values | Comments |
+| | | -- |
+| vendpointName | String value of the virtual endpoint name | Use _Get-EflowVmEndpoint_ to obtain the virtual interfaces assigned to the EFLOW VM. For example, **DESKTOP-CONTOSO-EflowInterface** |
+| dnsServers | List of DNS server IP addresses to use for name resolution | For example, **@("10.0.10.1")** |
+
+For more information, use the command `Get-Help Set-EflowVmDNSServers -full`.
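+
+For example, here's a usage sketch based on the parameters above; the interface name and DNS server address are placeholders taken from the table's examples.
+
+```powershell
+# Sketch: point the EFLOW VM's virtual interface at a specific DNS server.
+# Both values below are placeholders - get the real interface name with Get-EflowVmEndpoint.
+Set-EflowVmDNSServers -vendpointName "DESKTOP-CONTOSO-EflowInterface" -dnsServers @("10.0.10.1")
+```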
++ ## Set-EflowVmFeature The **Set-EflowVmFeature** command enables or disables the status of IoT Edge for Linux on Windows features.
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
[IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
+<!-- 1.1 -->
| Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- | | Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | | | Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | | | Windows Server 2019 | ![Windows Server 2019 + AMD64](./media/support/green-check.png) | | |
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+| Operating System | AMD64 | ARM32v7 | ARM64 |
+| - | -- | - | -- |
+| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
+| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
+| Windows 10 Pro | ![Windows 10 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 IoT Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows Server 2019 | ![Windows Server 2019 + AMD64](./media/support/green-check.png) | | |
+
+<sup>1</sup> Support for this platform using IoT Edge for Linux on Windows is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+<!-- end 1.2 -->
All Windows operating systems must be version 1809 (build 17763) or later.
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
The IoT Edge documentation on this site is available for two different versions
* **IoT Edge 1.2** contains content for new features and capabilities that are in the latest stable release. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version, which is based on IoT Edge 1.2 and contains the latest features and capabilities. * **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This version of the documentation also contains content for the IoT Edge for Linux on Windows long-term support version, which is based on IoT Edge 1.1 LTS.
- * This documentation version will be stable through the supported lifetime of version 1.1, and will not reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 3, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+ * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 3, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
For more information about IoT Edge releases, see [Azure IoT Edge supported systems](support.md).
+### IoT Edge for Linux on Windows
+Azure IoT Edge for Linux on Windows (EFLOW) supports the following versions:
+* **EFLOW Continuous Release (CR)**, based on the Azure IoT Edge 1.2 version, contains new features and capabilities that are in the latest stable release.
+* **EFLOW 1.1 (LTS)**, based on Azure IoT Edge 1.1, is the long-term support version. It will remain stable through its supported lifetime and won't include new features released in later versions. This version will be supported until December 2022 to match the IoT Edge 1.1 LTS release lifecycle.
+
+All new releases are made available in the [Azure IoT Edge for Linux on Windows project](https://github.com/Azure/iotedge-eflow).
+ ## Version history This table provides recent version history for IoT Edge package releases, and highlights documentation updates made for each version.
This table provides recent version history for IoT Edge package releases, and hi
| [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) | | [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) | +
+### IoT Edge for Linux on Windows
+| Release notes and assets | Type | Date | Highlights |
+| | - | - | - |
+| [Continuous Release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | Stable | January 2022 | **Public Preview** |
+| [1.1](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | Long-term support (LTS) | June 2021 | [Long-term support plan and supported systems updates](support.md) |
+ ## Next steps * [View all Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases)
iot-fundamentals Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-support-help.md
If you do submit a new question to Stack Overflow, please use one or more of the
- [Azure RTOS](https://stackoverflow.com/questions/tagged/azure-rtos) - [Azure Sphere](https://stackoverflow.com/questions/tagged/azure-sphere) - [Azure Time Series Insights](https://stackoverflow.com/questions/tagged/azure-timeseries-insights) - [Azure Percept](https://stackoverflow.com/questions/tagged/azure-percept) ## Stay informed of updates and new releases
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
-+ Last updated 02/23/2022 #Customer intent: As a developer new to IoT Hub, learn the basic concepts.
lab-services Get Started Manage Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/get-started-manage-labs.md
Title: Get started with Azure Lab Services description: This article describes how to get started with Azure Lab Services. -+ Last updated 11/18/2020
logic-apps Create Serverless Apps Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-serverless-apps-visual-studio.md
Title: Create an example serverless app with Visual Studio
-description: Using an Azure quickstart template, create, deploy, and manage an example serverless app with Azure Logic Apps and Azure Functions in Visual Studio
+description: Create, deploy, and manage an example serverless app with an Azure quickstart template, Azure Logic Apps and Azure Functions in Visual Studio.
ms.suite: integration
logic-apps Logic Apps Author Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-author-definitions.md
Title: Create, edit, or extend logic app JSON workflow definitions
-description: How to write, edit, and extend your logic app's JSON workflow definitions in Azure Logic Apps
+description: Write, edit, and extend your logic app's JSON workflow definitions in Azure Logic Apps.
ms.suite: integration--++ Last updated 01/01/2018
logic-apps Logic Apps Azure Resource Manager Templates Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-resource-manager-templates-overview.md
Title: Overview - Automate deployment for Azure Logic Apps
+ Title: Azure Resource Manager templates for Azure Logic Apps
description: Learn about Azure Resource Manager templates to automate deployment for Azure Logic Apps ms.suite: integration -+ Last updated 12/08/2021
logic-apps Logic Apps Batch Process Send Receive Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-batch-process-send-receive-messages.md
Title: Batch process messages as a group
-description: Send and receive messages in groups between your workflows by using batch processing in Azure Logic Apps
+description: Send and receive messages in groups between your workflows by using batch processing in Azure Logic Apps.
ms.suite: integration --++ Last updated 07/31/2020
logic-apps Logic Apps Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-content-type.md
Title: Handle content types
-description: Learn how to handle various content types in workflows during design time and run time in Azure Logic Apps
+description: Learn how to handle various content types in workflows during design time and run time in Azure Logic Apps.
ms.suite: integration--++ Last updated 07/20/2018
logic-apps Logic Apps Control Flow Branches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-branches.md
Title: Create or join parallel branches for actions in workflows
-description: Learn how to create or merge parallel running branches for independent workflow actions in Azure Logic Apps
+description: Learn how to create or merge parallel running branches for independent workflow actions in Azure Logic Apps.
ms.suite: integration--++ Last updated 10/10/2018
structure in your logic app's JSON definition instead, for example:
## Next steps
-* [Run steps based on a condition (conditional statements)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
-* [Run steps based on different values (switch statements)](../logic-apps/logic-apps-control-flow-switch-statement.md)
+* [Run steps based on a condition (condition action)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
+* [Run steps based on different values (switch action)](../logic-apps/logic-apps-control-flow-switch-statement.md)
* [Run and repeat steps (loops)](../logic-apps/logic-apps-control-flow-loops.md) * [Run steps based on grouped action status (scopes)](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Logic Apps Control Flow Conditional Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-conditional-statement.md
Title: Add conditional statements to workflows
-description: How to create conditions that control actions in workflows in Azure Logic Apps
+ Title: Add conditions to workflows
+description: Create conditions that control actions in workflows in Azure Logic Apps.
ms.suite: integration--++ Last updated 10/09/2018
-# Create conditional statements that control workflow actions in Azure Logic Apps
+# Add conditions to control workflow actions in Azure Logic Apps
To run specific actions in your logic app only after passing a specified condition,
-add a *conditional statement*. This control structure compares the data in your
+add a *condition action*. This control structure compares the data in your
workflow against specific values or fields. You can then specify different actions that run based on whether or not the data meets the condition. You can nest conditions inside each other. For example, suppose you have a logic app that sends too many emails when new items appear on a website's RSS feed.
-You can add a conditional statement to send email only
+You can add a condition action to send email only
when the new item includes a specific string. > [!TIP]
This logic app now sends mail only when the new items in the RSS feed meet your
## JSON definition
-Here's the high-level code definition behind a conditional statement:
+Here's the high-level code definition behind a condition action:
``` json "actions": {
Here's the high-level code definition behind a conditional statement:
## Next steps
-* [Run steps based on different values (switch statements)](../logic-apps/logic-apps-control-flow-switch-statement.md)
+* [Run steps based on different values (switch actions)](../logic-apps/logic-apps-control-flow-switch-statement.md)
* [Run and repeat steps (loops)](../logic-apps/logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](../logic-apps/logic-apps-control-flow-branches.md) * [Run steps based on grouped action status (scopes)](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Logic Apps Control Flow Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-loops.md
Title: Add loops to repeat actions
-description: Create loops that repeat workflow actions or process arrays in Azure Logic Apps
+description: Create loops that repeat workflow actions or process arrays in Azure Logic Apps.
ms.suite: integration--++ Last updated 01/05/2019
The default is one hour.
## Next steps
-* [Run steps based on a condition (conditional statements)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
-* [Run steps based on different values (switch statements)](../logic-apps/logic-apps-control-flow-switch-statement.md)
+* [Run steps based on a condition (condition action)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
+* [Run steps based on different values (switch action)](../logic-apps/logic-apps-control-flow-switch-statement.md)
* [Run or merge parallel steps (branches)](../logic-apps/logic-apps-control-flow-branches.md) * [Run steps based on grouped action status (scopes)](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Logic Apps Control Flow Run Steps Group Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md
Title: Group and run actions by scope
-description: Create scoped actions that run based on group status in Azure Logic Apps
+description: Create scoped actions that run based on group status in Azure Logic Apps.
ms.suite: integration-++ Last updated 10/03/2018- # Run actions based on group status by using scopes in Azure Logic Apps
First, create this sample logic app so that you can add a scope later:
Bing Maps service at an interval that you specify * A **Bing Maps - Get route** action that checks the travel time between two locations
-* A conditional statement that checks whether the
+* A condition action that checks whether the
travel time exceeds your specified travel time * An action that sends you email that current travel time exceeds your specified time
so save your work often.
> To visually simplify your view and hide each action's details in the designer, > collapse each action's shape as you progress through these steps.
-1. Add the **Bing Maps - Get route** action.
+1. Add the **Bing Maps - Get route** action.
1. If you don't already have a Bing Maps connection, you're asked to create a connection.
visit the [Azure Logic Apps user feedback site](https://aka.ms/logicapps-wish).
## Next steps
-* [Run steps based on a condition (conditional statements)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
-* [Run steps based on different values (switch statements)](../logic-apps/logic-apps-control-flow-switch-statement.md)
+* [Run steps based on a condition (condition action)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
+* [Run steps based on different values (switch action)](../logic-apps/logic-apps-control-flow-switch-statement.md)
* [Run and repeat steps (loops)](../logic-apps/logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](../logic-apps/logic-apps-control-flow-branches.md)
logic-apps Logic Apps Control Flow Switch Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-switch-statement.md
Title: Add switch statements to workflows
-description: How to create switch statements that control workflow actions based on specific values in Azure Logic Apps
+ Title: Add switch actions to workflows
+description: Create switch actions that control workflow actions based on specific values in Azure Logic Apps.
ms.suite: integration--++ Last updated 10/08/2018
-# Create switch statements that run workflow actions based on specific values in Azure Logic Apps
+# Create switch actions that run workflow actions based on specific values in Azure Logic Apps
To run specific actions based on the values of objects, expressions,
-or tokens, add a *switch* statement. This structure evaluates the object,
+or tokens, add a *switch* action. This structure evaluates the object,
expression, or token, chooses the case that matches the result,
-and runs specific actions only for that case. When the switch statement runs,
+and runs specific actions only for that case. When the switch action runs,
only one case should match the result. For example, suppose you want a logic app that takes different
email to an approver. Based on whether the approver selects
"Approve" or "Reject", the logic app follows different steps. > [!TIP]
-> Like all programming languages, switch statements
+> Like all programming languages, switch actions
> support only equality operators. If you need other > relational operators, such as "greater than", use a
-> [conditional statement](../logic-apps/logic-apps-control-flow-conditional-statement.md).
+> [condition action](../logic-apps/logic-apps-control-flow-conditional-statement.md).
> To ensure deterministic execution behavior, > cases must contain a unique and static value > instead of dynamic tokens or expressions.
with an Outlook.com account or a work or school account.
![Enter email details](./media/logic-apps-control-flow-switch-statement/send-approval-email-details.png)
-## Add switch statement
+## Add a switch action
-1. For this example, add a switch statement at the end
+1. For this example, add a switch action at the end
your sample workflow. After the last step, choose **New step**.
- When you want to add a switch statement between steps,
+ When you want to add a switch action between steps,
move the pointer over the arrow where you want to add
- the switch statement. Choose the **plus sign** (**+**)
+ the switch action. Choose the **plus sign** (**+**)
that appears, then choose **Add an action**. 1. In the search box, enter "switch" as your filter.
Select this action: **Switch - Control**
![Add switch](./media/logic-apps-control-flow-switch-statement/add-switch-statement.png)
- A switch statement appears with one case and a default case.
- By default, a switch statement requires at least one case plus the default case.
+ A switch action appears with one case and a default case.
+ By default, a switch action requires at least one case plus the default case.
- ![Empty default switch statement](./media/logic-apps-control-flow-switch-statement/empty-switch.png)
+ ![Empty default switch action](./media/logic-apps-control-flow-switch-statement/empty-switch.png)
1. Click inside the **On** box so that the dynamic content list appears. From that list, select the **SelectedOption** field whose output
add another case between **Case** and **Default**.
| Default | None | No action necessary. In this example, the **Default** case is empty because **SelectedOption** has only two options. | |||
- ![Finished switch statement](./media/logic-apps-control-flow-switch-statement/finished-switch.png)
+ ![Finished switch action](./media/logic-apps-control-flow-switch-statement/finished-switch.png)
1. Save your logic app.
add another case between **Case** and **Default**.
## JSON definition
-Now that you created a logic app using a switch statement,
-let's look at the high-level code definition behind the switch statement.
+Now that you created a logic app using a switch action,
+let's look at the high-level code definition behind the switch action.
``` json "Switch": {
let's look at the high-level code definition behind the switch statement.
| Label | Description | |-|-|
-| `"Switch"` | The name of the switch statement, which you can rename for readability |
-| `"type": "Switch"` | Specifies that the action is a switch statement |
+| `"Switch"` | The name of the switch action, which you can rename for readability |
+| `"type": "Switch"` | Specifies that the action is a switch action |
| `"expression"` | In this example, specifies the approver's selected option that's evaluated against each case as declared later in the definition | | `"cases"` | Defines any number of cases. For each case, `"Case_*"` is the default name for that case, which you can rename for readability |
-| `"case"` | Specifies the case's value, which must be a constant and unique value that the switch statement uses for comparison. If no cases match the switch expression result, the actions in the `"default"` section are run. |
+| `"case"` | Specifies the case's value, which must be a constant and unique value that the switch action uses for comparison. If no cases match the switch expression result, the actions in the `"default"` section are run. |
| | | ## Get support
let's look at the high-level code definition behind the switch statement.
## Next steps
-* [Run steps based on a condition (conditional statements)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
+* [Run steps based on a condition (condition action)](../logic-apps/logic-apps-control-flow-conditional-statement.md)
* [Run and repeat steps (loops)](../logic-apps/logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](../logic-apps/logic-apps-control-flow-branches.md) * [Run steps based on grouped action status (scopes)](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-visual-studio.md
You can also [manage your logic apps in the Azure portal](manage-logic-apps-with
* Download and install these tools, if you don't have them already:
- * [Visual Studio 2019, 2017, or 2015 - Community edition or greater](https://aka.ms/download-visual-studio). This quickstart uses Visual Studio Community 2017, which is free.
+ * [Visual Studio 2019, 2017, or 2015 - Community edition or greater](https://aka.ms/download-visual-studio). The Azure Logic Apps extension is currently unavailable for Visual Studio 2022. This quickstart uses Visual Studio Community 2017, which is free.
> [!IMPORTANT] > When you install Visual Studio 2019 or 2017, make sure that you select the **Azure development** workload.
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 12/07/2021 Last updated : 02/21/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-02-28
+
+### Azure Machine Learning SDK for Python v1.39.0
+ + **azureml-automl-core**
+ + Fix incorrect form displayed in PBI for integration with AutoML regression models
  + Added a min-label-classes check for both classification tasks (multi-class and multi-label). It throws an error for the customer's run if the unique number of classes in the input training dataset is fewer than two, because it's meaningless to run classification on fewer than two classes.
+ + **azureml-automl-runtime**
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+ + Automl training now supports numpy version 1.8.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided.
+ + Fixed a bug in the TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-responsibleai**
+ + Updated the azureml-responsibleai package to the raiwidgets and responsibleai 0.17.0 release
+ + **azureml-synapse**
+ + Fixed an issue where the magic widget disappeared.
+ + **azureml-train-automl-runtime**
+ + Updated AutoML dependencies to support Python 3.8. This change breaks compatibility with models trained with SDK 1.37 or earlier because newer pandas interfaces are saved in the model.
+ + AutoML training now supports numpy version 1.19.
+ + Fixed the AutoML reset-index logic for ensemble models in the automl_setup_model_explanations API.
+ + In AutoML, a LightGBM surrogate model is now used instead of a linear surrogate model for the sparse case, following the latest LightGBM version upgrade.
+ + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the `outputs/` directory on the parent run.
+
+
## 2022-01-24 ### Azure Machine Learning SDK for Python v1.38.0
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
The following is a sample JSONL file for image classification:
Once your data is in JSONL format, you can create a TabularDataset with the following code: ```python
+ws = Workspace.from_config()
+ds = ws.get_default_datastore()
from azureml.core import Dataset

training_dataset = Dataset.Tabular.from_json_lines_files(
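    # This call is truncated in this excerpt. The arguments below are a hedged,
    # illustrative completion: the JSONL path and the 'image_url' column name are
    # assumptions, and the snippet also assumes `from azureml.core import Workspace`
    # and `from azureml.data.dataset_factory import DataType` earlier in the file.
    path=ds.path('train_annotations.jsonl'),
    set_column_types={'image_url': DataType.to_stream(ds.workspace)})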
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Previously updated : 02/17/2022 Last updated : 02/28/2022 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
Using the following keystroke shortcuts, you can more easily navigate and run co
* If your kernel crashed and was restarted, you can run the following command to look at the Jupyter log and find out more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory. * If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
-
+
+* When uploading a file through the notebook's file explorer, you are limited to files that are smaller than 5TB. If you need to upload a file larger than this, we recommend that you use one of the following methods:
+
+ * Use the SDK to upload the data to a datastore, as sketched after this list. For more information, see the [Upload the data](/azure/machine-learning/tutorial-1st-experiment-bring-data#upload) section of the tutorial.
+ * Use [Azure Data Factory](how-to-data-ingest-adf.md) to create a data ingestion pipeline.
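A rough sketch of the SDK route follows (hedged: the workspace config, the local path `data/large_file.csv`, and the target folder are illustrative assumptions, not values from this article):

```python
from azureml.core import Workspace

# Assumes a workspace config.json is available locally.
ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Upload the large local file to the default blob datastore instead of going
# through the notebook file explorer.
datastore.upload_files(
    files=["data/large_file.csv"],
    target_path="uploads/",
    overwrite=True,
    show_progress=True,
)
```

Once the file is in the datastore, you can reference it as a dataset or mount it on compute rather than keeping it in the notebook file share.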
+ ## Next steps * [Run your first experiment](tutorial-1st-experiment-sdk-train.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 02/14/2022 Last updated : 02/28/2022
When ACR is behind a virtual network, Azure Machine Learning canΓÇÖt use it to d
### Azure Monitor > [!WARNING]
-> Azure Monitor supports using Azure Private Link to connect to a VNet. However, Azure Machine Learning does not support using a private link-enabled Azure Monitor (including Azure Application Insights). Do __not_ configure private link for the Azure Monitor or Azure Application Insights you plan to use with Azure Machine Learning.
+> Azure Monitor supports using Azure Private Link to connect to a VNet. However, you must use the open Private Link mode in Azure Monitor. For more information, see [Private Link access modes: Private only vs. Open](/azure/azure-monitor/logs/private-link-security#private-link-access-modes-private-only-vs-open).
## Required public internet access
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
Learn how to set up the Azure Machine Learning Visual Studio Code extension for your machine learning workflows.
-> [!div class="mx-imgBorder"]
-> ![VS Code Extension](./media/how-to-setup-vs-code/vs-code-extension.PNG)
- The Azure Machine Learning extension for VS Code provides a user interface to: - Manage Azure Machine Learning resources (experiments, virtual machines, models, deployments, etc.)
The Azure Machine Learning extension for VS Code provides a user interface to:
- Train machine learning models - Debug machine learning experiments locally - Schema-based language support, autocompletion and diagnostics for specification file authoring-- Snippets for common tasks ## Prerequisites
The Azure Machine Learning extension for VS Code provides a user interface to:
- Visual Studio Code. If you don't have it, [install it](https://code.visualstudio.com/docs/setup/setup-overview). - [Python](https://www.python.org/downloads/) - (Optional) To create resources using the extension, you need to install the CLI (v2). For setup instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+- Clone the community-driven repository:
+ ```bash
+ git clone https://github.com/Azure/azureml-examples.git --depth 1
+ ```
## Install the extension 1. Open Visual Studio Code. 1. Select **Extensions** icon from the **Activity Bar** to open the Extensions view.
-1. In the Extensions view, search for "Azure Machine Learning".
+1. In the Extensions view search bar, type "Azure Machine Learning" and select the first extension.
1. Select **Install**. > [!div class="mx-imgBorder"] > ![Install Azure Machine Learning VS Code Extension](./media/how-to-setup-vs-code/install-aml-vscode-extension.PNG)
-> [!NOTE]
-> Alternatively, you can install the Azure Machine Learning extension via the Visual Studio Marketplace by [downloading the installer directly](https://aka.ms/vscodetoolsforai).
-
-The rest of the steps in this tutorial have been tested with the latest version of the extension.
- > [!NOTE] > The Azure Machine Learning VS Code extension uses the CLI (v2) by default. To switch to the 1.0 CLI, set the `azureML.CLI Compatibility Mode` setting in Visual Studio Code to `1.0`. For more information on modifying your settings in Visual Studio, see the [user and workspace settings documentation](https://code.visualstudio.com/docs/getstarted/settings).
The rest of the steps in this tutorial have been tested with the latest version
In order to provision resources and run workloads on Azure, you have to sign in with your Azure account credentials. To assist with account management, Azure Machine Learning automatically installs the Azure Account extension. Visit the following site to [learn more about the Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account).
-To sign into you Azure account, select the **Azure: Sign In** button on the Visual Studio Code status bar to start the sign in process.
-
-Alternatively, use the command palette:
-
-1. Open the command palette by selecting **View > Command Palette** from the menu bar.
-1. Enter the command "> Azure: Sign In" into the command palette to start the sign in process.
+To sign in to your Azure account, select the **Azure: Sign In** button in the bottom-right corner of the Visual Studio Code status bar to start the sign-in process.
## Choose your default workspace
Alternatively, use the `> Azure ML: Set Default Workspace` command in the comman
- [Develop on a remote compute instance locally](how-to-set-up-vs-code-remote.md) - [Use a compute instance as a remote Jupyter server](how-to-set-up-vs-code-remote.md) - [Train an image classification model using the Visual Studio Code extension](tutorial-train-deploy-image-classification-model-vscode.md)-- [Run and debug machine learning experiments locally](how-to-debug-visual-studio-code.md)
+- [Run and debug machine learning experiments locally](how-to-debug-visual-studio-code.md)
marketplace What Is Test Drive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-test-drive.md
Previously updated : 12/03/2021 Last updated : 02/28/2022 # What is a test drive?
The process of turning an architecture of resources into a test drive can be dau
## Generate leads from your test drive
-A commercial marketplace test drive is a great tool for marketers. We recommend you incorporate it in your go-to-market efforts when you launch to generate more leads for your business. For detailed guidance, see [Customer leads from your commercial marketplace offer](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/marketplace/partner-center-portal/commercial-marketplace-get-customer-leads.md).
+A commercial marketplace test drive is a great tool for marketers. We recommend you incorporate it in your go-to-market efforts when you launch to generate more leads for your business. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
-If you close a deal with a test drive lead, be sure to register it at [Microsoft Partner Sales Connect](https://support.microsoft.com/help/3155788/getting-started-with-microsoft-partner-sales-connect). Also, we would love to hear about your customer wins where a test drive played a role.
+If you close a deal with a test drive lead, be sure to register it at [Grow your business with referrals from Microsoft](https://support.microsoft.com/help/3155788/getting-started-with-microsoft-partner-sales-connect). Also, we would love to hear about your customer wins where a test drive played a role.
## Other resources
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-security-private-link.md
With Private Link, you can enable cross-premises access to the private endpoint
> [!NOTE] > In some cases the Azure Database for MySQL and the VNet-subnet are in different subscriptions. In these cases you must ensure the following configurations:
-> - Make sure that both the subscription has the **Microsoft.DBforMySQL** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal]
+> - Make sure that both subscriptions have the **Microsoft.DBforMySQL** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal].
## Configure Private Link for Azure Database for MySQL
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
az feature register --namespace Microsoft.RedHatOpenShift --name preview
A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended.
-1. [Navigate to your Red Hat OpenShift cluster manager portal](https://cloud.redhat.com/openshift/install/azure/aro-provisioned) and log in.
+1. [Navigate to your Red Hat OpenShift cluster manager portal](https://console.redhat.com/openshift/install/azure/aro-provisioned) and log in.
You will need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
description: Learn about Azure Peering Service overview
-+ na Last updated 05/18/2020
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
To choose a pricing tier, use the following table as a starting point.
| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.| | Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
-After you create a server, the compute tier, number of vCores and storage size can be changed up or down within seconds. You also can independently adjust the backup retention period up or down. For more information, see the [Scale resources](#scale-resources) section.
+After you create a server, you can change the compute tier and the number of vCores up or down, and increase the storage size, within seconds. You can also independently adjust the backup retention period up or down. For more information, see the [Scale resources](#scale-resources) section.
## Compute tiers, vCores, and server types
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
Previously updated : 02/18/2022 Last updated : 02/24/2022 # Functions in the Hyperscale (Citus) SQL API
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
Previously updated : 02/18/2022 Last updated : 02/24/2022 # The Hyperscale (Citus) SQL API
Azure Database for PostgreSQL - Hyperscale (Citus) includes features beyond
standard PostgreSQL. Below is a categorized reference of functions and configuration options for:
-* managing sharded data between multiple servers
-* compressing data with columnar storage
-* automating timeseries partitioning
-* parallelizing query execution across shards
+* Parallelizing query execution across shards
+* Managing sharded data between multiple servers
+* Compressing data with columnar storage
+* Automating timeseries partitioning
## SQL functions
configuration options for:
| Name | Description | ||-|
-| [alter_distributed_table](reference-functions.md#alter_distributed_table) | change the distribution column, shard count or colocation properties of a distributed table |
-| [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | repair an inactive shard placement using data from a healthy placement |
-| [create_distributed_table](reference-functions.md#create_distributed_table) | turn a PostgreSQL table into a distributed (sharded) table |
-| [create_reference_table](reference-functions.md#create_reference_table) | maintain full copies of a table in sync across all nodes |
-| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | create a new shard to hold rows with a specific single value in the distribution column |
-| [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | truncate all local rows after distributing a table |
-| [undistribute_table](reference-functions.md#undistribute_table) | undo the action of create_distributed_table or create_reference_table |
+| [alter_distributed_table](reference-functions.md#alter_distributed_table) | Change the distribution column, shard count or colocation properties of a distributed table |
+| [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | Repair an inactive shard placement using data from a healthy placement |
+| [create_distributed_table](reference-functions.md#create_distributed_table) | Turn a PostgreSQL table into a distributed (sharded) table |
+| [create_reference_table](reference-functions.md#create_reference_table) | Maintain full copies of a table in sync across all nodes |
+| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | Create a new shard to hold rows with a specific single value in the distribution column |
+| [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | Truncate all local rows after distributing a table |
+| [undistribute_table](reference-functions.md#undistribute_table) | Undo the action of create_distributed_table or create_reference_table |
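These functions are invoked with ordinary SQL. As a minimal, hedged sketch (the psycopg2 driver, the connection settings, and the `github_events` table with its `repo_id` distribution column are illustrative assumptions, not from this reference), distributing a table from Python might look like this:

```python
import psycopg2

# Hypothetical connection details for a Hyperscale (Citus) coordinator node.
conn = psycopg2.connect(
    host="<server-group>.postgres.database.azure.com",
    port=5432,
    dbname="citus",
    user="citus",
    password="<password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # Turn a regular PostgreSQL table into a distributed (sharded) table,
    # using repo_id as the distribution column.
    cur.execute("SELECT create_distributed_table('github_events', 'repo_id');")

conn.close()
```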
### Shard rebalancing | Name | Description | ||-|
-| [citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy) | append a row to `pg_dist_rebalance_strategy` |
-| [citus_move_shard_placement](reference-functions.md#master_move_shard_placement) | typically used indirectly during shard rebalancing rather than being called directly by a database administrator |
-| [citus_set_default_rebalance_strategy](reference-functions.md#) | change the strategy named by its argument to be the default chosen when rebalancing shards |
-| [get_rebalance_progress](reference-functions.md#get_rebalance_progress) | monitor the moves planned and executed by `rebalance_table_shards` |
-| [get_rebalance_table_shards_plan](reference-functions.md#get_rebalance_table_shards_plan) | output the planned shard movements of rebalance_table_shards without performing them |
-| [rebalance_table_shards](reference-functions.md#rebalance_table_shards) | move shards of the given table to distribute them evenly among the workers |
+| [citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy) | Append a row to `pg_dist_rebalance_strategy` |
+| [citus_move_shard_placement](reference-functions.md#master_move_shard_placement) | Typically used indirectly during shard rebalancing rather than being called directly by a database administrator |
+| [citus_set_default_rebalance_strategy](reference-functions.md#citus_set_default_rebalance_strategy) | Change the strategy named by its argument to be the default chosen when rebalancing shards |
+| [get_rebalance_progress](reference-functions.md#get_rebalance_progress) | Monitor the moves planned and executed by `rebalance_table_shards` |
+| [get_rebalance_table_shards_plan](reference-functions.md#get_rebalance_table_shards_plan) | Output the planned shard movements of rebalance_table_shards without performing them |
+| [rebalance_table_shards](reference-functions.md#rebalance_table_shards) | Move shards of the given table to distribute them evenly among the workers |
### Colocation | Name | Description | ||-|
-| [create_distributed_function](reference-functions.md#create_distributed_function) | make function run on workers near colocated shards |
-| [update_distributed_table_colocation](reference-functions.md#update_distributed_table_colocation) | update or break colocation of a distributed table |
+| [create_distributed_function](reference-functions.md#create_distributed_function) | Make function run on workers near colocated shards |
+| [update_distributed_table_colocation](reference-functions.md#update_distributed_table_colocation) | Update or break colocation of a distributed table |
### Columnar storage | Name | Description | ||-|
-| [alter_columnar_table_set](reference-functions.md#alter_columnar_table_set) | change settings on a columnar table |
-| [alter_table_set_access_method](reference-functions.md#alter_table_set_access_method) | convert a table between heap or columnar storage |
+| [alter_columnar_table_set](reference-functions.md#alter_columnar_table_set) | Change settings on a columnar table |
+| [alter_table_set_access_method](reference-functions.md#alter_table_set_access_method) | Convert a table between heap or columnar storage |
### Timeseries partitioning | Name | Description | ||-|
-| [alter_old_partitions_set_access_method](reference-functions.md#alter_old_partitions_set_access_method) | change storage method of partitions |
-| [create_time_partitions](reference-functions.md#create_time_partitions) | create partitions of a given interval to cover a given range of time |
-| [drop_old_time_partitions](reference-functions.md#drop_old_time_partitions) | remove all partitions whose intervals fall before a given timestamp |
+| [alter_old_partitions_set_access_method](reference-functions.md#alter_old_partitions_set_access_method) | Change storage method of partitions |
+| [create_time_partitions](reference-functions.md#create_time_partitions) | Create partitions of a given interval to cover a given range of time |
+| [drop_old_time_partitions](reference-functions.md#drop_old_time_partitions) | Remove all partitions whose intervals fall before a given timestamp |
### Informational | Name | Description | ||-|
-| [citus_get_active_worker_nodes](reference-functions.md#citus_get_active_worker_nodes) | get active worker host names and port numbers |
-| [citus_relation_size](reference-functions.md#citus_relation_size) | get disk space used by all the shards of the specified distributed table |
-| [citus_remote_connection_stats](reference-functions.md#citus_remote_connection_stats) | show the number of active connections to each remote node |
-| [citus_stat_statements_reset](reference-functions.md#citus_stat_statements_reset) | remove all rows from `citus_stat_statements` |
-| [citus_table_size](reference-functions.md#citus_table_size) | get disk space used by all the shards of the specified distributed table, excluding indexes |
-| [citus_total_relation_size](reference-functions.md#citus_total_relation_size) | get total disk space used by the all the shards of the specified distributed table, including all indexes and TOAST data |
-| [column_to_column_name](reference-functions.md#column_to_column_name) | translate the `partkey` column of `pg_dist_partition` into a textual column name |
-| [get_shard_id_for_distribution_column](reference-functions.md#get_shard_id_for_distribution_column) | find the shard ID associated with a value of the distribution column |
+| [citus_get_active_worker_nodes](reference-functions.md#citus_get_active_worker_nodes) | Get active worker host names and port numbers |
+| [citus_relation_size](reference-functions.md#citus_relation_size) | Get disk space used by all the shards of the specified distributed table |
+| [citus_remote_connection_stats](reference-functions.md#citus_remote_connection_stats) | Show the number of active connections to each remote node |
+| [citus_stat_statements_reset](reference-functions.md#citus_stat_statements_reset) | Remove all rows from `citus_stat_statements` |
+| [citus_table_size](reference-functions.md#citus_table_size) | Get disk space used by all the shards of the specified distributed table, excluding indexes |
+| [citus_total_relation_size](reference-functions.md#citus_total_relation_size) | Get total disk space used by all the shards of the specified distributed table, including all indexes and TOAST data |
+| [column_to_column_name](reference-functions.md#column_to_column_name) | Translate the `partkey` column of `pg_dist_partition` into a textual column name |
+| [get_shard_id_for_distribution_column](reference-functions.md#get_shard_id_for_distribution_column) | Find the shard ID associated with a value of the distribution column |
## Server parameters
configuration options for:
| Name | Description | ||-|
-| [citus.all_modifications_commutative](reference-parameters.md#citusall_modifications_commutative) | allow all commands to claim a shared lock |
-| [citus.count_distinct_error_rate](reference-parameters.md#cituscount_distinct_error_rate-floating-point) | tune error rate of postgresql-hll approximate counting |
-| [citus.enable_repartition_joins](reference-parameters.md#citusenable_repartition_joins-boolean) | allow JOINs made on non-distribution columns |
-| [citus.enable_repartitioned_insert_select](reference-parameters.md#citusenable_repartition_joins-boolean) | allow repartitioning rows from the SELECT statement and transferring them between workers for insertion |
-| [citus.limit_clause_row_fetch_count](reference-parameters.md#cituslimit_clause_row_fetch_count-integer) | the number of rows to fetch per task for limit clause optimization |
-| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | where data moves when doing a join between local and distributed tables |
-| [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | the commit protocol to use when performing COPY on a hash distributed table |
-| [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | which SET commands are propagated from the coordinator to workers |
+| [citus.all_modifications_commutative](reference-parameters.md#citusall_modifications_commutative) | Allow all commands to claim a shared lock |
+| [citus.count_distinct_error_rate](reference-parameters.md#cituscount_distinct_error_rate-floating-point) | Tune error rate of postgresql-hll approximate counting |
+| [citus.enable_repartition_joins](reference-parameters.md#citusenable_repartition_joins-boolean) | Allow JOINs made on non-distribution columns |
+| [citus.enable_repartitioned_insert_select](reference-parameters.md#citusenable_repartition_joins-boolean) | Allow repartitioning rows from the SELECT statement and transferring them between workers for insertion |
+| [citus.limit_clause_row_fetch_count](reference-parameters.md#cituslimit_clause_row_fetch_count-integer) | The number of rows to fetch per task for limit clause optimization |
+| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | Where data moves when doing a join between local and distributed tables |
+| [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | The commit protocol to use when performing COPY on a hash distributed table |
+| [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | Which SET commands are propagated from the coordinator to workers |
### Informational | Name | Description | ||-|
-| [citus.explain_all_tasks](reference-parameters.md#citusexplain_all_tasks-boolean) | make EXPLAIN output show all tasks |
-| [citus.explain_analyze_sort_method](reference-parameters.md#citusexplain_analyze_sort_method-enum) | sort method of the tasks in the output of EXPLAIN ANALYZE |
-| [citus.log_remote_commands](reference-parameters.md#cituslog_remote_commands-boolean) | log queries the coordinator sends to worker nodes |
-| [citus.multi_task_query_log_level](reference-parameters.md#citusmulti_task_query_log_level-enum-multi_task_logging) | log-level for any query that generates more than one task |
-| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | max number of rows to store in `citus_stat_statements` |
-| [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` |
-| [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | enable/disable statement tracking |
+| [citus.explain_all_tasks](reference-parameters.md#citusexplain_all_tasks-boolean) | Make EXPLAIN output show all tasks |
+| [citus.explain_analyze_sort_method](reference-parameters.md#citusexplain_analyze_sort_method-enum) | Sort method of the tasks in the output of EXPLAIN ANALYZE |
+| [citus.log_remote_commands](reference-parameters.md#cituslog_remote_commands-boolean) | Log queries the coordinator sends to worker nodes |
+| [citus.multi_task_query_log_level](reference-parameters.md#citusmulti_task_query_log_level-enum-multi_task_logging) | Log-level for any query that generates more than one task |
+| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | Max number of rows to store in `citus_stat_statements` |
+| [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | Frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` |
+| [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | Enable/disable statement tracking |
### Inter-node connection management | Name | Description | ||-|
-| [citus.executor_slow_start_interval](reference-parameters.md#citusexecutor_slow_start_interval-integer) | time to wait in milliseconds between opening connections to the same worker node |
-| [citus.force_max_query_parallelization](reference-parameters.md#citusforce_max_query_parallelization-boolean) | open as many connections as possible |
-| [citus.max_adaptive_executor_pool_size](reference-parameters.md#citusmax_adaptive_executor_pool_size-integer) | max worker connections per session |
-| [citus.max_cached_conns_per_worker](reference-parameters.md#citusmax_cached_conns_per_worker-integer) | number of connections kept open to speed up subsequent commands |
-| [citus.node_connection_timeout](reference-parameters.md#citusnode_connection_timeout-integer) | max duration (in milliseconds) to wait for connection establishment |
+| [citus.executor_slow_start_interval](reference-parameters.md#citusexecutor_slow_start_interval-integer) | Time to wait in milliseconds between opening connections to the same worker node |
+| [citus.force_max_query_parallelization](reference-parameters.md#citusforce_max_query_parallelization-boolean) | Open as many connections as possible |
+| [citus.max_adaptive_executor_pool_size](reference-parameters.md#citusmax_adaptive_executor_pool_size-integer) | Max worker connections per session |
+| [citus.max_cached_conns_per_worker](reference-parameters.md#citusmax_cached_conns_per_worker-integer) | Number of connections kept open to speed up subsequent commands |
+| [citus.node_connection_timeout](reference-parameters.md#citusnode_connection_timeout-integer) | Max duration (in milliseconds) to wait for connection establishment |
### Data transfer | Name | Description | ||-|
-| [citus.enable_binary_protocol](reference-parameters.md#citusenable_binary_protocol-boolean) | use PostgreSQLΓÇÖs binary serialization format (when applicable) to transfer data with workers |
-| [citus.max_intermediate_result_size](reference-parameters.md#citusmax_intermediate_result_size-integer) | size in KB of intermediate results for CTEs and subqueries that are unable to be pushed down |
+| [citus.enable_binary_protocol](reference-parameters.md#citusenable_binary_protocol-boolean) | Use PostgreSQLΓÇÖs binary serialization format (when applicable) to transfer data with workers |
+| [citus.max_intermediate_result_size](reference-parameters.md#citusmax_intermediate_result_size-integer) | Size in KB of intermediate results for CTEs and subqueries that are unable to be pushed down |
### Deadlock | Name | Description | ||-|
-| [citus.distributed_deadlock_detection_factor](reference-parameters.md#citusdistributed_deadlock_detection_factor-floating-point) | time to wait before checking for distributed deadlocks |
-| [citus.log_distributed_deadlock_detection](reference-parameters.md#cituslog_distributed_deadlock_detection-boolean) | whether to log distributed deadlock detection-related processing in the server log |
+| [citus.distributed_deadlock_detection_factor](reference-parameters.md#citusdistributed_deadlock_detection_factor-floating-point) | Time to wait before checking for distributed deadlocks |
+| [citus.log_distributed_deadlock_detection](reference-parameters.md#cituslog_distributed_deadlock_detection-boolean) | Whether to log distributed deadlock detection-related processing in the server log |
## System tables
help you see data properties and query activity across the server group.
| Name | Description | ||-|
-| [citus_dist_stat_activity](reference-metadata.md#distributed-query-activity) | distributed queries that are executing on all nodes |
-| [citus_lock_waits](reference-metadata.md#distributed-query-activity) | queries blocked throughout the server group |
-| [citus_shards](reference-metadata.md#shard-information-view) | the location of each shard, the type of table it belongs to, and its size |
-| [citus_stat_statements](reference-metadata.md#query-statistics-table) | stats about how queries are being executed, and for whom |
-| [citus_tables](reference-metadata.md#distributed-tables-view) | a summary of all distributed and reference tables |
-| [citus_worker_stat_activity](reference-metadata.md#distributed-query-activity) | queries on workers, including tasks on individual shards |
-| [pg_dist_colocation](reference-metadata.md#colocation-group-table) | which tables' shards should be placed together |
-| [pg_dist_node](reference-metadata.md#worker-node-table) | information about worker nodes in the server group |
-| [pg_dist_object](reference-metadata.md#distributed-object-table) | objects such as types and functions that have been created on the coordinator node and propagated to worker nodes |
-| [pg_dist_placement](reference-metadata.md#shard-placement-table) | the location of shard replicas on worker nodes |
-| [pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table) | strategies that `rebalance_table_shards` can use to determine where to move shards |
-| [pg_dist_shard](reference-metadata.md#shard-table) | the table, distribution column, and value ranges for every shard |
-| [time_partitions](reference-metadata.md#time-partitions-view) | information about each partition managed by such functions as `create_time_partitions` and `drop_old_time_partitions` |
+| [citus_dist_stat_activity](reference-metadata.md#distributed-query-activity) | Distributed queries that are executing on all nodes |
+| [citus_lock_waits](reference-metadata.md#distributed-query-activity) | Queries blocked throughout the server group |
+| [citus_shards](reference-metadata.md#shard-information-view) | The location of each shard, the type of table it belongs to, and its size |
+| [citus_stat_statements](reference-metadata.md#query-statistics-table) | Stats about how queries are being executed, and for whom |
+| [citus_tables](reference-metadata.md#distributed-tables-view) | A summary of all distributed and reference tables |
+| [citus_worker_stat_activity](reference-metadata.md#distributed-query-activity) | Queries on workers, including tasks on individual shards |
+| [pg_dist_colocation](reference-metadata.md#colocation-group-table) | Which tables' shards should be placed together |
+| [pg_dist_node](reference-metadata.md#worker-node-table) | Information about worker nodes in the server group |
+| [pg_dist_object](reference-metadata.md#distributed-object-table) | Objects such as types and functions that have been created on the coordinator node and propagated to worker nodes |
+| [pg_dist_placement](reference-metadata.md#shard-placement-table) | The location of shard replicas on worker nodes |
+| [pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table) | Strategies that `rebalance_table_shards` can use to determine where to move shards |
+| [pg_dist_shard](reference-metadata.md#shard-table) | The table, distribution column, and value ranges for every shard |
+| [time_partitions](reference-metadata.md#time-partitions-view) | Information about each partition managed by such functions as `create_time_partitions` and `drop_old_time_partitions` |
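These views can be queried like any other PostgreSQL relation. A brief, hedged sketch (the psycopg2 driver and connection details are illustrative assumptions):

```python
import psycopg2

# Hypothetical connection to the coordinator node; adjust for your server group.
conn = psycopg2.connect(
    "host=<server-group>.postgres.database.azure.com dbname=citus user=citus "
    "password=<password> sslmode=require"
)

with conn.cursor() as cur:
    # Summarize distributed and reference tables via the citus_tables view.
    cur.execute("SELECT table_name, citus_table_type, shard_count FROM citus_tables;")
    for table_name, table_type, shard_count in cur.fetchall():
        print(table_name, table_type, shard_count)

conn.close()
```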
## Next steps
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 | > | **Compute** | | | > | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb |
+> | [Disk Backup Reader](#disk-backup-reader) | Provides permission to backup vault to perform disk backup. | 3e5e47e6-65f7-47ef-90b5-e5dd4d455f24 |
+> | [Disk Pool Operator](#disk-pool-operator) | Provide permission to StoragePool Resource Provider to manage disks added to a disk pool. | 60fc6e62-5479-42d4-8bf4-67625fcc2840 |
+> | [Disk Restore Operator](#disk-restore-operator) | Provides permission to backup vault to perform disk restore. | b50d9833-a0cb-478e-945f-707fcc997c13 |
+> | [Disk Snapshot Contributor](#disk-snapshot-contributor) | Provides permission to backup vault to manage disk snapshots. | 7efff54f-a5b4-42b5-a1c5-5411624893ce |
> | [Virtual Machine Administrator Login](#virtual-machine-administrator-login) | View Virtual Machines in the portal and login as administrator | 1c0163c0-47e6-4577-8991-ea5c82e286e4 | > | [Virtual Machine Contributor](#virtual-machine-contributor) | Create and manage virtual machines, manage disks, install and run software, reset password of the root user of the virtual machine using VM extensions, and manage local user accounts using VM extensions. This role does not grant you management access to the virtual network or storage account the virtual machines are connected to. This role does not allow you to assign roles in Azure RBAC. | 9980e02c-c2be-4d73-94e8-173b1dc7cf3c | > | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 |
The following table provides a brief description of each built-in role. Click th
> | [Monitoring Reader](#monitoring-reader) | Can read all monitoring data (metrics, logs, etc.). See also [Get started with roles, permissions, and security with Azure Monitor](../azure-monitor/roles-permissions-security.md#built-in-monitoring-roles). | 43d0d8ad-25c7-4714-9337-8ba259a9fe05 | > | [Workbook Contributor](#workbook-contributor) | Can save shared workbooks. | e8ddcd69-c73f-4f9f-9844-4100522f16ad | > | [Workbook Reader](#workbook-reader) | Can read workbooks. | b279062a-9be3-42a0-92ae-8b3cf002ec4d |
-> | **Management + governance** | | |
+> | **Management and governance** | | |
> | [Automation Contributor](#automation-contributor) | Manage azure automation resources and other resources using azure automation. | f353d9bd-d4a6-484e-a77a-8050b599b867 | > | [Automation Job Operator](#automation-job-operator) | Create and Manage Jobs using Automation Runbooks. | 4fe576fe-1146-4730-92eb-48519fa6bf9f | > | [Automation Operator](#automation-operator) | Automation Operators are able to start, stop, suspend, and resume jobs | d3881f73-407a-4167-8283-e981cbba0404 |
The following table provides a brief description of each built-in role. Click th
> | [Site Recovery Reader](#site-recovery-reader) | Lets you view Site Recovery status but not perform other management operations | dbaa88c4-0c30-4179-9fb3-46319faa6149 | > | [Support Request Contributor](#support-request-contributor) | Lets you create and manage Support requests | cfd33db0-3dd1-45e3-aa9d-cdbdf3b6f24e | > | [Tag Contributor](#tag-contributor) | Lets you manage tags on entities, without providing access to the entities themselves. | 4a9ae827-6dc8-4573-8ac7-8239d42aa03f |
-> | **Other** | | |
-> | [Azure Digital Twins Data Owner](#azure-digital-twins-data-owner) | Full access role for Digital Twins data-plane | bcd981a7-7f74-457b-83e1-cceb9e632ffe |
-> | [Azure Digital Twins Data Reader](#azure-digital-twins-data-reader) | Read-only role for Digital Twins data-plane properties | d57506d4-4c8d-48b1-8587-93c323f6a5a3 |
-> | [BizTalk Contributor](#biztalk-contributor) | Lets you manage BizTalk services, but not access to them. | 5e3c6656-6cfa-4708-81fe-0de47ac73342 |
+> | **Virtual desktop infrastructure** | | |
> | [Desktop Virtualization Application Group Contributor](#desktop-virtualization-application-group-contributor) | Contributor of the Desktop Virtualization Application Group. | 86240b0e-9422-4c43-887b-b61143f32ba8 | > | [Desktop Virtualization Application Group Reader](#desktop-virtualization-application-group-reader) | Reader of the Desktop Virtualization Application Group. | aebf23d0-b568-4e86-b8f9-fe83a2c6ab55 | > | [Desktop Virtualization Contributor](#desktop-virtualization-contributor) | Contributor of Desktop Virtualization. | 082f0a83-3be5-4ba1-904c-961cca79b387 |
The following table provides a brief description of each built-in role. Click th
> | [Desktop Virtualization User Session Operator](#desktop-virtualization-user-session-operator) | Operator of the Desktop Virtualization User Session. | ea4bfff8-7fb4-485a-aadd-d4129a0ffaa6 | > | [Desktop Virtualization Workspace Contributor](#desktop-virtualization-workspace-contributor) | Contributor of the Desktop Virtualization Workspace. | 21efdde3-836f-432b-bf3d-3e8e734d4b2b | > | [Desktop Virtualization Workspace Reader](#desktop-virtualization-workspace-reader) | Reader of the Desktop Virtualization Workspace. | 0fa44ee9-7a7d-466b-9bb2-2bf446b1204d |
-> | [Disk Backup Reader](#disk-backup-reader) | Provides permission to backup vault to perform disk backup. | 3e5e47e6-65f7-47ef-90b5-e5dd4d455f24 |
-> | [Disk Pool Operator](#disk-pool-operator) | Provide permission to StoragePool Resource Provider to manage disks added to a disk pool. | 60fc6e62-5479-42d4-8bf4-67625fcc2840 |
-> | [Disk Restore Operator](#disk-restore-operator) | Provides permission to backup vault to perform disk restore. | b50d9833-a0cb-478e-945f-707fcc997c13 |
-> | [Disk Snapshot Contributor](#disk-snapshot-contributor) | Provides permission to backup vault to manage disk snapshots. | 7efff54f-a5b4-42b5-a1c5-5411624893ce |
+> | **Other** | | |
+> | [Azure Digital Twins Data Owner](#azure-digital-twins-data-owner) | Full access role for Digital Twins data-plane | bcd981a7-7f74-457b-83e1-cceb9e632ffe |
+> | [Azure Digital Twins Data Reader](#azure-digital-twins-data-reader) | Read-only role for Digital Twins data-plane properties | d57506d4-4c8d-48b1-8587-93c323f6a5a3 |
+> | [BizTalk Contributor](#biztalk-contributor) | Lets you manage BizTalk services, but not access to them. | 5e3c6656-6cfa-4708-81fe-0de47ac73342 |
> | [Scheduler Job Collections Contributor](#scheduler-job-collections-contributor) | Lets you manage Scheduler job collections, but not access to them. | 188a0f2f-5c9e-469b-ae67-2aa5ce574b94 | > | [Services Hub Operator](#services-hub-operator) | Services Hub Operator allows you to perform all read, write, and deletion operations related to Services Hub Connectors. | 82200a5b-e217-47a5-b665-6d8765ee745b |
Lets you manage classic virtual machines, but not access to them, and not the vi
} ```
+### Disk Backup Reader
+
+Provides permission to backup vault to perform disk backup. [Learn more](../backup/disk-backup-faq.yml)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/beginGetAccess/action | Get the SAS URI of the Disk for blob access |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Provides permission to backup vault to perform disk backup.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/3e5e47e6-65f7-47ef-90b5-e5dd4d455f24",
+ "name": "3e5e47e6-65f7-47ef-90b5-e5dd4d455f24",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Compute/disks/read",
+ "Microsoft.Compute/disks/beginGetAccess/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Disk Backup Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Disk Pool Operator
+
+Provide permission to StoragePool Resource Provider to manage disks added to a disk pool.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/write | Creates a new Disk or updates an existing one |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Used by the StoragePool Resource Provider to manage Disks added to a Disk Pool.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/60fc6e62-5479-42d4-8bf4-67625fcc2840",
+ "name": "60fc6e62-5479-42d4-8bf4-67625fcc2840",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Compute/disks/write",
+ "Microsoft.Compute/disks/read",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Disk Pool Operator",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Disk Restore Operator
+
+Provides permission to backup vault to perform disk restore. [Learn more](../backup/restore-managed-disks.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/write | Creates a new Disk or updates an existing one |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Provides permission to backup vault to perform disk restore.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b50d9833-a0cb-478e-945f-707fcc997c13",
+ "name": "b50d9833-a0cb-478e-945f-707fcc997c13",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Compute/disks/write",
+ "Microsoft.Compute/disks/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Disk Restore Operator",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Disk Snapshot Contributor
+
+Provides permission to backup vault to manage disk snapshots. [Learn more](../backup/backup-managed-disks.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/delete | Delete a Snapshot |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/write | Create a new Snapshot or update an existing one |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/read | Get the properties of a Snapshot |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/beginGetAccess/action | Get the SAS URI of the Snapshot for blob access |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/endGetAccess/action | Revoke the SAS URI of the Snapshot |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/beginGetAccess/action | Get the SAS URI of the Disk for blob access |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/write | Creates a storage account with the specified parameters or update the properties or tags or adds custom domain for the specified storage account. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/delete | Deletes an existing storage account. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Provides permission to backup vault to manage disk snapshots.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/7efff54f-a5b4-42b5-a1c5-5411624893ce",
+ "name": "7efff54f-a5b4-42b5-a1c5-5411624893ce",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Compute/snapshots/delete",
+ "Microsoft.Compute/snapshots/write",
+ "Microsoft.Compute/snapshots/read",
+ "Microsoft.Compute/snapshots/beginGetAccess/action",
+ "Microsoft.Compute/snapshots/endGetAccess/action",
+ "Microsoft.Compute/disks/beginGetAccess/action",
+ "Microsoft.Storage/storageAccounts/listkeys/action",
+ "Microsoft.Storage/storageAccounts/write",
+ "Microsoft.Storage/storageAccounts/read",
+ "Microsoft.Storage/storageAccounts/delete"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Disk Snapshot Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ### Virtual Machine Administrator Login View Virtual Machines in the portal and login as administrator [Learn more](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md)
Can read workbooks. [Learn more](../sentinel/tutorial-monitor-your-data.md)
} ```
-## Management + governance
+## Management and governance
### Automation Contributor
Lets you manage tags on entities, without providing access to the entities thems
} ```
-## Other
+## Virtual desktop infrastructure
-### Azure Digital Twins Data Owner
+### Desktop Virtualization Application Group Contributor
-Full access role for Digital Twins data-plane [Learn more](../digital-twins/concepts-security.md)
+Contributor of the Desktop Virtualization Application Group. [Learn more](../virtual-desktop/rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | *none* | |
+> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/applicationgroups/* | |
+> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/hostpools/read | Read hostpools |
+> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/hostpools/sessionhosts/read | Read hostpools/sessionhosts |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/* | Read, delete, create, or update any Event Route |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/* | Read, create, update, or delete any Digital Twin |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/commands/* | Invoke any Command on a Digital Twin |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/* | Read, create, update, or delete any Digital Twin Relationship |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/* | Read, create, update, or delete any Model |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/* | Query any Digital Twins Graph |
+> | *none* | |
> | **NotDataActions** | | > | *none* | |
Full access role for Digital Twins data-plane [Learn more](../digital-twins/conc
"assignableScopes": [ "/" ],
- "description": "Full access role for Digital Twins data-plane",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/bcd981a7-7f74-457b-83e1-cceb9e632ffe",
- "name": "bcd981a7-7f74-457b-83e1-cceb9e632ffe",
+ "description": "Contributor of the Desktop Virtualization Application Group.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/86240b0e-9422-4c43-887b-b61143f32ba8",
+ "name": "86240b0e-9422-4c43-887b-b61143f32ba8",
"permissions": [ {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.DigitalTwins/eventroutes/*",
- "Microsoft.DigitalTwins/digitaltwins/*",
- "Microsoft.DigitalTwins/digitaltwins/commands/*",
- "Microsoft.DigitalTwins/digitaltwins/relationships/*",
- "Microsoft.DigitalTwins/models/*",
- "Microsoft.DigitalTwins/query/*"
- ],
- "notDataActions": []
- }
- ],
- "roleName": "Azure Digital Twins Data Owner",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
-```
-
-### Azure Digital Twins Data Reader
-
-Read-only role for Digital Twins data-plane properties [Learn more](../digital-twins/concepts-security.md)
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | *none* | |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/read | Read any Digital Twin |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/read | Read any Digital Twin Relationship |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/read | Read any Event Route |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/read | Read any Model |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/action | Query any Digital Twins Graph |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Read-only role for Digital Twins data-plane properties",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/d57506d4-4c8d-48b1-8587-93c323f6a5a3",
- "name": "d57506d4-4c8d-48b1-8587-93c323f6a5a3",
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.DigitalTwins/digitaltwins/read",
- "Microsoft.DigitalTwins/digitaltwins/relationships/read",
- "Microsoft.DigitalTwins/eventroutes/read",
- "Microsoft.DigitalTwins/models/read",
- "Microsoft.DigitalTwins/query/action"
- ],
- "notDataActions": []
- }
- ],
- "roleName": "Azure Digital Twins Data Reader",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
-```
-
-### BizTalk Contributor
-
-Lets you manage BizTalk services, but not access to them.
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | Microsoft.BizTalkServices/BizTalk/* | Create and manage BizTalk services |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
-> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Lets you manage BizTalk services, but not access to them.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/5e3c6656-6cfa-4708-81fe-0de47ac73342",
- "name": "5e3c6656-6cfa-4708-81fe-0de47ac73342",
- "permissions": [
- {
- "actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.BizTalkServices/BizTalk/*",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.ResourceHealth/availabilityStatuses/read",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Support/*"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ],
- "roleName": "BizTalk Contributor",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
-```
-
-### Desktop Virtualization Application Group Contributor
-
-Contributor of the Desktop Virtualization Application Group. [Learn more](../virtual-desktop/rbac.md)
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/applicationgroups/* | |
-> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/hostpools/read | Read hostpools |
-> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/hostpools/sessionhosts/read | Read hostpools/sessionhosts |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Contributor of the Desktop Virtualization Application Group.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/86240b0e-9422-4c43-887b-b61143f32ba8",
- "name": "86240b0e-9422-4c43-887b-b61143f32ba8",
- "permissions": [
- {
- "actions": [
- "Microsoft.DesktopVirtualization/applicationgroups/*",
- "Microsoft.DesktopVirtualization/hostpools/read",
- "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Authorization/*/read",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.Support/*"
- ],
+ "actions": [
+ "Microsoft.DesktopVirtualization/applicationgroups/*",
+ "Microsoft.DesktopVirtualization/hostpools/read",
+ "Microsoft.DesktopVirtualization/hostpools/sessionhosts/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Support/*"
+ ],
"notActions": [], "dataActions": [], "notDataActions": []
Reader of the Desktop Virtualization Workspace. [Learn more](../virtual-desktop/
} ```
-### Disk Backup Reader
-
-Provides permission to backup vault to perform disk backup. [Learn more](../backup/disk-backup-faq.yml)
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/beginGetAccess/action | Get the SAS URI of the Disk for blob access |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
+## Other
-```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Provides permission to backup vault to perform disk backup.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/3e5e47e6-65f7-47ef-90b5-e5dd4d455f24",
- "name": "3e5e47e6-65f7-47ef-90b5-e5dd4d455f24",
- "permissions": [
- {
- "actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.Compute/disks/read",
- "Microsoft.Compute/disks/beginGetAccess/action"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ],
- "roleName": "Disk Backup Reader",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
-```
-### Disk Pool Operator
+### Azure Digital Twins Data Owner
-Provide permission to StoragePool Resource Provider to manage disks added to a disk pool.
+Full access role for Digital Twins data-plane [Learn more](../digital-twins/concepts-security.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/write | Creates a new Disk or updates an existing one |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | *none* | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | *none* | |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/* | Read, delete, create, or update any Event Route |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/* | Read, create, update, or delete any Digital Twin |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/commands/* | Invoke any Command on a Digital Twin |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/* | Read, create, update, or delete any Digital Twin Relationship |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/* | Read, create, update, or delete any Model |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/* | Query any Digital Twins Graph |
> | **NotDataActions** | | > | *none* | |
Provide permission to StoragePool Resource Provider to manage disks added to a d
"assignableScopes": [ "/" ],
- "description": "Used by the StoragePool Resource Provider to manage Disks added to a Disk Pool.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/60fc6e62-5479-42d4-8bf4-67625fcc2840",
- "name": "60fc6e62-5479-42d4-8bf4-67625fcc2840",
+ "description": "Full access role for Digital Twins data-plane",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/bcd981a7-7f74-457b-83e1-cceb9e632ffe",
+ "name": "bcd981a7-7f74-457b-83e1-cceb9e632ffe",
"permissions": [ {
- "actions": [
- "Microsoft.Compute/disks/write",
- "Microsoft.Compute/disks/read",
- "Microsoft.Authorization/*/read",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/resourceGroups/read"
- ],
+ "actions": [],
"notActions": [],
- "dataActions": [],
+ "dataActions": [
+ "Microsoft.DigitalTwins/eventroutes/*",
+ "Microsoft.DigitalTwins/digitaltwins/*",
+ "Microsoft.DigitalTwins/digitaltwins/commands/*",
+ "Microsoft.DigitalTwins/digitaltwins/relationships/*",
+ "Microsoft.DigitalTwins/models/*",
+ "Microsoft.DigitalTwins/query/*"
+ ],
"notDataActions": [] } ],
- "roleName": "Disk Pool Operator",
+ "roleName": "Azure Digital Twins Data Owner",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### Disk Restore Operator
+### Azure Digital Twins Data Reader
-Provides permission to backup vault to perform disk restore. [Learn more](../backup/restore-managed-disks.md)
+Read-only role for Digital Twins data-plane properties [Learn more](../digital-twins/concepts-security.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/write | Creates a new Disk or updates an existing one |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/read | Get the properties of a Disk |
+> | *none* | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | *none* | |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/read | Read any Digital Twin |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/read | Read any Digital Twin Relationship |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/read | Read any Event Route |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/read | Read any Model |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/action | Query any Digital Twins Graph |
> | **NotDataActions** | | > | *none* | |
Provides permission to backup vault to perform disk restore. [Learn more](../bac
"assignableScopes": [ "/" ],
- "description": "Provides permission to backup vault to perform disk restore.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b50d9833-a0cb-478e-945f-707fcc997c13",
- "name": "b50d9833-a0cb-478e-945f-707fcc997c13",
+ "description": "Read-only role for Digital Twins data-plane properties",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/d57506d4-4c8d-48b1-8587-93c323f6a5a3",
+ "name": "d57506d4-4c8d-48b1-8587-93c323f6a5a3",
"permissions": [ {
- "actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Compute/disks/write",
- "Microsoft.Compute/disks/read"
- ],
+ "actions": [],
"notActions": [],
- "dataActions": [],
+ "dataActions": [
+ "Microsoft.DigitalTwins/digitaltwins/read",
+ "Microsoft.DigitalTwins/digitaltwins/relationships/read",
+ "Microsoft.DigitalTwins/eventroutes/read",
+ "Microsoft.DigitalTwins/models/read",
+ "Microsoft.DigitalTwins/query/action"
+ ],
"notDataActions": [] } ],
- "roleName": "Disk Restore Operator",
+ "roleName": "Azure Digital Twins Data Reader",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### Disk Snapshot Contributor
+### BizTalk Contributor
-Provides permission to backup vault to manage disk snapshots. [Learn more](../backup/backup-managed-disks.md)
+Lets you manage BizTalk services, but not access to them.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | Microsoft.BizTalkServices/BizTalk/* | Create and manage BizTalk services |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/delete | Delete a Snapshot |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/write | Create a new Snapshot or update an existing one |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/read | Get the properties of a Snapshot |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/beginGetAccess/action | Get the SAS URI of the Snapshot for blob access |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/endGetAccess/action | Revoke the SAS URI of the Snapshot |
-> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/beginGetAccess/action | Get the SAS URI of the Disk for blob access |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/write | Creates a storage account with the specified parameters or update the properties or tags or adds custom domain for the specified storage account. |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/delete | Deletes an existing storage account. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Provides permission to backup vault to manage disk snapshots. [Learn more](../ba
"assignableScopes": [ "/" ],
- "description": "Provides permission to backup vault to manage disk snapshots.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/7efff54f-a5b4-42b5-a1c5-5411624893ce",
- "name": "7efff54f-a5b4-42b5-a1c5-5411624893ce",
+ "description": "Lets you manage BizTalk services, but not access to them.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/5e3c6656-6cfa-4708-81fe-0de47ac73342",
+ "name": "5e3c6656-6cfa-4708-81fe-0de47ac73342",
"permissions": [ { "actions": [ "Microsoft.Authorization/*/read",
+ "Microsoft.BizTalkServices/BizTalk/*",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
+ "Microsoft.Resources/deployments/*",
"Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Compute/snapshots/delete",
- "Microsoft.Compute/snapshots/write",
- "Microsoft.Compute/snapshots/read",
- "Microsoft.Compute/snapshots/beginGetAccess/action",
- "Microsoft.Compute/snapshots/endGetAccess/action",
- "Microsoft.Compute/disks/beginGetAccess/action",
- "Microsoft.Storage/storageAccounts/listkeys/action",
- "Microsoft.Storage/storageAccounts/write",
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/delete"
+ "Microsoft.Support/*"
], "notActions": [], "dataActions": [], "notDataActions": [] } ],
- "roleName": "Disk Snapshot Contributor",
+ "roleName": "BizTalk Contributor",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" }
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Click the resource provider name in the following table to see the list of opera
| [Microsoft.DevTestLab](#microsoftdevtestlab) | | [Microsoft.LabServices](#microsoftlabservices) | | [Microsoft.VisualStudio](#microsoftvisualstudio) |
-| **Migrate** |
+| **Migration** |
| [Microsoft.Migrate](#microsoftmigrate) | | [Microsoft.OffAzure](#microsoftoffazure) | | **Monitor** |
Click the resource provider name in the following table to see the list of opera
| [Microsoft.OperationalInsights](#microsoftoperationalinsights) | | [Microsoft.OperationsManagement](#microsoftoperationsmanagement) | | [Microsoft.WorkloadMonitor](#microsoftworkloadmonitor) |
-| **Management + governance** |
+| **Management and governance** |
| [Microsoft.Advisor](#microsoftadvisor) | | [Microsoft.Authorization](#microsoftauthorization) | | [Microsoft.Automation](#microsoftautomation) |
Click the resource provider name in the following table to see the list of opera
| [Microsoft.Subscription](#microsoftsubscription) | | **Intune** | | [Microsoft.Intune](#microsoftintune) |
-| **Other** |
+| **Virtual desktop infrastructure** |
| [Microsoft.DesktopVirtualization](#microsoftdesktopvirtualization) |
+| **Other** |
| [Microsoft.DigitalTwins](#microsoftdigitaltwins) | | [Microsoft.ServicesHub](#microsoftserviceshub) |
Azure service: [Azure DevOps](/azure/devops/)
> | Microsoft.VisualStudio/Project/Delete | Delete Project | > | Microsoft.VisualStudio/Project/Read | Read Project |
-## Migrate
+## Migration
### Microsoft.Migrate
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.WorkloadMonitor/monitors/history/read | Gets the history of health changes of a specific monitor | > | Microsoft.WorkloadMonitor/operations/read | Gets a list of the supported operations |
-## Management + governance
+## Management and governance
### Microsoft.Advisor
Azure service: Microsoft Monitoring Insights
> | Microsoft.Intune/diagnosticsettings/delete | Deleting a diagnostic setting | > | Microsoft.Intune/diagnosticsettingscategories/read | Reading a diagnostic setting categories |
-## Other
+## Virtual desktop infrastructure
### Microsoft.DesktopVirtualization
Azure service: [Windows Virtual Desktop](../virtual-desktop/index.yml)
> | **DataAction** | **Description** | > | Microsoft.DesktopVirtualization/applicationgroups/useapplications/action | Use ApplicationGroup |
+## Other
+ ### Microsoft.DigitalTwins Azure service: [Azure Digital Twins](../digital-twins/index.yml)
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 01/28/2022 Last updated : 02/18/2022
This article answers some common questions about Azure role-based access control
Azure supports up to **2000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. If you get the error message "No more role assignments can be created (code: RoleAssignmentLimitExceeded)" when you try to assign a role, try to reduce the number of role assignments in the subscription. > [!NOTE]
-> Starting November 2021, the role assignments limit for a subscription is being increased from **2000** to **4000** over the next several months. Subscriptions that are near the limit will be prioritized first. The limit for the remaining subscriptions will be increased over time. Once the limit increase process is started for a subscription, it still takes multiple weeks to increase the limit.
+> Starting November 2021, the role assignments limit for all Azure subscriptions is being automatically increased from **2000** to **4000**. There is no action that you need to take for your subscription. The limit increase will take several months.
If you are getting close to this limit, here are some ways that you can reduce the number of role assignments:
search Cognitive Search Skill Annotation Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-annotation-language.md
All the data is under one root element, for which the path is `"/document"`. The
## Simple paths Simple paths through the internal enriched document can be expressed with simple tokens separated by slashes.
-This syntax is similar to [the JSON Pointer specification](https://datatracker.ietf.org/doc/html/rfc6901.htmlhttps://datatracker.ietf.org/doc/html/rfc6901.html).
+This syntax is similar to [the JSON Pointer specification](https://datatracker.ietf.org/doc/html/rfc6901.html).
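As a minimal illustration (the input name `text` is only an example), a skillset input that reads a document's extracted text references a simple path this way:

```json
"inputs": [
  { "name": "text", "source": "/document/content" }
]
```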
### Object properties
Parentheses can be used to change or disambiguate evaluation order.
## See also + [Create a skillset in Azure Cognitive Search](cognitive-search-defining-skillset.md)
-+ [Reference annotations in an Azure Cognitive Search skillset](cognitive-search-concept-annotations-syntax.md)
++ [Reference annotations in an Azure Cognitive Search skillset](cognitive-search-concept-annotations-syntax.md)
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
Previously updated : 01/19/2022 Last updated : 02/21/2022 # Index data from Azure Files
Last updated 01/19/2022
> [!IMPORTANT] > Azure Files indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to create the indexer data source.
-Configure a [search indexer](search-indexer-overview.md) to extract content from Azure File Storage and make it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure File Storage and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your files in a single share. Output is a search index with searchable content and metadata stored in individual fields.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing files in Azure Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing files in Azure Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
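For orientation, the first step of that workflow (creating the data source) might look like the following sketch, assuming the preview `azurefile` data source type. The connection string, share name, and folder are placeholders; the data source name matches the indexer example later in this article.

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name": "my-file-datasource",
  "type": "azurefile",
  "credentials": { "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account key>;" },
  "container": { "name": "<file share name>", "query": "<optional folder>" }
}
```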
## Prerequisites
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions. ++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.+ ## Supported document formats The Azure Files indexer can extract text from the following document formats:
In the [search index](search-what-is-an-index.md), add fields to accept the cont
{ "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }, { "name": "metadata_storage_path", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }, { "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
- { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true }
] } ```
In the [search index](search-what-is-an-index.md), add fields to accept the cont
+ **metadata_storage_content_md5** (`Edm.String`) - MD5 hash of the file content, if available. + **metadata_storage_sas_token** (`Edm.String`) - A temporary SAS token that can be used by [custom skills](cognitive-search-custom-skill-interface.md) to get access to the file. This token shouldn't be stored for later use as it might expire.
-## Configure the file indexer
+## Configure and run the file indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. Under "configuration", you can specify which files are indexed by file type or by properties on the files themselves.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30 {
- "name" : "my-file-indexer,
+ "name" : "my-file-indexer",
"dataSourceName" : "my-file-datasource", "targetIndexName" : "my-search-index", "parameters": {
- "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
- "configuration:" {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration": {
"indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg" }
Indexer configuration specifies the inputs, parameters, and properties controlli
1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
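For example, an on-demand run of the indexer created above is a single call with no request body:

```http
POST https://[service name].search.windows.net/indexers/my-file-indexer/run?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]
```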
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
+
+Execution history contains up to 50 of the most recently completed executions, which are sorted in reverse chronological order so that the latest execution comes first.
+ ## Next steps You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
Last updated 02/28/2022
# Index data from Azure SQL
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure SQL and makes it searchable in Azure Cognitive Search. The workflow creates a search index and loads it with text extracted from Azure SQL Database and Azure SQL managed instances.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure SQL Database or an Azure SQL managed instance and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information about settings that are specific to Azure SQL. You can create indexers using the [Azure portal](https://portal.azure.com), [Search REST APIs](/rest/api/searchservice/Indexer-operations) or an Azure SDK. This article uses REST to explain each step.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Azure SQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
## Prerequisites + An [Azure SQL database](../azure-sql/database/sql-database-paas-overview.md) with data in a single table or view. Use a table if you want the ability to [index incremental updates](#CaptureChangedRows) using SQL's native change detection capabilities.
-+ Read permissions. Azure Cognitive Search supports SQL Server authentication, where the user name and password are provided on the connection string. Alternatively, you can [set up a managed identity and use Azure roles](search-howto-managed-identities-sql.md) to omit credentials on the connection.
++ Read permissions. Azure Cognitive Search supports SQL Server authentication, where the user name and password are provided on the connection string. Alternatively, you can [set up a managed identity and use Azure roles](search-howto-managed-identities-sql.md) to omit credentials on the connection.+++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer. <!-- Real-time data synchronization must not be an application requirement. An indexer can reindex your table at most every five minutes. If your data changes frequently, and those changes need to be reflected in the index within seconds or single minutes, we recommend using the [REST API](/rest/api/searchservice/AddUpdate-or-Delete-Documents) or [.NET SDK](search-get-started-dotnet.md) to push updated rows directly.
In a [search index](search-what-is-an-index.md), add fields to accept values fro
## Configure and run the Azure SQL indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
Indexer configuration specifies the inputs, parameters, and properties controlli
"disabled": null, "schedule": null, "parameters": {
- "batchSize": null,
- "maxFailedItems": 0,
- "maxFailedItemsPerBatch": 0,
- "base64EncodeKeys": false,
- "configuration": {
- "queryTimeout": "00:05:00",
- "disableOrderByHighWaterMarkColumn": false
+ "batchSize": null,
+ "maxFailedItems": 0,
+ "maxFailedItemsPerBatch": 0,
+ "base64EncodeKeys": false,
+ "configuration": {
+ "queryTimeout": "00:05:00",
+ "disableOrderByHighWaterMarkColumn": false
} }, "fieldMappings": [],
Execution history contains up to 50 of the most recently completed executions, w
If your SQL database supports [change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), a search indexer can pick up just the new and updated content on subsequent indexer runs. Azure Cognitive Search provides two change detection policies to support incremental indexing.
-Within an indexer definition, you can specify a change detection policies that tells the indexer which change tracking mechanism is used on your table or view. There are two policies to choose from:
+Within the data source definition, you can specify a change detection policy that tells the indexer which change tracking mechanism is used on your table or view. There are two policies to choose from (a sketch of the first appears after this list):
+ "SqlIntegratedChangeTrackingPolicy" (applies to tables only)
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Last updated 03/19/2021
# Indexer connections to SQL Server on an Azure virtual machine
-When configuring an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#faq) to extract content from a database on an Azure virtual machine, additional steps are required for secure connections.
+When configuring an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) to extract content from a database on an Azure virtual machine, additional steps are required for secure connections.
A connection from Azure Cognitive Search to SQL Server on a virtual machine is a public internet connection. In order for secure connections to succeed, complete the following steps:
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
Last updated 06/26/2021
# Indexer connections to Azure SQL Managed Instance through a public endpoint
-If you are setting up an Azure Cognitive Search indexer that connects to an Azure SQL managed instance, you will need to enable a public endpoint on the managed instance as a prerequisite. An indexer connects to a managed instance over a public endpoint.
+If you are setting up an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) that connects to an Azure SQL managed instance, you'll need to enable a public endpoint on the managed instance as a prerequisite. By default, an indexer connects to a managed instance over a public endpoint. You can also use a [private endpoint](search-indexer-howto-access-private.md).
This article provides basic steps that include collecting information necessary for data source configuration. For more information and methodologies, see [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md).
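For reference, a data source that connects through the public endpoint typically uses the instance's public FQDN and port 3342 in the connection string. The following is a sketch with placeholder names, not values taken from this article:

```json
{
  "name": "my-sqlmi-datasource",
  "type": "azuresql",
  "credentials": {
    "connectionString": "Server=tcp:<instance name>.public.<dns zone>.database.windows.net,3342;Database=<database>;User ID=<user>;Password=<password>;Encrypt=True;Trusted_Connection=False;Connection Timeout=30;"
  },
  "container": { "name": "<table or view>" }
}
```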
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
There are several ways to run an indexer:
Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection.
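For example, a schedule is expressed on the indexer definition as an ISO 8601 interval; the two-hour interval and start time below are arbitrary values for illustration:

```json
"schedule": {
  "interval": "PT2H",
  "startTime": "2022-01-01T00:00:00Z"
}
```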
-## Change detection and internal state
+## Check results
-Change detection logic is a capability that's built into source platforms. If your data source support change detection, an indexer can detect changes in the underlying data and only process new or updated documents on each indexer run, leaving unchanged content as-is. If indexer execution history says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
+[Monitor indexer status](search-howto-monitor-indexers.md) to check on execution. Successful execution can still include warnings and notifications. Be sure to check both successful and failed status notifications for details about the job.
-How an indexer supports change detection varies by data source:
+For content verification, [run queries](search-query-create.md) on the populated index that return entire documents or selected fields.
-+ Azure Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. The various indexers use this information to determine which documents to update in the index. Built-in change detection means that an indexer can recognize new and updated documents automatically.
+## Change detection and internal state
-+ Azure SQL and Cosmos DB provide change detection features in their platforms. You can specify the change detection policy in your data source definition.
+If your data source supports change detection, an indexer can detect underlying changes in the data and process just the new or updated documents on each indexer run, leaving unchanged content as-is. If indexer execution history says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
-For large indexing loads, an indexer also keeps track of the last document it processed through an internal "high water mark". The marker is never exposed in the API, but internally the indexer keeps track of where it stopped. When indexing resumes, either through a scheduled run or an on-demand invocation, the indexer references the high water mark so that it can pick up where it left off.
+Change detection logic is built into the data platforms. How an indexer supports change detection varies by data source:
-If you need to clear the high water mark to re-index in full, you can use [Reset Indexer](/rest/api/searchservice/reset-indexer). For more selective re-indexing, use [Reset Skills](/rest/api/searchservice/preview-api/reset-skills) or [Reset Documents](/rest/api/searchservice/preview-api/reset-documents). Through the reset APIs, you can clear internal state, and also flush the cache if you enabled [incremental enrichment](search-howto-incremental-index.md). For more background and comparison of each reset option, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
++ Azure Storage has built-in change detection, which means an indexer can recognize new and updated documents automatically. Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. An indexer can use this information to determine which documents to update in the index.
-## Check results
++ Azure SQL and Cosmos DB provide optional change detection features in their platforms. You can specify the change detection policy in your data source definition.
-[Monitor indexer status](search-howto-monitor-indexers.md) to check for status. Successful execution can still include warning and notifications. Be sure to check both successful and failed status notifications for details about the job.
+For large indexing loads, an indexer also keeps track of the last document it processed through an internal "high water mark". The marker is never exposed in the API, but internally the indexer keeps track of where it stopped. When indexing resumes, either through a scheduled run or an on-demand invocation, the indexer references the high water mark so that it can pick up where it left off.
-For content verification, [run queries](search-query-create.md) on the populated index that return entire documents or selected fields.
+If you need to clear the high water mark to re-index in full, you can use [Reset Indexer](/rest/api/searchservice/reset-indexer). For more selective re-indexing, use [Reset Skills](/rest/api/searchservice/preview-api/reset-skills) or [Reset Documents](/rest/api/searchservice/preview-api/reset-documents). Through the reset APIs, you can clear internal state, and also flush the cache if you enabled [incremental enrichment](search-howto-incremental-index.md). For more background and comparison of each reset option, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
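For example, clearing the high water mark so that the next run reprocesses all documents is a single call (the indexer name is a placeholder):

```http
POST https://[service name].search.windows.net/indexers/my-indexer/reset?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]
```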
## Next steps
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
Last updated 02/11/2022
# Index data from Azure Data Lake Storage Gen2
-Configure a [search indexer](search-indexer-overview.md) to extract content and metadata from Azure Data Lake Storage (ADLS) Gen2 and make it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Data Lake Storage (ADLS) Gen2 and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
-ADLS Gen2 is available through Azure Storage. When setting up a storage account, you have the option of enabling [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md), organizing files into a hierarchy of directories and nested subdirectories. By enabling a hierarchical namespace, you enable ADLS Gen2.
-
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from ADLS Gen2. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing from ADLS Gen2. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub. ## Prerequisites
-+ [ADLS Gen2](../storage/blobs/data-lake-storage-introduction.md) with [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) enabled.
++ [ADLS Gen2](../storage/blobs/data-lake-storage-introduction.md) with [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) enabled. ADLS Gen2 is available through Azure Storage. When setting up a storage account, you have the option of enabling [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md), organizing files into a hierarchy of directories and nested subdirectories. By enabling a hierarchical namespace, you enable ADLS Gen2. + [Access tiers](../storage/blobs/access-tiers-overview.md) for ADLS Gen2 include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://githu
+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions. ++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.+ ## Access control ADLS Gen2 implements an [access control model](../storage/blobs/data-lake-storage-access-control.md) that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
In a [search index](search-what-is-an-index.md), add fields to accept the conten
1. Add fields for standard metadata properties. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties.
-## Configure the ADLS Gen2 indexer
+## Configure and run the ADLS Gen2 indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. The "configuration" section determines what content gets indexed.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Indexer configuration specifies the inputs, parameters, and properties controlli
"dataSourceName" : "my-adlsgen2-datasource", "targetIndexName" : "my-search-index", "parameters": {
- "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
- "configuration:" {
- "indexedFileNameExtensions" : ".pdf,.docx",
- "excludedFileNameExtensions" : ".png,.jpeg",
- "dataToExtract": "contentAndMetadata",
- "parsingMode": "default",
- "imageAction": "none"
- }
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration:" {
+ "indexedFileNameExtensions" : ".pdf,.docx",
+ "excludedFileNameExtensions" : ".png,.jpeg",
+ "dataToExtract": "contentAndMetadata",
+ "parsingMode": "default",
+ "imageAction": "none"
+ }
}, "schedule" : { }, "fieldMappings" : [ ]
Indexer configuration specifies the inputs, parameters, and properties controlli
In blob indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" and metadata properties to similarly named and typed fields in an index. For metadata properties, the indexer will automatically replace hyphens `-` with underscores in the search index.
-1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties. For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
+
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
-For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
+Execution history contains up to 50 of the most recently completed executions, which are sorted in reverse chronological order so that the latest execution comes first.
## How blobs are indexed
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Last updated 02/15/2022
> [!IMPORTANT] > The Gremlin API indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-This article shows you how to configure an Azure Cosmos DB indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Cosmos DB using the [Gremlin API](../cosmos-db/choose-api.md#gremlin-api).
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
-Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [Gremlin API](../cosmos-db/choose-api.md#gremlin-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-By default the Azure Cognitive Search Cosmos DB Gremlin API indexer will make every vertex in your graph a document in the index. Edges will be ignored. Alternatively, you could set the query to only index the edges.
-
-Although Cosmos DB indexing is easiest with the [Import data wizard](search-import-data-portal.md), this article uses the REST APIs to explain concepts and steps.
+Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites
Although Cosmos DB indexing is easiest with the [Import data wizard](search-impo
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
-Unfamiliar with indexers? See [**Create an indexer**](search-howto-create-indexers.md) before you get started.
++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer. ## Define the data source
For this call, specify a [preview REST API version](search-api-preview.md) (2020
1. Set "credentials" to a connection string. The next section describes the supported formats.
-1. Set "container" to the collection. The "name" property is required and it specifies the ID of the graph. The "query" property is optional. The query default is `g.V()`. To index the edges, set the query to `g.E()`.
+1. Set "container" to the collection. The "name" property is required and it specifies the ID of the graph.
+
+ The "query" property is optional. By default the Azure Cognitive Search Cosmos DB Gremlin API indexer will make every vertex in your graph a document in the index. Edges will be ignored. The query default is `g.V()`. Alternatively, you could set the query to only index the edges. To index the edges, set the query to `g.E()`.
1. [Set "dataChangeDetectionPolicy"](#DataChangeDetectionPolicy) if data is volatile and you want the indexer to pick up just the new and updated items on subsequent runs. Incremental progress will be enabled by default using `_ts` as the high water mark column.
In a [search index](search-what-is-an-index.md), add fields to accept the source
## Configure and run the Cosmos DB indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Indexer configuration specifies the inputs, parameters, and properties controlli
"disabled": null, "schedule": null, "parameters": {
- "batchSize": null,
- "maxFailedItems": 0,
- "maxFailedItemsPerBatch": 0,
- "base64EncodeKeys": false,
- "configuration": {}
- },
+ "batchSize": null,
+ "maxFailedItems": 0,
+ "maxFailedItemsPerBatch": 0,
+ "base64EncodeKeys": false,
+ "configuration": {}
+ },
"fieldMappings": [], "encryptionKey": null }
Indexer configuration specifies the inputs, parameters, and properties controlli
1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
+
+Execution history contains up to 50 of the most recently completed executions, sorted in reverse chronological order so that the latest execution comes first.
+ <a name="DataChangeDetectionPolicy"></a>
-## Indexing changed documents
+## Indexing new and changed documents
+
+Once an indexer has fully populated a search index, you might want subsequent indexer runs to incrementally index just the new and changed documents in your database.
-The purpose of a data change detection policy is to efficiently identify changed data items. Currently, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB, which is specified in the data source definition as follows:
+To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. For Cosmos DB, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+
+The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
```http "dataChangeDetectionPolicy": {
The purpose of a data change detection policy is to efficiently identify changed
}, ```
-Using this policy is highly recommended to ensure good indexer performance.
- <a name="DataDeletionDetectionPolicy"></a> ## Indexing deleted documents
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Last updated 02/15/2022
> [!IMPORTANT] > MongoDB API support is currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-This article shows you how to configure an Azure Cosmos DB [indexer](search-indexer-overview.md) to extract content and make it searchable in Azure Cognitive Search. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB using the [MongoDB API](../cosmos-db/choose-api.md#api-for-mongodb).
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
-Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [MongoDB API](../cosmos-db/choose-api.md#api-for-mongodb). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-Although Cosmos DB indexing is easiest with the [Import data wizard](search-import-data-portal.md), this article uses the REST APIs to explain concepts and steps.
+Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites
Although Cosmos DB indexing is easiest with the [Import data wizard](search-impo
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
-Unfamiliar with indexers? See [**Create an indexer**](search-howto-create-indexers.md) before you get started.
++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer. ## Define the data source
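As with the other Cosmos DB APIs, the data source type is "cosmosdb"; the MongoDB collection is identified through `ApiKind=MongoDb` in the connection string. A minimal sketch, posted to the datasources endpoint with a preview API version and using placeholder names, might look like this:

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "my-cosmosdb-mongodb-ds",
    "type": "cosmosdb",
    "credentials": {
        "connectionString": "AccountEndpoint=https://[cosmos-account-name].documents.azure.com;AccountKey=[cosmos-account-key];Database=[database-name];ApiKind=MongoDb;"
    },
    "container": {
        "name": "[collection-name]"
    }
}
```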
In a [search index](search-what-is-an-index.md), add fields to accept the source
## Configure and run the Cosmos DB indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Indexer configuration specifies the inputs, parameters, and properties controlli
"disabled": null, "schedule": null, "parameters": {
- "batchSize": null,
- "maxFailedItems": 0,
- "maxFailedItemsPerBatch": 0,
- "base64EncodeKeys": false,
- "configuration": {}
- },
+ "batchSize": null,
+ "maxFailedItems": 0,
+ "maxFailedItemsPerBatch": 0,
+ "base64EncodeKeys": false,
+ "configuration": {}
+ },
"fieldMappings": [], "encryptionKey": null }
Indexer configuration specifies the inputs, parameters, and properties controlli
1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
+
+Execution history contains up to 50 of the most recently completed executions, sorted in reverse chronological order so that the latest execution comes first.
+ <a name="DataChangeDetectionPolicy"></a>
-## Indexing changed documents
+## Indexing new and changed documents
+
+Once an indexer has fully populated a search index, you might want subsequent indexer runs to incrementally index just the new and changed documents in your database.
-The purpose of a data change detection policy is to efficiently identify changed data items. Currently, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB, which is specified in the data source definition as follows:
+To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. For Cosmos DB, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+
+The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
```http "dataChangeDetectionPolicy": {
The purpose of a data change detection policy is to efficiently identify changed
}, ```
-Using this policy is highly recommended to ensure good indexer performance.
- <a name="DataDeletionDetectionPolicy"></a> ## Indexing deleted documents
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
Last updated 02/15/2022
# Index data from Azure Cosmos DB using the SQL API
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search. The workflow creates a search index and loads it with text extracted from Azure Cosmos DB using the [SQL API](../cosmos-db/choose-api.md#coresql-api).
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
-Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [SQL API](../cosmos-db/choose-api.md#coresql-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information about settings that are specific to Cosmos DB SQL API. You can create indexers using the [Azure portal](https://portal.azure.com), [Search REST APIs](/rest/api/searchservice/Indexer-operations) or an Azure SDK. This article uses REST to explain each step.
+Because terminology can be confusing, it's worth noting that [Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
## Prerequisites
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions. ++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer. + ## Define the data source The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
In a [search index](search-what-is-an-index.md), add fields to accept the source
## Configure and run the Cosmos DB indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Indexer configuration specifies the inputs, parameters, and properties controlli
"disabled": null, "schedule": null, "parameters": {
- "batchSize": null,
- "maxFailedItems": 0,
- "maxFailedItemsPerBatch": 0,
- "base64EncodeKeys": false,
- "configuration": {}
- },
+ "batchSize": null,
+ "maxFailedItems": 0,
+ "maxFailedItemsPerBatch": 0,
+ "base64EncodeKeys": false,
+ "configuration": {}
+ },
"fieldMappings": [], "encryptionKey": null }
Execution history contains up to 50 of the most recently completed executions, w
<a name="DataChangeDetectionPolicy"></a>
-## Indexing changed documents
+## Indexing new and changed documents
-The purpose of a data change detection policy is to efficiently identify changed data items. Currently, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB, which is specified in the data source definition as follows:
+Once an indexer has fully populated a search index, you might want subsequent indexer runs to incrementally index just the new and changed documents in your database.
+
+To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your data source definition. For Cosmos DB, the only supported policy is the [`HighWaterMarkChangeDetectionPolicy`](/dotnet/api/azure.search.documents.indexes.models.highwatermarkchangedetectionpolicy) using the `_ts` (timestamp) property provided by Azure Cosmos DB.
+
+The following example shows a [data source definition](#define-the-data-source) with a change detection policy:
```http "dataChangeDetectionPolicy": {
The purpose of a data change detection policy is to efficiently identify changed
}, ```
-Using this policy is highly recommended to ensure good indexer performance.
-
-If you're using a custom query, make sure that the `_ts` property is projected by the query.
- <a name="IncrementalProgress"></a>
-### Incremental progress and custom queries
+### Incremental indexing and custom queries
-Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or execution time limit, the indexer can pick up where it left off next time it runs, instead of having to reindex the entire collection from scratch. This is especially important when indexing large collections.
+If you're using a [custom query to retrieve documents](#flatten-structures), make sure the query orders the results by the `_ts` column. This enables periodic check-pointing that Azure Cognitive Search uses to provide incremental progress in the presence of failures.
-To enable incremental progress when using a custom query, ensure that your query orders the results by the `_ts` column. This enables periodic check-pointing that Azure Cognitive Search uses to provide incremental progress in the presence of failures.
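For example, a container definition with a custom query sketched like the following keeps results ordered on `_ts` (the `@HighWaterMark` parameter is the value the indexer supplies at run time; the container name is a placeholder):

```json
"container": {
    "name": "[container-name]",
    "query": "SELECT * FROM c WHERE c._ts >= @HighWaterMark ORDER BY c._ts"
}
```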
+In some cases, even if your query contains an `ORDER BY [collection alias]._ts` clause, Azure Cognitive Search may not infer that the query is ordered by `_ts`. You can explicitly tell Azure Cognitive Search that the results are ordered by `_ts` by setting the `assumeOrderByHighWaterMarkColumn` configuration property.
-In some cases, even if your query contains an `ORDER BY [collection alias]._ts` clause, Azure Cognitive Search may not infer that the query is ordered by the `_ts`. You can tell Azure Cognitive Search that results are ordered by using the `assumeOrderByHighWaterMarkColumn` configuration property. To specify this hint, create or update your indexer as follows:
+To specify this hint, [create or update your indexer definition](#configure-and-run-the-cosmos-db-indexer) as follows:
```http {
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
Last updated 01/27/2022
> [!IMPORTANT] > MySQL support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API](search-api-preview.md) (2020-06-30-preview or later) to index your content. There is currently no portal support.
-Configure a [search indexer](search-indexer-overview.md) to extract content from Azure Database for MySQL and make it searchable in Azure Cognitive Search. The indexer will crawl your MySQL database on Azure, extract searchable data, and index it in Azure Cognitive Search. When configured to include a high water mark and soft deletion, the indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Database for MySQL and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing files in Azure DB for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing content in Azure Database for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. When configured to include a high water mark and soft deletion, the indexer picks up all changes, uploads, and deletes in your MySQL database and reflects these changes in your search index. Data extraction occurs when you submit the Create Indexer request.
## Prerequisites
In a [search index](search-what-is-an-index.md), add search index fields that co
If the primary key in the source table matches the document key (in this case, "ID"), the indexer will import the primary key as the document key.
-## Configure the MySQL indexer
+## Configure and run the MySQL indexer
-Once the index and data source have been created, you're ready to create the indexer.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-[Create or Update Indexer](/rest/api/searchservice/create-indexer) specifies the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
-```http
-POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30
+ ```http
+ POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30
+
+ {
+ "name" : "hotels-mysql-idxr",
+ "dataSourceName" : "hotels-mysql-ds",
+ "targetIndexName" : "hotels-mysql-ix",
+ "disabled": null,
+ "schedule": null,
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration": { }
+ },
+ "fieldMappings" : [ ],
+ "encryptionKey": null
+ }
+ ```
-{
- "name" : "hotels-mysql-idxr",
- "dataSourceName" : "hotels-mysql-ds",
- "targetIndexName" : "hotels-mysql-ix",
- "disabled": null,
- "schedule": null,
- "parameters": {
- "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
- "configuration": { }
- },
- "fieldMappings" : [ ],
- "encryptionKey": null
-}
-```
+1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
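   For example, a mapping that routes a source column into a differently named index field is a short entry in the "fieldMappings" array (the field names here are placeholders):

   ```json
   "fieldMappings": [
       {
           "sourceFieldName": "Description",
           "targetFieldName": "HotelDescription"
       }
   ]
   ```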
-By default, the indexer runs when it's created on the search service. You can set "disabled" to true if you want to run the indexer manually.
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
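For example, a schedule that runs the indexer every 15 minutes can be set through the "schedule" property when you create or update the indexer:

```http
PUT https://[search service name].search.windows.net/indexers/hotels-mysql-idxr?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "dataSourceName" : "hotels-mysql-ds",
    "targetIndexName" : "hotels-mysql-ix",
    "schedule" : {
        "interval" : "PT15M",
        "startTime" : "2022-01-01T00:00:00Z"
    }
}
```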
-You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md).
+## Check indexer status
-To put the indexer on a schedule, set the "schedule" property when creating or updating the indexer. Here is an example of a schedule that runs every 15 minutes.
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
```http
-PUT https://[search service name].search.windows.net/indexers/hotels-mysql-idxr?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin-key]
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
-{
- "dataSourceName" : "hotels-mysql-ds",
- "targetIndexName" : "hotels-mysql-ix",
- "schedule" : {
- "interval" : "PT15M",
- "startTime" : "2022-01-01T00:00:00Z"
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
}
-}
```
+Execution history contains up to 50 of the most recently completed executions, sorted in reverse chronological order so that the latest execution comes first.
+ ## Capture new, changed, and deleted rows If your data source meets the requirements for change and deletion detection, the indexer can incrementally index the changes in your data source since the last indexer job, which means you can avoid having to re-index the entire table or view every time an indexer runs.
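As a sketch, change detection relies on a high water mark column (such as a last-updated timestamp) and deletion detection relies on a soft-delete column, both declared in the data source definition. The column names and marker value below are placeholders:

```json
"dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "[last-updated column]"
},
"dataDeletionDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
    "softDeleteColumnName": "[is-deleted column]",
    "softDeleteMarkerValue": "true"
}
```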
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Last updated 02/11/2022
# Index data from Azure Blob Storage
-Configure a [search indexer](search-indexer-overview.md) to extract content and metadata from Azure Blob Storage and make it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Blob Storage and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
-Blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing. This article focuses on indexers for text-based indexing, where just the textual content and metadata are ingested for full text search scenarios.
-
-Inputs to the indexer are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Blob Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Blob Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+Blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing. This article focuses on indexers for text-based indexing, where just the textual content and metadata are ingested for full text search scenarios.
## Prerequisites
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions. ++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.+ <a name="SupportedFormats"></a> ## Supported document formats
In a [search index](search-what-is-an-index.md), add fields to accept the conten
1. Add fields for standard metadata properties. The indexer can read custom metadata properties, [standard metadata](#indexing-blob-metadata) properties, and [content-specific metadata](search-blob-metadata-properties.md) properties.
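   For illustration, an index that receives extracted blob text plus a few standard metadata properties might include fields like the following sketch (the key field name "id" is a placeholder; the metadata field names follow the indexer's built-in naming):

   ```json
   "fields": [
       { "name": "id", "type": "Edm.String", "key": true },
       { "name": "content", "type": "Edm.String", "searchable": true },
       { "name": "metadata_storage_name", "type": "Edm.String", "filterable": true },
       { "name": "metadata_storage_path", "type": "Edm.String" },
       { "name": "metadata_storage_size", "type": "Edm.Int64" }
   ]
   ```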
-## Configure the blob indexer
+## Configure and run the blob indexer
-Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors. The "configuration" section determines what content gets indexed.
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Indexer configuration specifies the inputs, parameters, and properties controlli
"dataSourceName" : "my-blob-datasource", "targetIndexName" : "my-search-index", "parameters": {
- "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
- "configuration:" {
- "indexedFileNameExtensions" : ".pdf,.docx",
- "excludedFileNameExtensions" : ".png,.jpeg",
- "dataToExtract": "contentAndMetadata",
- "parsingMode": "default",
- "imageAction": "none"
- }
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration:" {
+ "indexedFileNameExtensions" : ".pdf,.docx",
+ "excludedFileNameExtensions" : ".png,.jpeg",
+ "dataToExtract": "contentAndMetadata",
+ "parsingMode": "default",
+ "imageAction": "none"
+ }
}, "schedule" : { }, "fieldMappings" : [ ]
Indexer configuration specifies the inputs, parameters, and properties controlli
In blob indexing, you can often omit field mappings because the indexer has built-in support for mapping the "content" and metadata properties to similarly named and typed fields in an index. For metadata properties, the indexer will automatically replace hyphens `-` with underscores in the search index.
-1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties. For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
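When you do need a field mapping, a common case is using the encoded blob path as the document key. A sketch like the following uses the built-in `base64Encode` mapping function; the target field name is a placeholder:

```json
"fieldMappings": [
    {
        "sourceFieldName": "metadata_storage_path",
        "targetFieldName": "id",
        "mappingFunction": { "name": "base64Encode" }
    }
]
```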
+
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
-For the full list of parameter descriptions, see [Blob configuration parameters](/rest/api/searchservice/create-indexer#blob-configuration-parameters) in the REST API.
+Execution history contains up to 50 of the most recently completed executions, sorted in reverse chronological order so that the latest execution comes first.
## How blobs are indexed
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-tables.md
Last updated 02/11/2022
# Index data from Azure Table Storage
-Configure a [search indexer](search-indexer-overview.md) to extract content from Azure Table Storage and make it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Table Storage and makes it searchable in Azure Cognitive Search. Inputs to the indexer are your entities, in a single table. Output is a search index with searchable content and metadata stored in individual fields.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Azure Table Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing from Azure Table Storage. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
## Prerequisites
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions to access Azure Storage. A "full access" connection string includes a key that gives access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions. ++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.+ ## Define the data source The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
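For Table Storage, the data source type is "azuretable", the container names the table, and an optional query can filter entities. A sketch with placeholder names and an example filter might look like this:

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-table-datasource",
    "type": "azuretable",
    "credentials": {
        "connectionString": "DefaultEndpointsProtocol=https;AccountName=[storage account name];AccountKey=[storage account key];"
    },
    "container": {
        "name": "[table name]",
        "query": "PartitionKey eq '123'"
    }
}
```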
In a [search index](search-what-is-an-index.md), add fields to accept the conten
Using the same names and compatible [data types](/rest/api/searchservice/supported-data-types) minimizes the need for [field mappings](search-indexer-field-mappings.md).
-## Configure the table indexer
+## Configure and run the table indexer
+
+Once the index and data source have been created, you're ready to create the indexer. Indexer configuration specifies the inputs, parameters, and properties controlling run time behaviors.
-1. [Create or update an indexer](/rest/api/searchservice/create-indexer) to use the predefined data source and search index.
+1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
In a [search index](search-what-is-an-index.md), add fields to accept the conten
"maxFailedItemsPerBatch": null, "base64EncodeKeys": null, "configuration:" { }
- },
+ },
"schedule" : { }, "fieldMappings" : [ ] } ```
+1. [Specify field mappings](search-indexer-field-mappings.md) if there are differences in field name or type, or if you need multiple versions of a source field in the search index.
+ 1. See [Create an indexer](search-howto-create-indexers.md) for more information about other properties.
+An indexer runs automatically when it's created. You can prevent this by setting "disabled" to true. To control indexer execution, [run an indexer on demand](search-howto-run-reset-indexers.md) or [put it on a schedule](search-howto-schedule-indexers.md).
+
+## Check indexer status
+
+To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:
+
+```http
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+```
+
+The response includes status and the number of items processed. It should look similar to the following example:
+
+```json
+ {
+ "status":"running",
+ "lastResult": {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ "executionHistory":
+ [
+ {
+ "status":"success",
+ "errorMessage":null,
+ "startTime":"2022-02-21T00:23:24.957Z",
+ "endTime":"2022-02-21T00:36:47.752Z",
+ "errors":[],
+ "itemsProcessed":1599501,
+ "itemsFailed":0,
+ "initialTrackingState":null,
+ "finalTrackingState":null
+ },
+ ... earlier history items
+ ]
+ }
+```
+
+Execution history contains up to 50 of the most recently completed executions, sorted in reverse chronological order so that the latest execution comes first.
+ ## Next steps You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Storage:
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
After creating your [JSON configuration file](#create-a-connector-json-configura
1. Prepare an [ARM template JSON file](/azure/templates/microsoft.securityinsights/dataconnectors) for your connector. For example, see the following ARM template JSON files: - Data connector in the [Slack solution](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SlackAudit/Data%20Connectors/SlackNativePollerConnector/azuredeploy_Slack_native_poller_connector.json)
- - [Atlassian Jira Audit data connector](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AtlassianJiraAudit/JiraNativePollerConnector/azuredeploy_Jira_native_poller_connector.json)
+ - [Atlassian Jira Audit data connector](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/AtlassianJiraAudit/Data%20Connectors/JiraNativePollerConnector/azuredeploy_Jira_native_poller_connector.json)
1. In the Azure portal, search for **Deploy a custom template**.
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The descriptor `Dvc` is used for the reporting device, which is the local system
| **NetworkApplicationProtocol** | Optional | String | The application layer protocol used by the connection or session. If the [DstPortNumber](#dstportnumber) value is provided, we recommend that you include **NetworkApplicationProtocol** too. If the value isn't available from the source, derive the value from the [DstPortNumber](#dstportnumber) value.<br><br>Example: `FTP` | | <a name="networkprotocol"></a> **NetworkProtocol** | Optional | Enumerated | The IP protocol used by the connection or session as listed in [IANA protocol assignment](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml), which is typically `TCP`, `UDP`, or `ICMP`.<br><br>Example: `TCP` | | **NetworkProtocolVersion** | Optional | Enumerated | The version of [NetworkProtocol](#networkprotocol). When using it to distinguish between IP version, use the values `IPv4` and `IPv6`. |
-| <a name="networkdirection"></a>**NetworkDirection** | Optional | Enumerated | The direction of the connection or session:<br><br> - For the [EventType](#eventtype) `NetworkSession`, **NetworkDirection** represents the direction relative to the organization or cloud environment boundary. Supported values are `Inbound`, `Outbound`, `Local` (to the organization), `Extenral` (to the organization) or `NA` (Not Applicable).<br><br> - For the [EventType](#eventtype) `EndpointNetworkSession`, **NetworkDirection** represents the direction relative to the endpoint. Supported values are `Inbound`, `Outbound`, `Local` (to the system), 'Listen' or `NA` (Not Applicable). The `Listen` value indicates that a device has started accepting network connections but isn't actually, necessarily, connected. |
+| <a name="networkdirection"></a>**NetworkDirection** | Optional | Enumerated | The direction of the connection or session:<br><br> - For the [EventType](#eventtype) `NetworkSession`, **NetworkDirection** represents the direction relative to the organization or cloud environment boundary. Supported values are `Inbound`, `Outbound`, `Local` (to the organization), `External` (to the organization) or `NA` (Not Applicable).<br><br> - For the [EventType](#eventtype) `EndpointNetworkSession`, **NetworkDirection** represents the direction relative to the endpoint. Supported values are `Inbound`, `Outbound`, `Local` (to the system), `Listen` or `NA` (Not Applicable). The `Listen` value indicates that a device has started accepting network connections but isn't actually, necessarily, connected. |
| <a name="networkduration"></a>**NetworkDuration** | Optional | Integer | The amount of time, in milliseconds, for the completion of the network session or connection.<br><br>Example: `1500` | | **Duration** | Alias | | Alias to [NetworkDuration](#networkduration). | | **NetworkIcmpCode** | Optional | Integer | For an ICMP message, the ICMP message type numeric value as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections. If a [NetworkIcmpType](#networkicmptype) value is provided, this field is mandatory. If the value isn't available from the source, derive the value from the [NetworkIcmpType](#networkicmptype) field instead.<br><br>Example: `34` |
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The currently supported list of vendors and products used in the [EventVendor](#
| Corelight | Zeek | | GCP | Cloud DNS | | Infoblox | NIOS |
-| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - NSGFlow <br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
+| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - Azure NSG flows<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
| Okta | Okta | | Palo Alto | - PanOS<br> - CDL<br> | | Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy |
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
To deploy the Microsoft Sentinel SAP data connector and security content as desc
| Area | Description | | | | |**Azure prerequisites** | **Access to Microsoft Sentinel**. Make a note of your Microsoft Sentinel workspace ID and key to use in this tutorial when you [deploy your SAP data connector](#deploy-your-sap-data-connector). <br><br>To view these details from Microsoft Sentinel, go to **Settings** > **Workspace settings** > **Agents management**. <br><br>**Ability to create Azure resources**. For more information, see the [Azure Resource Manager documentation](../azure-resource-manager/management/manage-resources-portal.md). <br><br>**Access to your Azure key vault**. This tutorial describes the recommended steps for using your Azure key vault to store your credentials. For more information, see the [Azure Key Vault documentation](../key-vault/index.yml). |
-|**System prerequisites** | **Software**. The SAP data connector deployment script automatically installs software prerequisites. For more information, see [Automatically installed software](#automatically-installed-software). <br><br> **System connectivity**. Ensure that the VM serving as your SAP data connector host has access to: <br>- Microsoft Sentinel <br>- Your Azure key vault <br>- The SAP environment host, via the following TCP ports: *32xx*, *5xx13*, and *33xx*, where *xx* is the SAP instance number. <br><br>Make sure that you also have an SAP user account in order to access the SAP software download page.<br><br>**System architecture**. The SAP solution is deployed on a VM as a Docker container, and each SAP client requires its own container instance. For sizing recommendations, see [Recommended virtual machine sizing](sap-solution-detailed-requirements.md#recommended-virtual-machine-sizing). <br>Your VM and the Microsoft Sentinel workspace can be in different Azure subscriptions, and even different Azure AD tenants.|
+|**System prerequisites** | **Software**. The SAP data connector deployment script automatically installs software prerequisites. For more information, see [Automatically installed software](#automatically-installed-software). <br><br> **System connectivity**. Ensure that the VM serving as your SAP data connector host has access to: <br>- Microsoft Sentinel <br>- Your Azure key vault <br>- The SAP environment host, via the following TCP ports: *32xx*, *5xx13*, *33xx*, and *48xx* (if SNC is used), where *xx* is the SAP instance number. <br><br>Make sure that you also have an SAP user account in order to access the SAP software download page.<br><br>**System architecture**. The SAP solution is deployed on a VM as a Docker container, and each SAP client requires its own container instance. For sizing recommendations, see [Recommended virtual machine sizing](sap-solution-detailed-requirements.md#recommended-virtual-machine-sizing). <br>Your VM and the Microsoft Sentinel workspace can be in different Azure subscriptions, and even different Azure AD tenants.|
|**SAP prerequisites** | **Supported SAP versions**. We recommend using [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on older SAP version [SAP_BASIS 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows).<br><br> **SAP system details**. Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as`001`<br><br>**SAP NetWeaver instance access**. Access to your SAP instances must use one of the following options: <br>- [SAP ABAP user/password](#configure-your-sap-system). <br>- A user with an X509 certificate, using SAP CRYPTOLIB PSE. This option might require expert manual steps.<br><br>**Support from your SAP team**. You'll need the support of your SAP team to help ensure that your SAP system is [configured correctly](#configure-your-sap-system) for the solution deployment. | | | |
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
+
+ Title: Integrate Azure Service Bus with Service Connector
+description: Integrate Service Bus into your application with Service Connector
++++ Last updated : 02/21/2022++
+# Integrate Service Bus with Service Connector
+
+This page shows the supported authentication types and client types for Azure Service Bus using Service Connector. You might still be able to connect to Service Bus in other programming languages without using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) that you get when you create service connections. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Spring Cloud
+
+## Supported authentication types and client types
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+| | :-: | :--: | :--: | :--: |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+## Default environment variable names or application properties
+
+### .NET, Java, Node.JS, Python
+
+#### Secret/connection string
+
+> [!div class="mx-tdBreakAll"]
+> |Default environment variable name | Description | Sample value |
+> | -- | -- | |
+> | AZURE_SERVICEBUS_CONNECTIONSTRING | Service Bus connection string | `Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey={****}` |
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| -- | -- | -- |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| - | -| - |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| AZURE_SERVICEBUS_CLIENTID | Your client ID | `{yourClientID}` |
+
+#### Service principal
+
+| Default environment variable name | Description | Sample value |
+| --| | -- |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| AZURE_SERVICEBUS_CLIENTID | Your client ID | `{yourClientID}` |
+| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
+| AZURE_SERVICEBUS_TENANTID | Your tenant ID | `{yourTenantID}` |
+
+### Java - Spring Boot
+
+#### Spring Boot secret/connection string
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | -- | -- | |
+> | spring.cloud.azure.servicebus.connection-string | Service Bus connection string | `Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=***` |
+
+#### Spring Boot system-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| | | - |
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+
+#### Spring Boot user-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| | | - |
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+
+#### Spring Boot service principal
+
+| Default environment variable name | Description | Sample value |
+| | | - |
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+| spring.cloud.azure.tenant-id | Your tenant ID | `{yourTenantID}` |
+| spring.cloud.azure.client-secret | Your client secret | `******` |
+
+## Next step
+
+Follow the tutorial listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-health Alerts Activity Log Service Notifications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-portal.md
Title: Receive activity log alerts on Azure service notifications using Azure portal description: Learn how to use the Azure portal to set up activity log alerts for service health notifications by using the Azure portal.-+ Last updated 06/27/2019
spatial-anchors Tutorial New Unity Hololens App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-new-unity-hololens-app.md
Last updated 2/3/2021
-<!--
-
-[X] Make sure you have internet on your Hl2
-[X] User Debug - ARM64 - Device, Debug -> Start Debugging to see logs
-[X] "After closing session you could have a different device on a different day (depending on your anchor expiration), as long as you still have the IDs"
-[X] Tapping will be middle of the hand, not your fingers
-[X] Using Legacy shader since it's included in a default Unity build. Default shaders are only included if part of the scene.
>- # Tutorial: Step-by-step instructions to create a new HoloLens Unity app using Azure Spatial Anchors This tutorial will show you how to create a new HoloLens Unity app with Azure Spatial Anchors.
storage Create Data Lake Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/create-data-lake-storage-account.md
The following image shows this setting in the **Create storage account** page.
To enable Data Lake Storage capabilities on an existing account, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md).
-> [!NOTE]
-> **Data protection** and hierarchical namespace can't be enabled simultaneously.
- ## Next steps - [Storage account overview](../common/storage-account-overview.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
You can read account-level metric values of your storage account or the Blob sto
```powershell $resourceId = "<resource-ID>"
-
-
-
-
-
- $resourceId -MetricName "UsedCapacity" -TimeGrain 01:00:00
+ Get-AzMetric -ResourceId $resourceId -MetricName "UsedCapacity" -TimeGrain 01:00:00
``` #### Reading metric values with dimensions
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-javascript-browser.md
Title: "Quickstart: Azure Blob storage library v12 - JavaScript in a browser"
-description: In this quickstart, you learn how to use the Azure Blob storage client library version 12 for JavaScript in a browser. You create a container and an object in Blob storage. Next, you learn how to list all of the blobs in a container. Finally, you learn how to delete blobs and delete a container.
+ Title: "Quickstart: Azure Blob storage library v12 - JS Browser"
+description: In this quickstart, you learn how to use the Azure Blob storage npm client library version 12 for JavaScript in a browser. You create a container and an object in Blob storage. Next, you learn how to list all of the blobs in a container. Finally, you learn how to delete blobs and delete a container.
Previously updated : 07/24/2020 Last updated : 02/25/2022
Azure Blob storage is optimized for storing large amounts of unstructured data. Blobs are objects that can hold text or binary data, including images, documents, streaming media, and archive data. In this quickstart, you learn to manage blobs by using JavaScript in a browser. You'll upload and list blobs, and you'll create and delete containers.
+The [**example code**](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser) shows you how to accomplish the following tasks with the Azure Blob storage client library for JavaScript:
+
+- [Declare fields for UI elements](#declare-fields-for-ui-elements)
+- [Add your storage account info](#add-your-storage-account-info)
+- [Create client objects](#create-client-objects)
+- [Create and delete a storage container](#create-and-delete-a-storage-container)
+- [List blobs](#list-blobs)
+- [Upload blobs](#upload-blobs-to-a-container)
+- [Delete blobs](#delete-blobs)
+ Additional resources: -- [API reference documentation](/javascript/api/@azure/storage-blob)-- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)-- [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)-- [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference](/javascript/api/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
## Prerequisites - [An Azure account with an active subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) - [An Azure Storage account](../common/storage-account-create.md)-- [Node.js](https://nodejs.org)
+- [Node.js LTS](https://nodejs.org/en/download/)
- [Microsoft Visual Studio Code](https://code.visualstudio.com)-- A Visual Studio Code extension for browser debugging, such as:
- - [Debugger for Microsoft Edge](https://devblogs.microsoft.com/visualstudio/debug-javascript-in-microsoft-edge-from-visual-studio/)
- - [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome)
- - [Debugger for Firefox](https://marketplace.visualstudio.com/items?itemName=firefox-devtools.vscode-firefox-debug)
+ ## Object model
In this quickstart, you'll use the following JavaScript classes to interact with
- [ContainerClient](/javascript/api/@azure/storage-blob/containerclient): The `ContainerClient` class allows you to manipulate Azure Storage containers and their blobs.
- [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient): The `BlockBlobClient` class allows you to manipulate Azure Storage blobs.
-## Setting up
+## Configure storage account for browser access
-This section walks you through preparing a project to work with the Azure Blob storage client library v12 for JavaScript.
+To programmatically access your storage account from a web browser, you need to configure CORS access and create a SAS connection string.
### Create a CORS rule Before your web application can access blob storage from the client, you must configure your account to enable [cross-origin resource sharing](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services), or CORS.
-In the Azure portal, select your storage account. To define a new CORS rule, navigate to the **Settings** section and select **CORS**. For this quickstart, you create an open CORS rule:
+In the Azure portal, select your storage account. To define a new CORS rule, navigate to the **Settings** section and select **CORS**. For this quickstart, you create a fully open CORS rule:
![Azure Blob Storage Account CORS settings](media/quickstart-blobs-javascript-browser/azure-blob-storage-cors-settings.png)
The following table describes each CORS setting and explains the values used to
| **EXPOSED HEADERS** | * | Lists the allowed response headers by the account. Setting the value to `*` allows the account to send any header. | | **MAX AGE** | **86400** | The maximum amount of time the browser caches the preflight OPTIONS request in seconds. A value of *86400* allows the cache to remain for a full day. |
-After you fill in the fields with the values from this table, click the **Save** button.
+After you fill in the fields with the values from this table, select the **Save** button.
> [!IMPORTANT] > Ensure any settings you use in production expose the minimum amount of access necessary to your storage account to maintain secure access. The CORS settings described here are appropriate for a quickstart because they define a lenient security policy. These settings, however, are not recommended for a real-world context.
-### Create a shared access signature
+### Create a SAS connection string
The shared access signature (SAS) is used by code running in the browser to authorize Azure Blob storage requests. By using the SAS, the client can authorize access to storage resources without the account access key or connection string. For more information on SAS, see [Using shared access signatures (SAS)](../common/storage-sas-overview.md). Follow these steps to get the Blob service SAS URL: 1. In the Azure portal, select your storage account.
-2. Navigate to the **Security + networking** section and select **Shared access signature**.
-3. Scroll down and click the **Generate SAS and connection string** button.
-4. Scroll down further and locate the **Blob service SAS URL** field
-5. Click the **Copy to clipboard** button at the far-right end of the **Blob service SAS URL** field.
-6. Save the copied URL somewhere for use in an upcoming step.
+1. Navigate to the **Security + networking** section and select **Shared access signature**.
+1. Review the **Allowed services** to understand that the SAS token will have access to all of your storage account services:
+ * Blob
+ * File
+ * Queue
+ * Table
+1. Select the **Allowed resource types** to include:
+ * Service
+ * Container
+ * Object
+1. Review the **Start and expiry date/time** to understand the SAS token has a limited lifetime by default.
+1. Scroll down and select the **Generate SAS and connection string** button.
+1. Scroll down further and locate the **Blob service SAS URL** field.
+1. Select the **Copy to clipboard** button at the far-right end of the **Blob service SAS URL** field.
+1. Save the copied URL somewhere for use in an upcoming step.
-### Add the Azure Blob storage client library
+## Create the JavaScript project
-On your local computer, create a new folder called *azure-blobs-js-browser* and open it in Visual Studio Code.
+Create a JavaScript application named *blob-quickstart-v12*.
-Select **View > Terminal** to open a console window inside Visual Studio Code. Run the following Node.js Package Manager (npm) command in the terminal window to create a [package.json](https://docs.npmjs.com/files/package.json) file.
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project.
-```console
-npm init -y
-```
+ ```console
+ mkdir blob-quickstart-v12
+ ```
-The Azure SDK is composed of many separate packages. You can choose which packages you need based on the services you intend to use. Run following `npm` command in the terminal window to install the `@azure/storage-blob` package.
+1. Switch to the newly created *blob-quickstart-v12* directory.
-```console
-npm install --save @azure/storage-blob
-```
+ ```console
+ cd blob-quickstart-v12
+ ```
-#### Bundle the Azure Blob storage client library
+1. Create a *package.json*.
-To use Azure SDK libraries on a website, convert your code to work inside the browser. You do this using a tool called a bundler. Bundling takes JavaScript code written using [Node.js](https://nodejs.org) conventions and converts it into a format that's understood by browsers. This quickstart article uses the [Parcel](https://parceljs.org/) bundler.
+ ```console
+ npm init -y
+ ```
-Install Parcel by running the following `npm` command in the terminal window:
+1. Open the project in Visual Studio Code:
-```console
-npm install -g parcel-bundler
-```
+ ```console
+ code .
+ ```
-In Visual Studio Code, open the *package.json* file and add a `browserlist` between the `license` and `dependencies` entries. This `browserlist` targets the latest version of three popular browsers. The full *package.json* file should now look like this:
+## Install the npm package for blob storage
+1. In a Visual Studio Code terminal, install the Azure Storage npm package:
-Save the *package.json* file.
+ ```console
+ npm install @azure/storage-blob
+ ```
-### Import the Azure Blob storage client library
+1. Install a bundler package to bundle the files and package for the browser:
-To use Azure SDK libraries inside JavaScript, import the `@azure/storage-blob` package. Create a new file in Visual Studio Code containing the following JavaScript code.
+ ```console
+ npm install parcel
+ ```
+ If you plan to use a different bundler, learn more about [bundling the Azure SDK](https://github.com/Azure/azure-sdk-for-js/blob/main/documentation/Bundling.md).
-Save the file as *index.js* in the *azure-blobs-js-browser* directory.
+## Configure browser bundling
-### Implement the HTML page
-Create a new file in Visual Studio Code and add the following HTML code.
+1. In Visual Studio Code, open the *package.json* file and add a `browserslist` entry. This `browserslist` targets the latest version of popular browsers. The full *package.json* file should now look like this:
+ :::code language="json" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/package.json" range="13-19":::
-Save the file as *index.html* in the *azure-blobs-js-browser* folder.
+1. Add a **start** script to bundle the website:
-## Code examples
+ :::code language="json" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/package.json" range="9-11":::
-The example code shows you how to accomplish the following tasks with the Azure Blob storage client library for JavaScript:
+## Create the HTML file
- - [Declare fields for UI elements](#declare-fields-for-ui-elements)
- - [Add your storage account info](#add-your-storage-account-info)
- - [Create client objects](#create-client-objects)
- - [Create and delete a storage container](#create-and-delete-a-storage-container)
- - [List blobs](#list-blobs)
- - [Upload blobs](#upload-blobs)
- - [Delete blobs](#delete-blobs)
+1. Create `index.html` and add the following HTML code:
-You'll run the code after you add all the snippets to the *index.js* file.
+ :::code language="html" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.html":::
-### Declare fields for UI elements
+## Create the JavaScript file
-Add the following code to the end of the *index.js* file.
+From the project directory:
+1. Create a new file named `index.js`.
+1. Add the Azure Storage npm package.
-Save the *index.js* file.
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" range="19":::
-This code declares fields for each HTML element and implements a `reportStatus` function to display output.
+## Declare fields for UI elements
-In the following sections, add each new block of JavaScript code after the previous block.
+Add DOM elements for user interaction:
-### Add your storage account info
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_DeclareVariables":::
-Add code to access your storage account. Replace the placeholder with your Blob service SAS URL that you generated earlier. Add the following code to the end of the *index.js* file.
+ This code declares fields for each HTML element and implements a `reportStatus` function to display output.
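As a rough sketch of what such declarations could look like (the element IDs and the `reportStatus` helper shown here are assumptions for illustration; the referenced snippet is authoritative):

```javascript
// Hypothetical element IDs; the quickstart's index.html defines the real ones.
const createContainerButton = document.getElementById("create-container-button");
const deleteContainerButton = document.getElementById("delete-container-button");
const selectButton = document.getElementById("select-button");
const listFilesButton = document.getElementById("list-files-button");
const deleteFilesButton = document.getElementById("delete-files-button");
const fileInput = document.getElementById("file-input");
const status = document.getElementById("status");
const fileList = document.getElementById("file-list");

// Append each message to the status pane so the user can follow progress.
const reportStatus = (message) => {
  status.innerHTML += `${message}<br/>`;
  status.scrollTop = status.scrollHeight;
};
```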
-Save the *index.js* file.
+## Add your storage account info
-### Create client objects
+Add the following code to the end of the *index.js* file to access your storage account. Replace the `<placeholder>` with the Blob service SAS URL that you generated earlier.
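A minimal sketch of that step (the URL shown is only a placeholder, not a working SAS):

```javascript
// Paste the Blob service SAS URL copied from the portal between the quotes.
const blobSasUrl = "<blob-service-SAS-URL>";
```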
-Create [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) and [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) objects for interacting with the Azure Blob storage service. Add the following code to the end of the *index.js* file.
+## Create client objects
-Save the *index.js* file.
+Create [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) and [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) objects to connect to your storage account. Add the following code to the end of the *index.js* file.
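A minimal sketch of those client objects, assuming the `blobSasUrl` constant and the package import from the previous steps (the container name here is only an example):

```javascript
// Create a client for the Blob service endpoint; the SAS in the URL authorizes the calls.
const blobServiceClient = new BlobServiceClient(blobSasUrl);

// Derive a container client from the service client.
const containerName = "quickstart-container";
const containerClient = blobServiceClient.getContainerClient(containerName);
```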
-### Create and delete a storage container
-Create and delete the storage container when you click the corresponding button on the web page. Add the following code to the end of the *index.js* file.
+## Create and delete a storage container
+Create and delete the storage container when you select the corresponding button on the web page. Add the following code to the end of the *index.js* file.
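A sketch of what those handlers might look like, under the same assumptions as the earlier sketches (button fields, `reportStatus`, and `containerClient`); error handling is reduced to reporting the message:

```javascript
const createContainer = async () => {
  try {
    reportStatus(`Creating container "${containerClient.containerName}"...`);
    await containerClient.create();
    reportStatus("Done.");
  } catch (error) {
    reportStatus(error.message);
  }
};

const deleteContainer = async () => {
  try {
    reportStatus(`Deleting container "${containerClient.containerName}"...`);
    await containerClient.delete();
    reportStatus("Done.");
  } catch (error) {
    reportStatus(error.message);
  }
};

createContainerButton.addEventListener("click", createContainer);
deleteContainerButton.addEventListener("click", deleteContainer);
```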
-Save the *index.js* file.
-### List blobs
+## List blobs
-List the contents of the storage container when you click the **List files** button. Add the following code to the end of the *index.js* file.
+List the contents of the storage container when you select the **List files** button. Add the following code to the end of the *index.js* file.
:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_ListBlobs":::
-Save the *index.js* file.
- This code calls the [ContainerClient.listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#listblobsflat-containerlistblobsoptions-) function, then uses an iterator to retrieve the name of each [BlobItem](/javascript/api/@azure/storage-blob/blobitem) returned. For each `BlobItem`, it updates the **Files** list with the [name](/javascript/api/@azure/storage-blob/blobitem#name) property value.
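For reference, a sketch of such a listing loop under the same assumptions (the `fileList` element and `containerClient` from the earlier sketches):

```javascript
const listFiles = async () => {
  fileList.innerHTML = "";
  try {
    reportStatus("Listing files...");
    // listBlobsFlat returns an async iterator of BlobItem objects.
    for await (const blobItem of containerClient.listBlobsFlat()) {
      fileList.innerHTML += `<option>${blobItem.name}</option>`;
    }
    reportStatus("Done.");
  } catch (error) {
    reportStatus(error.message);
  }
};

listFilesButton.addEventListener("click", listFiles);
```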
-### Upload blobs
+## Upload blobs to a container
-Upload files to the storage container when you click the **Select and upload files** button. Add the following code to the end of the *index.js* file.
+Upload files to the storage container when you select the **Select and upload files** button. Add the following code to the end of the *index.js* file.
:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_UploadBlobs":::
-Save the *index.js* file.
- This code connects the **Select and upload files** button to the hidden `file-input` element. The button `click` event triggers the file input `click` event and displays the file picker. After you select files and close the dialog box, the `input` event occurs and the `uploadFiles` function is called. This function creates a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object, then calls the browser-only [uploadBrowserData](/javascript/api/@azure/storage-blob/blockblobclient#uploadbrowserdata-blobarraybufferarraybufferview--blockblobparalleluploadoptions-) function for each file you selected. Each call returns a `Promise`. Each `Promise` is added to a list so that they can all be awaited together, causing the files to upload in parallel.
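A compact sketch of that flow with the same assumed fields; each `uploadBrowserData` call returns a `Promise`, and `Promise.all` awaits them together so the uploads run in parallel:

```javascript
const uploadFiles = async () => {
  try {
    reportStatus("Uploading files...");
    // Start one upload per selected file and await them all in parallel.
    const promises = [];
    for (const file of fileInput.files) {
      const blockBlobClient = containerClient.getBlockBlobClient(file.name);
      promises.push(blockBlobClient.uploadBrowserData(file));
    }
    await Promise.all(promises);
    reportStatus("Done.");
    listFiles();
  } catch (error) {
    reportStatus(error.message);
  }
};

// The visible button forwards its click to the hidden file input,
// and selecting files fires the input event that starts the upload.
selectButton.addEventListener("click", () => fileInput.click());
fileInput.addEventListener("input", uploadFiles);
```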
-### Delete blobs
+## Delete blobs
-Delete files from the storage container when you click the **Delete selected files** button. Add the following code to the end of the *index.js* file.
+Delete files from the storage container when you select the **Delete selected files** button. Add the following code to the end of the *index.js* file.
:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_DeleteBlobs":::
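Alongside the referenced snippet, here's a rough sketch of the delete flow under the same assumptions as the earlier sketches; the explanation that follows describes the authoritative snippet:

```javascript
const deleteFiles = async () => {
  try {
    if (fileList.selectedOptions.length > 0) {
      reportStatus("Deleting files...");
      // Delete each blob whose name is selected in the Files list.
      for (const option of Array.from(fileList.selectedOptions)) {
        await containerClient.deleteBlob(option.text);
      }
      reportStatus("Done.");
      // Refresh the Files list so the deletions are visible.
      listFiles();
    } else {
      reportStatus("No files selected.");
    }
  } catch (error) {
    reportStatus(error.message);
  }
};

deleteFilesButton.addEventListener("click", deleteFiles);
```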
-Save the *index.js* file.
- This code calls the [ContainerClient.deleteBlob](/javascript/api/@azure/storage-blob/containerclient#deleteblob-string--blobdeleteoptions-) function to remove each file selected in the list. It then calls the `listFiles` function shown earlier to refresh the contents of the **Files** list.

## Run the code
-To run the code inside the Visual Studio Code debugger, configure the *launch.json* file for your browser.
-
-### Configure the debugger
-
-To set up the debugger extension in Visual Studio Code:
-
-1. Select **Run > Add Configuration**
-2. Select **Edge**, **Chrome**, or **Firefox**, depending on which extension you installed in the [Prerequisites](#prerequisites) section earlier.
-
-Adding a new configuration creates a *launch.json* file and opens it in the editor. Modify the *launch.json* file so that the `url` value is `http://localhost:1234/index.html`, as shown here:
+1. From a Visual Studio Code terminal, run the app.
+ ```console
+ npm start
+ ```
-After updating, save the *launch.json* file. This configuration tells Visual Studio Code which browser to open and which URL to load.
+ This process bundles the files and starts a web server.
-### Launch the web server
+1. Access the website with a browser using the following URL:
-To launch the local development web server, select **View > Terminal** to open a console window inside Visual Studio Code, then enter the following command.
+ ```HTTP
+ http://localhost:1234
+ ```
-```console
-parcel index.html
-```
-
-Parcel bundles your code and starts a local development server for your page at `http://localhost:1234/index.html`. Changes you make to *index.js* will automatically be built and reflected on the development server whenever you save the file.
-
-If you receive a message that says **configured port 1234 could not be used**, you can change the port by running the command `parcel -p <port#> index.html`. In the *launch.json* file, update the port in the URL path to match.
-
-### Start debugging
-
-Run the page in the debugger and get a feel for how blob storage works. If any errors occur, the **Status** pane on the web page will display the error message received.
-
-To open *index.html* in the browser with the Visual Studio Code debugger attached, select **Run > Start Debugging** or press F5 in Visual Studio Code.
-
-### Use the web app
-
-In the [Azure portal](https://portal.azure.com), you can verify the results of the API calls as you follow the steps below.
-
-#### Step 1 - Create a container
+## Step 1 - Create a container
1. In the web app, select **Create container**. The status indicates that a container was created.
-2. To verify in the Azure portal, select your storage account. Under **Blob service**, select **Containers**. Verify that the new container appears. (You may need to select **Refresh**.)
+2. In the Azure portal, verify your container was created. Select your storage account. Under **Blob service**, select **Containers**. Verify that the new container appears. (You may need to select **Refresh**.)
-#### Step 2 - Upload a blob to the container
+## Step 2 - Upload a blob to the container
1. On your local computer, create and save a test file, such as *test.txt*.
-2. In the web app, click **Select and upload files**.
+2. In the web app, select **Select and upload files**.
3. Browse to your test file, and then select **Open**. The status indicates that the file was uploaded, and the file list was retrieved. 4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file appears.
-#### Step 3 - Delete the blob
+## Step 3 - Delete the blob
1. In the web app, under **Files**, select the test file. 2. Select **Delete selected files**. The status indicates that the file was deleted and that the container contains no files. 3. In the Azure portal, select **Refresh**. Verify that you see **No blobs found**.
-#### Step 4 - Delete the container
+## Step 4 - Delete the container
1. In the web app, select **Delete container**. The status indicates that the container was deleted. 2. In the Azure portal, select the **\<account-name\> | Containers** link at the top-left of the portal pane. 3. Select **Refresh**. The new container disappears. 4. Close the web app.
-### Clean up resources
+## Use the storage emulator
+
+This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](/azure/storage/common/storage-use-emulator) for development and testing.
-Click on the **Terminal** console in Visual Studio Code and press CTRL+C to stop the web server.
+## Clean up resources
-To clean up the resources created during this quickstart, go to the [Azure portal](https://portal.azure.com) and delete the resource group you created in the [Prerequisites](#prerequisites) section.
+1. When you're done with this quickstart, delete the `blob-quickstart-v12` directory.
+1. If you're done using your Azure Storage resource, remove your resource group using either method:
+ * Use the [Azure CLI to remove the Storage resource](storage-quickstart-blobs-cli.md#clean-up-resources)
+ * Use the [Azure portal to remove the resource](storage-quickstart-blobs-portal.md#clean-up-resources).
## Next steps
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
Title: "Quickstart: Azure Blob storage library v12 - JavaScript"
-description: In this quickstart, you learn how to use the Azure Blob storage client library version 12 for JavaScript to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
+description: In this quickstart, you learn how to use the Azure Blob storage npm package version 12 for JavaScript to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 09/17/2020 Last updated : 02/25/2022
# Quickstart: Manage blobs with JavaScript v12 SDK in Node.js
-In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and list blobs, and you'll create and delete containers.
+In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data.
+
+These [**example code**](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/quickstarts/JavaScript/V12/nodejs) snippets show you how to perform the following tasks with the Azure Blob storage client library for JavaScript:
+
+- [Get the connection string](#get-the-connection-string)
+- [Create a container](#create-a-container)
+- [Upload blobs to a container](#upload-blobs-to-a-container)
+- [List the blobs in a container](#list-the-blobs-in-a-container)
+- [Download blobs](#download-blobs)
+- [Delete a container](#delete-a-container)
Additional resources:

- - [API reference documentation](/javascript/api/@azure/storage-blob)
- - [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)
- - [Package (Node Package Manager)](https://www.npmjs.com/package/@azure/storage-blob)
- - [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference](/javascript/api/@azure/storage-blob) |
+[Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).
- - [Node.js](https://nodejs.org/en/download/).
+- [Node.js LTS](https://nodejs.org/en/download/).
+- [Microsoft Visual Studio Code](https://code.visualstudio.com)
-## Setting up
+## Object model
-This section walks you through preparing a project to work with the Azure Blob storage client library v12 for JavaScript.
+Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
+
+- The storage account
+- A container in the storage account
+- A blob in the container
+
+The following diagram shows the relationship between these resources.
+
+![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
+
+Use the following JavaScript classes to interact with these resources:
+
+- [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient): The `BlobServiceClient` class allows you to manipulate Azure Storage resources and blob containers.
+- [ContainerClient](/javascript/api/@azure/storage-blob/containerclient): The `ContainerClient` class allows you to manipulate Azure Storage containers and their blobs.
+- [BlobClient](/javascript/api/@azure/storage-blob/blobclient): The `BlobClient` class allows you to manipulate Azure Storage blobs.
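As a rough illustration of how these three classes relate (the connection string and names below are placeholders):

```javascript
const { BlobServiceClient } = require("@azure/storage-blob");

// Account-level client, created from a storage account connection string.
const blobServiceClient = BlobServiceClient.fromConnectionString("<connection-string>");

// Container-level client derived from the service client.
const containerClient = blobServiceClient.getContainerClient("example-container");

// Blob-level client derived from the container client.
const blobClient = containerClient.getBlobClient("example.txt");
```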
-### Create the project
+## Create the Node.js project
Create a JavaScript application named *blob-quickstart-v12*.
Create a JavaScript application named *blob-quickstart-v12*.
cd blob-quickstart-v12 ```
-1. Create a new text file called *package.json*. This file defines the Node.js project. Save this file in the *blob-quickstart-v12* directory. Here is the contents of the file:
-
- ```json
- {
- "name": "blob-quickstart-v12",
- "version": "1.0.0",
- "description": "Use the @azure/storage-blob SDK version 12 to interact with Azure Blob storage",
- "main": "blob-quickstart-v12.js",
- "scripts": {
- "start": "node blob-quickstart-v12.js"
- },
- "author": "Your Name",
- "license": "MIT",
- "dependencies": {
- "@azure/storage-blob": "^12.0.0",
- "@types/dotenv": "^4.0.3",
- "dotenv": "^6.0.0"
- }
- }
+1. Create a *package.json*.
+
+ ```console
+ npm init -y
```
- You can put your own name in for the `author` field, if you'd like.
+1. Open the project in Visual Studio Code:
-### Install the package
+ ```console
+ code .
+ ```
-While still in the *blob-quickstart-v12* directory, install the Azure Blob storage client library for JavaScript package by using the `npm install` command. This command reads the *package.json* file and installs the Azure Blob storage client library v12 for JavaScript package and all the libraries on which it depends.
+## Install the npm package for blob storage
-```console
-npm install
-```
+1. Install the Azure Storage npm package:
-### Set up the app framework
+ ```console
+ npm install @azure/storage-blob
+ ```
+
+1. Install other dependencies used in this quickstart:
-From the project directory:
+ ```console
+ npm install uuid dotenv
+ ```
+
+## Create JavaScript file
-1. Open another new text file in your code editor
-1. Add `require` calls to load Azure and Node.js modules
-1. Create the structure for the program, including basic exception handling
+From the project directory:
- Here's the code:
+1. Create a new file named `index.js`.
+1. Copy the following code into the file. More code will be added as you go through this quickstart.
    ```javascript
    const { BlobServiceClient } = require('@azure/storage-blob');
    const { v1: uuidv1} = require('uuid');
+   require('dotenv').config()
+
    async function main() {
      console.log('Azure Blob storage v12 - JavaScript quickstart sample');
+     // Quick start code goes here
+   }
- main().then(() => console.log('Done')).catch((ex) => console.log(ex.message));
+ main()
+ .then(() => console.log('Done'))
+ .catch((ex) => console.log(ex.message));
```
-1. Save the new file as *blob-quickstart-v12.js* in the *blob-quickstart-v12* directory.
- [!INCLUDE [storage-quickstart-credentials-include](../../../includes/storage-quickstart-credentials-include.md)]
-## Object model
-
-Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
--- The storage account-- A container in the storage account-- A blob in the container-
-The following diagram shows the relationship between these resources.
-
-![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
-
-Use the following JavaScript classes to interact with these resources:
--- [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient): The `BlobServiceClient` class allows you to manipulate Azure Storage resources and blob containers.-- [ContainerClient](/javascript/api/@azure/storage-blob/containerclient): The `ContainerClient` class allows you to manipulate Azure Storage containers and their blobs.-- [BlobClient](/javascript/api/@azure/storage-blob/blobclient): The `BlobClient` class allows you to manipulate Azure Storage blobs.-
-## Code examples
-
-These example code snippets show you how to perform the following with the Azure Blob storage client library for JavaScript:
- - [Get the connection string](#get-the-connection-string)
- - [Create a container](#create-a-container)
- - [Upload blobs to a container](#upload-blobs-to-a-container)
- - [List the blobs in a container](#list-the-blobs-in-a-container)
- - [Download blobs](#download-blobs)
- - [Delete a container](#delete-a-container)
-### Get the connection string
+## Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in the [Configure your storage connection string](#configure-your-storage-connection-string) section. Add this code inside the `main` function:
-```javascript
-// Retrieve the connection string for use with the application. The storage
-// connection string is stored in an environment variable on the machine
-// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
-// environment variable is created after the application is launched in a
-// console or with Visual Studio, the shell or application needs to be closed
-// and reloaded to take the environment variable into account.
-const AZURE_STORAGE_CONNECTION_STRING = process.env.AZURE_STORAGE_CONNECTION_STRING;
-```
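With the `dotenv` package installed earlier, the retrieval might look like this sketch; the environment variable name matches the removed snippet above, and keeping it in a `.env` file is an assumption:

```javascript
// A .env file (kept out of source control) is assumed to contain:
// AZURE_STORAGE_CONNECTION_STRING="<your-connection-string>"
const AZURE_STORAGE_CONNECTION_STRING = process.env.AZURE_STORAGE_CONNECTION_STRING;

if (!AZURE_STORAGE_CONNECTION_STRING) {
  throw Error("Azure Storage connection string not found");
}
```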
-
-### Create a container
-
-Decide on a name for the new container. The code below appends a UUID value to the container name to ensure that it is unique.
-
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-Create an instance of the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class by calling the [fromConnectionString](/javascript/api/@azure/storage-blob/blobserviceclient#fromconnectionstring-string--storagepipelineoptions-) method. Then, call the [getContainerClient](/javascript/api/@azure/storage-blob/blobserviceclient#getcontainerclient-string-) method to get a reference to a container. Finally, call [create](/javascript/api/@azure/storage-blob/containerclient#create-containercreateoptions-) to actually create the container in your storage account.
-
-Add this code to the end of the `main` function:
-
-```javascript
-// Create the BlobServiceClient object which will be used to create a container client
-const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
-
-// Create a unique name for the container
-const containerName = 'quickstart' + uuidv1();
-
-console.log('\nCreating container...');
-console.log('\t', containerName);
-// Get a reference to a container
-const containerClient = blobServiceClient.getContainerClient(containerName);
+## Create a container
-// Create the container
-const createContainerResponse = await containerClient.create();
-console.log("Container was created successfully. requestId: ", createContainerResponse.requestId);
-```
+1. Decide on a name for the new container. Container names must be lowercase.
-### Upload blobs to a container
+ For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-The following code snippet:
+1. Add this code to the end of the `main` function:
-1. Creates a text string to upload to a blob.
-1. Gets a reference to a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object by calling the [getBlockBlobClient](/javascript/api/@azure/storage-blob/containerclient#getblockblobclient-string-) method on the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) from the [Create a container](#create-a-container) section.
-1. Uploads the text string data to the blob by calling the [upload](/javascript/api/@azure/storage-blob/blockblobclient#upload-httprequestbody--number--blockblobuploadoptions-) method.
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/nodejs/index.js" id="snippet_CreateContainer":::
-Add this code to the end of the `main` function:
+ The preceding code creates an instance of the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) class by calling the [fromConnectionString](/javascript/api/@azure/storage-blob/blobserviceclient#fromconnectionstring-string--storagepipelineoptions-) method. Then, call the [getContainerClient](/javascript/api/@azure/storage-blob/blobserviceclient#getcontainerclient-string-) method to get a reference to a container. Finally, call [create](/javascript/api/@azure/storage-blob/containerclient#create-containercreateoptions-) to actually create the container in your storage account.
-```javascript
-// Create a unique name for the blob
-const blobName = 'quickstart' + uuidv1() + '.txt';
+## Upload blobs to a container
-// Get a block blob client
-const blockBlobClient = containerClient.getBlockBlobClient(blobName);
+Copy the following code to the end of the `main` function to upload a text string to a blob:
-console.log('\nUploading to Azure storage as blob:\n\t', blobName);
-// Upload data to the blob
-const data = 'Hello, World!';
-const uploadBlobResponse = await blockBlobClient.upload(data, data.length);
-console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
-```
+The preceding code gets a reference to a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object by calling the [getBlockBlobClient](/javascript/api/@azure/storage-blob/containerclient#getblockblobclient-string-) method on the [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) from the [Create a container](#create-a-container) section.
+The code uploads the text string data to the blob by calling the [upload](/javascript/api/@azure/storage-blob/blockblobclient#upload-httprequestbody--number--blockblobuploadoptions-) method.
-### List the blobs in a container
+## List the blobs in a container
-List the blobs in the container by calling the [listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#listblobsflat-containerlistblobsoptions-) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
+Add the following code to the end of the `main` function to list the blobs in the container.
-Add this code to the end of the `main` function:
-```javascript
-console.log('\nListing blobs...');
+The preceding code calls the [listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#listblobsflat-containerlistblobsoptions-) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
-// List the blob(s) in the container.
-for await (const blob of containerClient.listBlobsFlat()) {
- console.log('\t', blob.name);
-}
-```
+## Download blobs
-### Download blobs
+1. Add the following code to the end of the `main` function to download the previously created blob into the app runtime.
-Download the previously created blob by calling the [download](/javascript/api/@azure/storage-blob/blockblobclient#download-undefinednumber--undefinednumber--blobdownloadoptions-) method. The example code includes a helper function called `streamToString`, which is used to read a Node.js readable stream into a string.
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/nodejs/index.js" id="snippet_DownloadBlobs":::
-Add this code to the end of the `main` function:
+ The preceding code calls the [download](/javascript/api/@azure/storage-blob/blockblobclient#download-undefinednumber--undefinednumber--blobdownloadoptions-) method.
-```javascript
-// Get blob content from position 0 to the end
-// In Node.js, get downloaded data by accessing downloadBlockBlobResponse.readableStreamBody
-// In browsers, get downloaded data by accessing downloadBlockBlobResponse.blobBody
-const downloadBlockBlobResponse = await blockBlobClient.download(0);
-console.log('\nDownloaded blob content...');
-console.log('\t', await streamToString(downloadBlockBlobResponse.readableStreamBody));
-```
+2. Copy the following code *after* the `main` function to convert a stream back into a string.
-Add this helper function *after* the `main` function:
+ :::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/nodejs/index.js" id="snippet_ConvertStreamToText":::
-```javascript
-// A helper function used to read a Node.js readable stream into a string
-async function streamToString(readableStream) {
- return new Promise((resolve, reject) => {
- const chunks = [];
- readableStream.on("data", (data) => {
- chunks.push(data.toString());
- });
- readableStream.on("end", () => {
- resolve(chunks.join(""));
- });
- readableStream.on("error", reject);
- });
-}
-```
+## Delete a container
-### Delete a container
+Add this code to the end of the `main` function to delete the container and all its blobs:
-The following code cleans up the resources the app created by removing the entire container using the [delete](/javascript/api/@azure/storage-blob/containerclient#delete-containerdeletemethodoptions-) method. You can also delete the local files, if you like.
-Add this code to the end of the `main` function:
-
-```javascript
-console.log('\nDeleting container...');
-
-// Delete container
-const deleteContainerResponse = await containerClient.delete();
-console.log("Container was deleted successfully. requestId: ", deleteContainerResponse.requestId);
-```
+The preceding code cleans up the resources the app created by removing the entire container using the [delete](/javascript/api/@azure/storage-blob/containerclient#delete-containerdeletemethodoptions-) method. You can also delete the local files, if you like.
## Run the code
-This app creates a text string and uploads it to Blob storage. The example then lists the blob(s) in the container, downloads the blob, and displays the downloaded data.
-
-From a console prompt, navigate to the directory containing the *blob-quickstart-v12.js* file, then execute the following `node` command to run the app.
-
-```console
-node blob-quickstart-v12.js
-```
-
-The output of the app is similar to the following example:
+1. From a Visual Studio Code terminal, run the app.
-```output
-Azure Blob storage v12 - JavaScript quickstart sample
+ ```console
+ node index.js
+ ```
-Creating container...
- quickstart4a0780c0-fb72-11e9-b7b9-b387d3c488da
+2. The output of the app is similar to the following example:
+
+ ```output
+ Azure Blob storage v12 - JavaScript quickstart sample
+
+ Creating container...
+ quickstart4a0780c0-fb72-11e9-b7b9-b387d3c488da
+
+ Uploading to Azure Storage as blob:
+ quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt
+
+ Listing blobs...
+ quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt
+
+ Downloaded blob content...
+ Hello, World!
+
+ Deleting container...
+ Done
+ ```
-Uploading to Azure Storage as blob:
- quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt
+Step through the code in your debugger and check your [Azure portal](https://portal.azure.com) throughout the process. Check to see that the container is being created. You can open the blob inside the container and view the contents.
-Listing blobs...
- quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt
+## Use the storage emulator
-Downloaded blob content...
- Hello, World!
+This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](/azure/storage/common/storage-use-emulator) for development and testing.
-Deleting container...
-Done
-```
+## Clean up
-Step through the code in your debugger and check your [Azure portal](https://portal.azure.com) throughout the process. Check to see that the container is being created. You can open the blob inside the container and view the contents.
+1. When you're done with this quickstart, delete the `blob-quickstart-v12` directory.
+1. If you're done using your Azure Storage resource, use the [Azure CLI to remove the Storage resource](storage-quickstart-blobs-cli.md#clean-up-resources).
## Next steps
For tutorials, samples, quickstarts, and other documentation, visit:
> [Azure for JavaScript developer center](/azure/developer/javascript/) - To learn how to deploy a web app that uses Azure Blob storage, see [Tutorial: Upload image data in the cloud with Azure Storage](./storage-upload-process-images.md?preserve-view=true&tabs=javascript)-- To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
+- To see Blob storage sample apps, continue to [Azure Blob storage package library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault.md
When you configure customer-managed keys with the Azure portal, you can select a
To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You will need these values in subsequent steps: ```azurepowershell
-$userIdentityId = (Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>).Id
+$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
$principalId = $userIdentity.PrincipalId ```
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-json-files.md
The JSON document in the preceding sample query includes an array of objects. Th
### Data source usage
-Previous example uses full path to the file. As an alternative, you can create an external data source with the location that points to the root folder of the storage, and use that data source and the relative path to the file in the `OPENROWSET` function:
+The previous example uses full path to the file. As an alternative, you can create an external data source with the location that points to the root folder of the storage, and use that data source and the relative path to the file in the `OPENROWSET` function:
```sql create external data source covid
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
To deploy a managed disk with the shared disk feature enabled, use the new prope
1. Sign in to the Azure portal. 1. Search for and Select **Disks**.
-1. Select **+ Create** to create a new disk.
+1. Select **+ Create** to create a new managed disk.
1. Fill in the details and select an appropriate region, then select **Change size**. :::image type="content" source="media/disks-shared-enable/create-shared-disk-basics-pane.png" alt-text="Screenshot of the create a managed disk pane, change size highlighted.." lightbox="media/disks-shared-enable/create-shared-disk-basics-pane.png":::
To deploy a managed disk with the shared disk feature enabled, use the new prope
1. Sign in to the Azure portal. 1. Search for and Select **Disks**.
-1. Select **+ Create** to create a new disk.
+1. Select **+ Create** to create a new managed disk.
1. Fill in the details and select an appropriate region, then select **Change size**. :::image type="content" source="media/disks-shared-enable/create-shared-disk-basics-pane.png" alt-text="Screenshot of the create a managed disk pane, change size highlighted.." lightbox="media/disks-shared-enable/create-shared-disk-basics-pane.png":::
To deploy a managed disk with the shared disk feature enabled, change the `maxSh
1. Sign in to the Azure portal. 1. Search for and Select **Disks**.
-1. Select **+ Create** to create a new disk.
+1. Select **+ Create** to create a new managed disk.
1. Fill in the details, then select **Change size**. 1. Select ultra disk for the **Disk SKU**.
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md
For basic specs, storage capacities, and disk details, see [GPU Windows VM sizes
| -- |- | | Windows 10 - Build 2009, 2004, 1909 <br/><br/>Windows 10 Enterprise multi-session - Build 2009, 2004, 1909 <br/><br/>Windows Server 2016 (version 1607)<br/><br/>Windows Server 2019 (version 1909) | [21.Q2-1](https://download.microsoft.com/download/4/e/-Azure-NVv4-Driver-21Q2-1.exe) (.exe) |
-Previous supported driver version for Windows builds up to 1909 is [20.Q4](https://download.microsoft.com/download/f/1/6/f16e6275-a718-40cd-a366-9382739ebd39/AMD-Azure-NVv4-Driver-20Q4.exe) (.exe)
+Previous supported driver version for Windows builds up to 1909 is [20.Q4-1](https://download.microsoft.com/download/0/e/6/0e611412-093f-40b8-8bf9-794a1623b2be/AMD-Azure-NVv4-Driver-20Q4-1.exe) (.exe)
> [!NOTE] > If you use build 1903/1909 then you may need to update the following group policy for optimal performance. These changes are not needed for any other Windows builds.
virtual-machines Jboss Eap On Azure Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-migration.md
keytool -list -v -keystore <path to keystore>
- Inventory all Java Naming and Directory Interface (JNDI) resources. Some, such as Java Message Service (JMS) brokers, may require migration or reconfiguration.
-### InsideyYour application
+### Inside your application
Inspect the WEB-INF/jboss-web.xml and/or WEB-INF/web.xml files.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- February 28, 2022: Added E(d)sv5 VM storage configurations to [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- February 13, 2022: Corrected broken links to HANA hardware directory in the following documents: SAP Business One on Azure Virtual Machines, Available SKUs for HANA Large Instances, Certification of SAP HANA on Azure (Large Instances), Installation of SAP HANA on Azure virtual machines, SAP workload planning and deployment checklist, SAP HANA infrastructure configurations and operations on Azure, SAP HANA on Azure Large Instance migration to Azure Virtual Machines, Install and configure SAP HANA (Large Instances) ,on Azure, High availability of SAP HANA scale-out system on Red Hat Enterprise Linux, High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server, High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server, Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server, SAP workload on Azure virtual machine supported scenarios, What SAP software is supported for Azure deployments - February 13, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to add instructions about adding the SAP installation user as `Administrators Privilege user` to avoid SWPM permission errors - February 09, 2022: Add more information around 4K sectors usage of Db2 11.5 in [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md)
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
vm-linux Previously updated : 06/09/2021 Last updated : 02/28/2022
Given that low storage latency is critical for DBMS systems, even as DBMS, like
Some guiding principles in selecting your storage configuration for HANA can be listed like: - Decide on the type of storage based on [Azure Storage types for SAP workload](./planning-guide-storage.md) and [Select a disk type](../../disks-types.md)-- The overall VM I/O throughput and IOPS limits in mind when sizing or deciding for a VM. Overall VM storage throughput is documented in the article [Memory optimized virtual machine sizes](../../sizes-memory.md)
+- Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on a VM. Overall VM storage throughput is documented in the article [Memory optimized virtual machine sizes](../../sizes-memory.md).
- When deciding for the storage configuration, try to stay below the overall throughput of the VM with your **/hana/data** volume configuration. Writing savepoints, SAP HANA can be aggressive issuing I/Os. It is easily possible to push up to throughput limits of your **/hana/data** volume when writing a savepoint. If your disk(s) that build the **/hana/data** volume have a higher throughput than your VM allows, you could run into situations where throughput utilized by the savepoint writing is interfering with throughput demands of the redo log writes. A situation that can impact the application throughput - If you are considering using HANA System Replication, you need to use exactly the same type of Azure storage for **/hana/data** and **/hana/log** for all the VMs participating in the HANA System Replication configuration. For example, using Azure premium storage for **/hana/data** with one VM and Azure Ultra disk for **/hana/log** in another VM within the same HANA System replication configuration, is not supported
Check whether the storage throughput for the different suggested volumes meets t
Azure Write Accelerator only works with [Azure managed disks](https://azure.microsoft.com/services/managed-disks/). So at least the Azure premium storage disks forming the **/han).
-For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), you need to ANF for the **/hana/data** and **/hana/log** volume. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume. As a result, the configurations for the **/hana/data** volume on Azure premium storage could look like:
+For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../edv5-edsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv5-series), and [Esv5](../../ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series) families, you need to use ANF for the **/hana/data** and **/hana/log** volumes, or you need to use Azure Ultra disk storage instead of Azure premium storage for the **/hana/log** volume only, to be compliant with the SAP HANA certification KPIs. However, many customers use premium storage SSD disks for the **/hana/log** volume for non-production purposes, or even for smaller production workloads, because the write latency experienced with premium storage for the critical redo log writes meets the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
-| E20ds_v4 | 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| E20ds_v4| 160 GiB | 480 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
+| E20(d)s_v5| 160 GiB | 750 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E32ds_v4 | 256 GiB | 768 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500|
+| E32(d)s_v5 | 256 GiB | 865 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 3 x P15 | 375 MBps |510 MBps | 3,300 | 10,500 |
-| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| E64s_v3 | 432 GiB | 1,200 MB/s | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+
For the other volumes, including **/hana/log** on Ultra disk, the configuration could look like:

| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS | /hana/shared | /root volume | /usr/sap |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E20ds_v4 | 160 GiB | 480 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E20(d)s_v5 | 160 GiB | 750 MBps | 80 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E32ds_v4 | 256 GiB | 768 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
+| E32(d)s_v5 | 256 GiB | 865 MBps | 128 GB | 250 MBps | 1,800 | 1 x P15 | 1 x P6 | 1 x P6 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
-| E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E48(d)s_v5 | 384 GiB | 1,315 MBps | 192 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
| E64s_v3 | 432 GiB | 1,200 MBps | 220 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 256 GB | 250 MBps | 1,800 | 1 x P20 | 1 x P6 | 1 x P6 |
+ ## Azure Ultra disk storage configuration for SAP HANA
virtual-network Create Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-powershell.md
New-AzResourceGroup @rg
>[!NOTE] >Standard SKU public IP is recommended for production workloads. For more information about SKUs, see **[Public IP addresses](public-ip-addresses.md)**. >
->The following command works for Az.Network module version 4.5.0 or later. For more information about the Powershell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/).
+>The following command works for Az.Network module version 4.5.0 or later. For more information about the PowerShell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/).
In this section, you'll create a public IP with zones. Public IP addresses can be zone-redundant or zonal.
virtual-network Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/powershell-samples.md
# Azure PowerShell samples for virtual network
-The following table includes links to Azure Powershell scripts:
+The following table includes links to Azure PowerShell scripts:
| Script | Description | |-|-|
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
When there is an exact prefix match between a route with an explicit IP prefix a
3. AzureCloud regional tags (eg. AzureCloud.canadacentral, AzureCloud.eastasia) 4. The AzureCloud tag </br></br>
-To use this feature specify a Service Tag name for the address prefix parameter in route table commands. For example, in Powershell you can create a new route to direct traffic sent to an Azure Storage IP prefix to a virtual appliance by using: </br></br>
+To use this feature specify a Service Tag name for the address prefix parameter in route table commands. For example, in PowerShell you can create a new route to direct traffic sent to an Azure Storage IP prefix to a virtual appliance by using: </br></br>
```azurepowershell-interactive New-AzRouteConfig -Name "StorageRoute" -AddressPrefix "Storage" -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.100.4"
When BGP routes are present or a Service Endpoint is configured on your subnet,
> [!NOTE]
-> While in Public Preview, there are several limitations. The feature is not currently supported in the Azure Portal and is only available through Powershell and CLI. There is no support for use with containers.
+> While in Public Preview, there are several limitations. The feature is not currently supported in the Azure Portal and is only available through PowerShell and CLI. There is no support for use with containers.
## Next hop types across Azure tools
vpn-gateway Nva Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nva-work-remotely-support.md
Title: 'Working remotely: Network Virtual Appliance (NVA) considerations for rem
description: Learn about the things that you should take into consideration working with Network Virtual Appliances (NVAs) in Azure during the COVID-19 pandemic. -+ Last updated 09/08/2020-+
vpn-gateway Vpn Gateway Sample Vnet Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/scripts/vpn-gateway-sample-vnet-vnet-powershell.md
-# Use Powershell to configure a VNet-to-VNet VPN gateway connection
+# Use PowerShell to configure a VNet-to-VNet VPN gateway connection
This script connects two virtual networks by using the VNet-to-VNet connection type.
vpn-gateway Vpn Gateway Certificates Point To Site Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site-linux.md
# Generate and export certificates
-Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using the Linux CLI and strongSwan. If you are looking for different certificate instructions, see the [Powershell](vpn-gateway-certificates-point-to-site.md) or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) articles. For information about how to install strongSwan using the GUI instead of CLI, see the steps in the [Client configuration](point-to-site-vpn-client-configuration-azure-cert.md#install) article.
+Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using the Linux CLI and strongSwan. If you are looking for different certificate instructions, see the [PowerShell](vpn-gateway-certificates-point-to-site.md) or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) articles. For information about how to install strongSwan using the GUI instead of CLI, see the steps in the [Client configuration](point-to-site-vpn-client-configuration-azure-cert.md#install) article.
## Install strongSwan