Updates from: 08/08/2022 01:05:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 03/16/2022 Last updated : 08/07/2022
Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey
> [!NOTE] > Make sure you include the header row in your CSV file.
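For illustration, a complete upload file, including its header row, might look like the following sketch. The column names shown here are an assumption based on the format described above (UPN, serial number, secret key, time interval, manufacturer, and model); verify them against the current documentation before uploading.

```
upn,serial number,secret key,time interval,manufacturer,model
Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey
```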
-Once properly formatted as a CSV file, a Global Administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
+Once properly formatted as a CSV file, a Global Administrator can then sign in to the Azure portal, navigate to **Azure Active Directory** > **Security** > **Multifactor authentication** > **OATH tokens**, and upload the resulting CSV file.
Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists them for you to resolve. The field names in the downloaded CSV file are different from those in the uploaded version.
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 06/20/2022 Last updated : 08/07/2022
The following settings are available:
To configure account lockout settings, complete these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator.
-1. Go to **Azure Active Directory** > **Security** > **MFA** > **Account lockout**.
+1. Go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Account lockout**.
1. Enter the values for your environment, and then select **Save**. ![Screenshot that shows the account lockout settings in the Azure portal.](./media/howto-mfa-mfasettings/account-lockout-settings.png)
To block a user, complete the following steps.
[Watch a short video that describes this process.](https://www.youtube.com/watch?v=WdeE1On4S1o&feature=youtu.be)
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
+1. Browse to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Block/unblock users**.
1. Select **Add** to block a user. 1. Enter the user name for the blocked user in the format `username@domain.com`, and then provide a comment in the **Reason** box. 1. Select **OK** to block the user.
To block a user, complete the following steps.
To unblock a user, complete the following steps:
-1. Go to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
+1. Go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Block/unblock users**.
1. In the **Action** column next to the user, select **Unblock**. 1. Enter a comment in the **Reason for unblocking** box. 1. Select **OK** to unblock the user.
The following fraud alert configuration options are available:
To enable and configure fraud alerts, complete the following steps:
-1. Go to **Azure Active Directory** > **Security** > **MFA** > **Fraud alert**.
+1. Go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Fraud alert**.
1. Set **Allow users to submit fraud alerts** to **On**. 1. Configure the **Automatically block users who report fraud** or **Code to report fraud during initial greeting** setting as needed. 1. Select **Save**.
Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
> [!NOTE] > Be sure to include the header row in your CSV file.
-An administrator can sign in to the Azure portal, go to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the CSV file.
+An administrator can sign in to the Azure portal, go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **OATH tokens**, and upload the CSV file.
Depending on the size of the CSV file, it might take a few minutes to process. Select **Refresh** to get the status. If there are any errors in the file, you can download a CSV file that lists them. The field names in the downloaded CSV file are different from those in the uploaded version.
In the United States, if you haven't configured MFA caller ID, voice calls from
To configure your own caller ID number, complete the following steps:
-1. Go to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
+1. Go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Phone call settings**.
1. Set the **MFA caller ID number** to the number you want users to see on their phones. Only US-based numbers are allowed. 1. Select **Save**.
You can use the following sample scripts to create your own custom messages. The
To use your own custom messages, complete the following steps:
-1. Go to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
+1. Go to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Phone call settings**.
1. Select **Add greeting**. 1. Choose the **Type** of greeting, such as **Greeting (standard)** or **Authentication successful**. 1. Select the **Language**. See the previous section on [custom message language behavior](#custom-message-language-behavior).
To use your own custom messages, complete the following steps:
Settings for app passwords, trusted IPs, verification options, and remembering multi-factor authentication on trusted devices are available in the service settings. This is a legacy portal. It isn't part of the regular Azure AD portal.
-You can access service settings from the Azure portal by going to **Azure Active Directory** > **Security** > **MFA** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A window or tab opens with additional service settings options.
+You can access service settings from the Azure portal by going to **Azure Active Directory** > **Security** > **Multifactor authentication** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A window or tab opens with additional service settings options.
### Trusted IPs
After you enable the **remember multi-factor authentication** feature, users can
## Next steps
-To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
Title: Change approval settings for an access package in Azure AD entitlement ma
description: Learn how to change approval and requestor information settings for an access package in Azure Active Directory entitlement management. documentationCenter: ''-+ editor:
In the Approval section, you specify whether an approval is required when users
- Only one of the selected approvers or fallback approvers needs to approve a request for single-stage approval. - Only one of the selected approvers from each stage needs to approve a request for multi-stage approval for the request to progress to the next stage.-- If one of the selected approved in a stage denies a request before another approver in that stage approves it, or if no one approves, the request terminates and the user does not receive access.
+- If one of the selected approvers in a stage denies a request before another approver in that stage approves it, or if no one approves, the request terminates and the user doesn't receive access.
- The approver can be a specified user or member of a group, the requestor's Manager, Internal sponsor, or External sponsor, depending on whose access the policy governs. For a demonstration of how to add approvers to a request policy, watch the following video:
Follow these steps to specify the approval settings for requests for the access
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
1. Either select a policy to edit or add a new policy to the access package
- 1. Click **Policies** and then **Add policy** if you want to create a new policy.
- 1. Click the policy you wish to edit and then click **edit**.
+ 1. Select **Policies** and then **Add policy** if you want to create a new policy.
+ 1. Select the policy you wish to edit and then select **edit**.
1. Go to the **Request** tab.
Use the following steps to add approvers after selecting how many stages you req
1. Add the **First Approver**:
- If the policy is set to govern access for users in your directory, you can select **Manager as approver**. Or, add a specific user by clicking **Add approvers** after selecting Choose specific approvers from the dropdown menu.
+ If the policy is set to govern access for users in your directory, you can select **Manager as approver**. Or, add a specific user by selecting **Add approvers** after selecting **Choose specific approvers** from the dropdown menu.
![Access package - Requests - For users in directory - First Approver](./media/entitlement-management-access-package-approval-policy/approval-single-stage-first-approver-manager.png)
Use the following steps to add approvers after selecting how many stages you req
![Access package - Requests - For users out of directory - First Approver](./media/entitlement-management-access-package-approval-policy/out-directory-first-approver.png)
-1. If you selected **Manager** as the first approver, click **Add fallback** to select one or more users or groups in your directory to be a fallback approver. Fallback approvers receive the request if entitlement management can't find the manager for the user requesting access.
+1. If you selected **Manager** as the first approver, select **Add fallback** to select one or more users or groups in your directory to be a fallback approver. Fallback approvers receive the request if entitlement management can't find the manager for the user requesting access.
The manager is found by entitlement management using the **Manager** attribute. The attribute is in the user's profile in Azure AD. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/active-directory-users-profile-azure-portal.md).
-1. If you selected **Choose specific approvers**, click **Add approvers** to select one or more users or groups in your directory to be approvers.
+1. If you selected **Choose specific approvers**, select **Add approvers** to choose one or more users or groups in your directory to be approvers.
1. In the box under **Decision must be made in how many days?**, specify the number of days that an approver has to review a request for this access package.
Use the following steps to add approvers after selecting how many stages you req
### Multi-stage approval
-If you selected a multi-stage approval, you'll need to add an approver for each additional stage.
+If you selected a multi-stage approval, you'll need to add an approver for each extra stage.
1. Add the **Second Approver**:
- If the users are in your directory, add a specific user as the second approver by clicking **Add approvers** under Choose specific approvers.
+ If the users are in your directory, add a specific user as the second approver by selecting **Add approvers** under Choose specific approvers.
![Access package - Requests - For users in directory - Second Approver](./media/entitlement-management-access-package-approval-policy/in-directory-second-approver.png)
If you selected a multi-stage approval, you'll need to add an approver for each
1. Set the Require approver justification toggle to **Yes** or **No**.
- You also have the option to add an additional stage for a three-stage approval process. For example, you might want an employee's manager to be the first stage approver for an access package. But, one of the resources in the access package contains confidential information. In this case, you could designate the resource owner as a second approver and a security reviewer as the third approver. That allows a security team to have oversight into the process and the ability to, for example, reject a request based on risk criteria not known to the resource owner.
+ You also have the option to add an extra stage for a three-stage approval process. For example, you might want an employee's manager to be the first stage approver for an access package. But, one of the resources in the access package contains confidential information. In this case, you could designate the resource owner as a second approver and a security reviewer as the third approver. That allows a security team to have oversight into the process and the ability to, for example, reject a request based on risk criteria not known to the resource owner.
1. Add the **Third Approver**:
If you selected a multi-stage approval, you'll need to add an approver for each
You can specify alternate approvers, similar to specifying the primary approvers who can approve requests on each stage. Having alternate approvers will help ensure that the requests are approved or denied before they expire (timeout). You can list alternate approvers alongside the primary approver on each stage.
-By specifying alternate approvers on a stage, in the event that the primary approvers were unable to approve or deny the request, the pending request gets forwarded to the alternate approvers, per the forwarding schedule you specified during policy setup. They receive an email to approve or deny the pending request.
+By specifying alternate approvers on a stage, if the primary approvers were unable to approve or deny the request, the pending request gets forwarded to the alternate approvers, per the forwarding schedule you specified during policy setup. They receive an email to approve or deny the pending request.
After the request is forwarded to the alternate approvers, the primary approvers can still approve or deny the request. Alternate approvers use the same My Access site to approve or deny the pending request.
-You can list people or groups of people to be approvers and alternate approvers. Please ensure that you list different sets of people to be the first, second, and alternate approvers.
+You can list people or groups of people to be approvers and alternate approvers. Ensure that you list different sets of people to be the first, second, and alternate approvers.
For example, if you listed Alice and Bob as the first stage approver(s), list Carol and Dave as the alternate approvers. Use the following steps to add alternate approvers to an access package:
-1. Under the approver on a stage, click **Show advanced request settings**.
+1. Under the approver on a stage, select **Show advanced request settings**.
:::image type="content" source="media/entitlement-management-access-package-approval-policy/alternate-approvers-click-advanced-request.png" alt-text="Access package - Policy - Show advanced request settings"::: 1. Set **If no action taken, forward to alternate approvers?** toggle to **Yes**.
-1. Click **Add alternate approvers** and select the alternate approver(s) from the list.
+1. Select **Add alternate approvers** and select the alternate approver(s) from the list.
![Access package - Policy - Add Alternate Approvers](./media/entitlement-management-access-package-approval-policy/alternate-approvers-add.png)
- If you select Manager as approver for the First Approver, you will have an additional option, **Second level manager as alternate approver**, available to choose in the alternate approver field. If you select this option, you need to add a fallback approver to forward the request to in case the system can't find the second level manager.
+ If you select Manager as approver for the First Approver, you'll have an extra option, **Second level manager as alternate approver**, available to choose in the alternate approver field. If you select this option, you need to add a fallback approver to forward the request to in case the system can't find the second level manager.
1. In the **Forward to alternate approver(s) after how many days** box, enter the number of days the approvers have to approve or deny a request. If no approver approves or denies the request before the end of the request duration, the request expires (timeout), and the user will have to submit another request for the access package.
- Requests can only be forwarded to alternate approvers a day after the request duration reaches half-life, and the decision of the main approver(s) has to time-out after at least 4 days. If the request time-out is less or equal than 3, there is not enough time to forward the request to alternate approver(s). In this example, the duration of the request is 14 days. So, the request duration reaches half-life at day 7. So the request can't be forwarded earlier than day 8. Also, requests can't be forwarded on the last day of the request duration. So in the example, the latest the request can be forwarded is day 13.
+ Requests can only be forwarded to alternate approvers a day after the request duration reaches its half-life, and the decision of the main approver(s) has to time out after at least four days. If the request time-out is three days or less, there isn't enough time to forward the request to alternate approver(s). In this example, the duration of the request is 14 days, so the request duration reaches its half-life at day 7 and the request can't be forwarded earlier than day 8. Also, requests can't be forwarded on the last day of the request duration, so in this example the latest the request can be forwarded is day 13.
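As a rough sketch of the arithmetic described above (not an official formula), the forwarding window for a given request duration can be worked out like this:

```
# Rough sketch of the forwarding-window arithmetic described above.
# Assumes the 14-day request duration used in the example.
duration=14
half_life=$(( duration / 2 ))          # day 7
earliest_forward=$(( half_life + 1 ))  # forwarding can't happen before day 8
latest_forward=$(( duration - 1 ))     # and can't happen on the last day, so day 13 at the latest
echo "Forwarding window: day $earliest_forward through day $latest_forward"
```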
## Enable requests
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
![Access package - Policy- Enable policy setting](./media/entitlement-management-access-package-approval-policy/enable-requests.png)
-1. Click **Next**.
+1. Select **Next**.
## Collect additional requestor information for approval
-In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or multiple choice questions at the time of request. There is a limit of 20 questions per policy and a limit of 25 answers for multiple choice questions. The questions will then be shown to approvers to help them make a decision.
+In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or multiple choice questions at the time of request. There's a limit of 20 questions per policy and a limit of 25 answers for multiple choice questions. The questions will then be shown to approvers to help them make a decision.
-1. Go to the **Requestor information** tab and click the **Questions** sub tab.
+1. Go to the **Requestor information** tab and select the **Questions** sub tab.
1. Type in what you want to ask the requestor, also known as the display string, for the question in the **Question** box. ![Access package - Policy- Enable Requestor information setting](./media/entitlement-management-access-package-approval-policy/add-requestor-info-question.png)
-1. If the community of users who will need access to the access package don't all have a common preferred language, then you can improve the experience for users requesting access on myaccess.microsoft.com. To improve the experience, you can provide alternative display strings for different languages. For example, if a user's web browser is set to Spanish, and you have Spanish display strings configured, then those strings will be displayed to the requesting user. To configure localization for requests, click **add localization**.
- 1. Once in the **Add localizations for question** pane, select the **language code** for the language in which you are localizing the question.
+1. If the users who will need access to the access package don't all have a common preferred language, you can improve the experience for users requesting access on myaccess.microsoft.com by providing alternative display strings for different languages. For example, if a user's web browser is set to Spanish, and you have Spanish display strings configured, then those strings will be displayed to the requesting user. To configure localization for requests, select **add localization**.
+ 1. Once in the **Add localizations for question** pane, select the **language code** for the language in which you're localizing the question.
1. In the language you configured, type the question in the **Localized Text** box.
- 1. Once you have added all the localizations needed, click **Save**.
+ 1. Once you've added all the localizations needed, select **Save**.
![Access package - Policy- Configure localized text](./media/entitlement-management-access-package-approval-policy/add-localization-question.png)
In order to make sure users are getting access to the right access packages, you
![Access package - Policy- Select Edit and localize multiple choice answer format](./media/entitlement-management-access-package-approval-policy/answer-format-view-edit.png)
-1. If selecting multiple choice, click on the **Edit and localize** button to configure the answer options.
+1. If you select multiple choice, select the **Edit and localize** button to configure the answer options.
1. After selecting **Edit and localize**, the **View/edit question** pane will open. 1. Type in the response options you wish to give the requestor when answering the question in the **Answer values** boxes. 1. Type in as many responses as you need. 1. If you would like to add your own localization for the multiple choice options, select the **Optional language code** for the language in which you want to localize a specific option. 1. In the language you configured, type the option in the Localized text box.
- 1. Once you have added all of the localizations needed for each multiple choice option, click **Save**.
+ 1. Once you've added all of the localizations needed for each multiple choice option, select **Save**.
![Access package - Policy- Enter multiple choice options](./media/entitlement-management-access-package-approval-policy/answer-multiple-choice.png)
-1. To require requestors to answer this question when requesting access to an access package, click the check box under **Required**.
+1. To require requestors to answer this question when requesting access to an access package, select the check box under **Required**.
-1. Fill out the remaining tabs (e.g., Lifecycle) based on your needs.
+1. Fill out the remaining tabs (for example, Lifecycle) based on your needs.
After you have configured requestor information in your access package's policy, approvers can view the requestor's responses to the questions. For guidance on seeing requestor information, see [View requestor's answers to questions](entitlement-management-request-approve.md#view-requestors-answers-to-questions).
api-management Api Management Configuration Repository Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-configuration-repository-git.md
Title: Configure your API Management service using Git - Azure | Microsoft Docs
-description: Learn how to save and configure your API Management service configuration using Git.
+ Title: Configure your Azure API Management service using Git | Microsoft Docs
+description: Learn how to save and configure your API Management service configuration using a Git repository.
- -- Previously updated : 03/12/2019+ Last updated : 08/05/2022 # How to save and configure your API Management service configuration using Git
-Each API Management service instance maintains a configuration database that contains information about the configuration and metadata for the service instance. Changes can be made to the service instance by changing a setting in the Azure portal, using a PowerShell cmdlet, or making a REST API call. In addition to these methods, you can also manage your service instance configuration using Git, enabling service management scenarios such as:
+Each API Management service instance maintains a configuration database that contains information about the configuration and metadata for the service instance. Changes can be made to the service instance by changing a setting in the Azure portal, using Azure tools such as Azure PowerShell or the Azure CLI, or making a REST API call. In addition to these methods, you can manage your service instance configuration using Git, enabling scenarios such as:
-* Configuration versioning - download and store different versions of your service configuration
-* Bulk configuration changes - make changes to multiple parts of your service configuration in your local repository and integrate the changes back to the server with a single operation
-* Familiar Git toolchain and workflow - use the Git tooling and workflows that you are already familiar with
+* **Configuration versioning** - Download and store different versions of your service configuration
+* **Bulk configuration changes** - Make changes to multiple parts of your service configuration in your local repository and integrate the changes back to the server with a single operation
+* **Familiar Git toolchain and workflow** - Use the Git tooling and workflows that you are already familiar with
The following diagram shows an overview of the different ways to configure your API Management service instance.
-![Git configure][api-management-git-configure]
-When you make changes to your service using the Azure portal, PowerShell cmdlets, or the REST API, you are managing your service configuration database using the `https://{name}.management.azure-api.net` endpoint, as shown on the right side of the diagram. The left side of the diagram illustrates how you can manage your service configuration using Git and Git repository for your service located at `https://{name}.scm.azure-api.net`.
+When you make changes to your service using the Azure portal, Azure tools such as Azure PowerShell or the Azure CLI, or the REST API, you're managing your service configuration database using the `https://{name}.management.azure-api.net` endpoint, as shown on the right side of the diagram. The left side of the diagram illustrates how you can manage your service configuration using Git and Git repository for your service located at `https://{name}.scm.azure-api.net`.
The following steps provide an overview of managing your API Management service instance using Git. 1. Access Git configuration in your service
-2. Save your service configuration database to your Git repository
-3. Clone the Git repo to your local machine
-4. Pull the latest repo down to your local machine, and commit and push changes back to your repo
-5. Deploy the changes from your repo into your service configuration database
+1. Save your service configuration database to your Git repository
+1. Clone the Git repo to your local machine
+1. Pull the latest repo down to your local machine, and commit and push changes back to your repo
+1. Deploy the changes from your repo into your service configuration database
This article describes how to enable and use Git to manage your service configuration and provides a reference for the files and folders in the Git repository. > [!IMPORTANT]
-> This feature is designed to work with API Management services that have a small/medium configuration. Services with large number of configuration elements (APIs, Operations, Schemas etc.) may experience unexpected failures when processing Git commands. If you encounter such failures, please reduce the size of your service configuration and try again. Contact support if you need assistance.
+> This feature is designed to work with small to medium API Management service configurations, such as those with an exported size less than 10 MB, or with fewer than 10,000 entities. Services with a large number of entities (products, APIs, operations, schemas, and so on) may experience unexpected failures when processing Git commands. If you encounter such failures, please reduce the size of your service configuration and try again. Contact Azure Support if you need assistance.
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)] ++ ## Access Git configuration in your service
-To view and configure your Git configuration settings, you can click the **Deployment and infrastructure** menu and navigate to the **Repository** tab.
+ 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
-![Enable GIT][api-management-enable-git]
+ 1. In the left menu, under **Deployment and infrastructure**, select **Repository**.
+
-> [!IMPORTANT]
-> Any secrets that are not defined as Named Values will be stored in the repository and will remain in its history until you disable and re-enable Git access. Named Values provide a secure place to manage constant string values, including secrets, across all API configuration and policies, so you don't have to store them directly in your policy statements. For more information, see [How to use Named Values in Azure API Management policies](api-management-howto-properties.md).
->
+## Save the service configuration to the Git repository
+
+> [!CAUTION]
+> Any secrets that are not defined as named values will be stored in the repository and will remain in its history. Named values provide a secure place to manage constant string values, including secrets, across all API configuration and policies, so you don't have to store them directly in your policy statements. For more information, see [Use named values in Azure API Management policies](api-management-howto-properties.md).
>
-For information on enabling or disabling Git access using the REST API, see [Enable or disable Git access using the REST API](/rest/api/apimanagement/current-ga/tenant-access?EnableGit).
-## To save the service configuration to the Git repository
+Before cloning the repository, save the current state of the service configuration to the repository.
-The first step before cloning the repository is to save the current state of the service configuration to the repository. Click **Save to repository**.
+1. On the **Repository** page, select **Save to repository**.
-Make any desired changes on the confirmation screen and click **Save** to save.
+1. Make any desired changes on the confirmation screen, such as the name of the branch for saving the configuration, and select **Save**.
After a few moments the configuration is saved, and the configuration status of the repository is displayed, including the date and time of the last configuration change and the last synchronization between the service configuration and the repository. Once the configuration is saved to the repository, it can be cloned.
-For information on performing this operation using the REST API, see [Commit configuration snapshot using the REST API](/rest/api/apimanagement/current-ga/tenant-access?CommitSnapshot).
+For information on saving the service configuration using the REST API, see [Tenant configuration - Save](/rest/api/apimanagement/current-ga/tenant-configuration/save).
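As a non-authoritative sketch, you can also trigger the same save operation from the command line with `az rest`. The resource path, API version, and request body shown here are assumptions modeled on the linked Tenant Configuration reference; verify them there before use.

```
# Sketch only: save the current service configuration to the repository branch "master".
# The subscription ID, resource group, service name, and api-version are placeholders/assumptions.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/tenant/configuration/save?api-version=2021-08-01" \
  --body '{"properties": {"branch": "master"}}'
```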
-## To clone the repository to your local machine
+## Get access credentials
-To clone a repository, you need the URL to your repository, a user name, and a password. To get user name and other credentials, click on **Access credentials** near the top of the page.
+To clone a repository, in addition to the URL to your repository, you need a username and a password.
-To generate a password, first ensure that the **Expiry** is set to the desired expiration date and time, and then click **Generate**.
+1. On the **Repository** page, select **Access credentials** near the top of the page.
+
+1. Note the username provided on the **Access credentials** page.
+
+1. To generate a password, first ensure that the **Expiry** is set to the desired expiration date and time, and then select **Generate**.
> [!IMPORTANT] > Make a note of this password. Once you leave this page the password will not be displayed again. >
-The following examples use the Git Bash tool from [Git for Windows](https://www.git-scm.com/downloads) but you can use any Git tool that you are familiar with.
+## Clone the repository to your local machine
-Open your Git tool in the desired folder and run the following command to clone the Git repository to your local machine, using the command provided by the Azure portal.
+The following examples use the Git Bash tool from [Git for Windows](https://www.git-scm.com/downloads) but you can use any Git tool that you're familiar with.
+
+Open your Git tool in the desired folder and run the following command to clone the Git repository to your local machine:
``` git clone https://{name}.scm.azure-api.net/ ```
-Provide the user name and password when prompted.
+Provide the username and password when prompted.
If you receive any errors, try modifying your `git clone` command to include the user name and password, as shown in the following example.
Use the encoded password along with your user name and repository location to co
git clone https://username:url encoded password@{name}.scm.azure-api.net/ ```
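If your password contains characters that aren't valid in a URL, URL-encode it first. One possible way to do that (an illustrative sketch, not the only option) is a small Python one-liner run from the same shell:

```
# Illustrative only: print a URL-encoded form of the password to embed in the clone URL.
python -c "import urllib.parse; print(urllib.parse.quote('<your-password>', safe=''))"
```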
-Once the repository is cloned, you can view and work with it in your local file system. For more information, see [File and folder structure reference of local Git repository](#file-and-folder-structure-reference-of-local-git-repository).
-
-## To update your local repository with the most current service instance configuration
-
-If you make changes to your API Management service instance in the Azure portal or using the REST API, you must save these changes to the repository before you can update your local repository with the latest changes. To do this, click **Save to repository** on the **Repository** tab in the Azure portal, and then issue the following command in your local repository.
+After cloning completes, change the directory to your repo by running a command like the following.
```
-git pull
+cd {name}.scm.azure-api.net/
```
-Before running `git pull` ensure that you are in the folder for your local repository. If you have just completed the `git clone` command, then you must change the directory to your repo by running a command like the following.
+If you saved the configuration to a branch other than the default branch (`master`), check out the branch:
```
-cd {name}.scm.azure-api.net/
+git checkout <branch_name>
```
-## To push changes from your local repo to the server repo
+Once the repository is cloned, you can view and work with it in your local file system. For more information, see [File and folder structure reference of local Git repository](#file-and-folder-structure-reference-of-local-git-repository).
+
+## Update your local repository with the most current service instance configuration
+
+If you make changes to your API Management service instance in the Azure portal or using other Azure tools, you must save these changes to the repository before you can update your local repository with the latest changes.
+
+To save changes using the Azure portal, select **Save to repository** on the **Repository** tab for your API Management instance.
+
+Then, to update your local repository:
+
+1. Ensure that you are in the folder for your local repository. If you've just completed the `git clone` command, then you must change the directory to your repo by running a command like the following.
+
+ ```
+ cd {name}.scm.azure-api.net/
+ ```
+
+1. In the folder for your local repository, issue the following command.
+
+ ```
+ git pull
+ ```
+
+## Push changes from your local repo to the server repo
To push changes from your local repository to the server repository, you must commit your changes and then push them to the server repository. To commit your changes, open your Git command tool, switch to the directory of your local repository, and issue the following commands. ```
To push all of the commits to the server, run the following command.
git push ```
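As a minimal sketch of the commit-and-push sequence described in this section (the commit message is illustrative, not prescribed by the service):

```
# Stage all local changes, commit them, and push the commits to the service's Git repository.
git add --all
git commit -m "Describe the configuration changes made locally"
git push
```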
-## To deploy any service configuration changes to the API Management service instance
+## Deploy service configuration changes to the API Management service instance
Once your local changes are committed and pushed to the server repository, you can deploy them to your API Management service instance.
-For information on performing this operation using the REST API, see [Deploy Git changes to configuration database using the REST API](/rest/api/apimanagement/current-ga/tenant-configuration).
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+
+1. In the left menu, under **Deployment and infrastructure**, select **Repository** > **Deploy to API Management**.
+
+1. On the **Deploy repository configuration** page, enter the name of the branch containing the desired configuration changes, and optionally select **Remove subscriptions of deleted products**. Select **Save**.
+
+For information on performing this operation using the REST API, see [Tenant Configuration - Deploy](/rest/api/apimanagement/current-ga/tenant-configuration/deploy).
## File and folder structure reference of local Git repository
The files and folders in the local Git repository contain the configuration info
| Item | Description |
| -- | -- |
| root api-management folder |Contains top-level configuration for the service instance |
-| apis folder |Contains the configuration for the apis in the service instance |
+| apiReleases folder |Contains the configuration for the API releases in the service instance |
+| apis folder |Contains the configuration for the APIs in the service instance |
+| apiVersionSets folder |Contains the configuration for the API version sets in the service instance |
+| backends folder |Contains the configuration for the backend resources in the service instance |
| groups folder |Contains the configuration for the groups in the service instance |
| policies folder |Contains the policies in the service instance |
| portalStyles folder |Contains the configuration for the developer portal customizations in the service instance |
+| portalTemplates folder |Contains the configuration for the developer portal templates in the service instance |
| products folder |Contains the configuration for the products in the service instance |
| templates folder |Contains the configuration for the email templates in the service instance |
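To help visualize the layout, the following sketch shows how a cloned repository might be organized, based on the folders listed above and detailed in the following sections. Actual contents vary by service instance, and the root-level `configuration.json` shown here is an assumption.

```
{name}.scm.azure-api.net/
└── api-management/
    ├── configuration.json
    ├── apiReleases/
    ├── apis/
    ├── apiVersionSets/
    ├── backends/
    ├── groups/
    ├── policies/
    ├── portalStyles/
    ├── portalTemplates/
    ├── products/
    └── templates/
```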
These files can be created, deleted, edited, and managed on your local file syst
> > * [Users](/rest/api/apimanagement/current-ga/user) > * [Subscriptions](/rest/api/apimanagement/current-ga/subscription)
-> * Named Values
-> * Developer portal entities other than styles
+> * Named values
+> * Developer portal entities other than styles and templates
> ### Root api-management folder
The next four settings (`DelegationEnabled`, `DelegationUrl`, `DelegatedSubscrip
The final setting, `$ref-policy`, maps to the global policy statements file for the service instance.
+### apiReleases folder
+The `apiReleases` folder contains a folder for each API release deployed to a production API, and contains the following items.
+
+* `apiReleases\<api release Id>\configuration.json` - Configuration for the release, containing information about the release dates. This is the same information that would be returned if you were to call the [Get a specific release](/rest/api/apimanagement/current-ga/api-release/get) operation.
++ ### apis folder The `apis` folder contains a folder for each API in the service instance, which contains the following items.
-* `apis\<api name>\configuration.json` - this is the configuration for the API and contains information about the backend service URL and the operations. This is the same information that would be returned if you were to call [Get a specific API](/rest/api/apimanagement/current-ga/apis/get) with `export=true` in `application/json` format.
-* `apis\<api name>\api.description.html` - this is the description of the API and corresponds to the `description` property of the [API entity](/java/api/com.microsoft.azure.storage.table.entityproperty).
-* `apis\<api name>\operations\` - this folder contains `<operation name>.description.html` files that map to the operations in the API. Each file contains the description of a single operation in the API, which maps to the `description` property of the [operation entity](/rest/api/visualstudio/operations/list#operationproperties) in the REST API.
+* `apis\<api name>\configuration.json` - Configuration for the API, containing information about the backend service URL and the operations. This is the same information that would be returned if you were to call the [Get a specific API](/rest/api/apimanagement/current-ga/apis/get) operation.
+* `apis\<api name>\api.description.html` - Description of the API, corresponding to the `description` property of the API entity in the REST API.
+* `apis\<api name>\operations\` - Folder containing `<operation name>.description.html` files that map to the operations in the API. Each file contains the description of a single operation in the API, which maps to the `description` property of the [operation entity](/rest/api/apimanagement/current-ga/operation) in the REST API.
+
+### apiVersionSets folder
+The `apiVersionSets` folder contains a folder for each API version set created for an API, and contains the following items.
+
+* `apiVersionSets\<api version set Id>\configuration.json` - Configuration for the version set. This is the same information that would be returned if you were to call the [Get a specific version set](/rest/api/apimanagement/current-ga/api-version-set/get) operation.
### groups folder The `groups` folder contains a folder for each group defined in the service instance.
-* `groups\<group name>\configuration.json` - this is the configuration for the group. This is the same information that would be returned if you were to call the [Get a specific group](/rest/api/apimanagement/current-ga/group/get) operation.
-* `groups\<group name>\description.html` - this is the description of the group and corresponds to the `description` property of the [group entity](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-group-entity).
+* `groups\<group name>\configuration.json` - Configuration for the group. This is the same information that would be returned if you were to call the [Get a specific group](/rest/api/apimanagement/current-ga/group/get) operation.
+* `groups\<group name>\description.html` - Description of the group, corresponding to the `description` property of the [group entity](/rest/api/apimanagement/current-ga/group/).
### policies folder The `policies` folder contains the policy statements for your service instance.
-* `policies\global.xml` - contains policies defined at global scope for your service instance.
-* `policies\apis\<api name>\` - if you have any policies defined at API scope, they are contained in this folder.
-* `policies\apis\<api name>\<operation name>\` folder - if you have any policies defined at operation scope, they are contained in this folder in `<operation name>.xml` files that map to the policy statements for each operation.
-* `policies\products\` - if you have any policies defined at product scope, they are contained in this folder, which contains `<product name>.xml` files that map to the policy statements for each product.
+* `policies\global.xml` - Policies defined at global scope for your service instance.
+* `policies\apis\<api name>\` - If you have policies defined at API scope, they're contained in this folder.
+* `policies\apis\<api name>\<operation name>\` folder - If you have policies defined at operation scope, they're contained in this folder in `<operation name>.xml` files that map to the policy statements for each operation.
+* `policies\products\` - If you have policies defined at product scope, they're contained in this folder, which contains `<product name>.xml` files that map to the policy statements for each product.
### portalStyles folder
-The `portalStyles` folder contains configuration and style sheets for developer portal customizations for the service instance.
+The `portalStyles` folder contains configuration and style sheets for customizing the deprecated developer portal of the service instance.
-* `portalStyles\configuration.json` - contains the names of the style sheets used by the developer portal
-* `portalStyles\<style name>.css` - each `<style name>.css` file contains styles for the developer portal (`Preview.css` and `Production.css` by default).
+* `portalStyles\configuration.json` - Contains the names of the style sheets used by the developer portal
+* `portalStyles\<style name>.css` - Each `<style name>.css` file contains styles for the developer portal (`Preview.css` and `Production.css` by default).
+
+### portalTemplates folder
+The `portalTemplates` folder contains templates for customizing the deprecated developer portal of the service instance.
+
+* `portalTemplates\<template name>\configuration.json` - Configuration of the template.
+* `portalTemplates\<template name>\<page name>.html` - Original and modified HTML pages of the template.
### products folder The `products` folder contains a folder for each product defined in the service instance.
-* `products\<product name>\configuration.json` - this is the configuration for the product. This is the same information that would be returned if you were to call the [Get a specific product](/rest/api/apimanagement/current-ga/product/get) operation.
-* `products\<product name>\product.description.html` - this is the description of the product and corresponds to the `description` property of the [product entity](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-product-entity) in the REST API.
+* `products\<product name>\configuration.json` - Configuration for the product. This is the same information that would be returned if you were to call the [Get a specific product](/rest/api/apimanagement/current-ga/product/get) operation.
+* `products\<product name>\product.description.html` - Description of the product, corresponding to the `description` property of the [product entity](/rest/api/apimanagement/current-ga/product/) in the REST API.
### templates The `templates` folder contains configuration for the [email templates](api-management-howto-configure-notifications.md) of the service instance.
-* `<template name>\configuration.json` - this is the configuration for the email template.
-* `<template name>\body.html` - this is the body of the email template.
+* `<template name>\configuration.json` - Configuration for the email template.
+* `<template name>\body.html` - Body of the email template.
## Next steps For information on other ways to manage your service instance, see:
-* Manage your service instance using the following PowerShell cmdlets
- * [Service deployment PowerShell cmdlet reference](/powershell/module/wds)
- * [Service management PowerShell cmdlet reference](/powershell/azure/servicemanagement/overview)
-* Manage your service instance using the REST API
- * [API Management REST API reference](/rest/api/apimanagement/)
--
-[api-management-enable-git]: ./media/api-management-configuration-repository-git/api-management-enable-git.png
-[api-management-git-enabled]: ./media/api-management-configuration-repository-git/api-management-git-enabled.png
-[api-management-save-configuration]: ./media/api-management-configuration-repository-git/api-management-save-configuration.png
-[api-management-save-configuration-confirm]: ./media/api-management-configuration-repository-git/api-management-save-configuration-confirm.png
-[api-management-configuration-status]: ./media/api-management-configuration-repository-git/api-management-configuration-status.png
-[api-management-configuration-git-clone]: ./media/api-management-configuration-repository-git/api-management-configuration-git-clone.png
-[api-management-generate-password]: ./media/api-management-configuration-repository-git/api-management-generate-password.png
-[api-management-password]: ./media/api-management-configuration-repository-git/api-management-password.png
-[api-management-git-configure]: ./media/api-management-configuration-repository-git/api-management-git-configure.png
-[api-management-configuration-deploy]: ./media/api-management-configuration-repository-git/api-management-configuration-deploy.png
-[api-management-identity-settings]: ./media/api-management-configuration-repository-git/api-management-identity-settings.png
-[api-management-delegation-settings]: ./media/api-management-configuration-repository-git/api-management-delegation-settings.png
-[api-management-git-icon-enable]: ./media/api-management-configuration-repository-git/api-management-git-icon-enable.png
+* [Azure PowerShell cmdlet reference](/powershell/module/az.apimanagement)
+* [Azure CLI reference](/cli/azure/apim)
+* [API Management REST API reference](/rest/api/apimanagement/)
+* [Azure SDK releases](https://azure.github.io/azure-sdk/)
++
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes | <sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/>
-<sup>2</sup> Including related functionality e.g. users, groups, issues, applications and email templates and notifications.<br/>
-<sup>3</sup> In the Developer tier self-hosted gateways are limited to single gateway node.<br/>
-<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key.<br/>
-<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
+<sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/>
+<sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>
+<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. <br/>
+<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
+
+ Title: API gateway overview | Azure API Management
+description: Learn more about the features of the API gateway component of Azure API Management. API Management offers both Azure-managed and self-hosted gateways.
+
+documentationcenter: ''
++++ Last updated : 08/04/2022+++
+# API gateway in Azure API Management
+
+This article provides information about the roles and features of the API Management *gateway* component and compares the gateways you can deploy.
+
+Related information:
+
+* For an overview of API Management scenarios, components, and concepts, see [What is Azure API Management?](api-management-key-concepts.md)
+
+* For more information about the API Management service tiers and features, see [Feature-based comparison of the Azure API Management tiers](api-management-features.md).
+
+## Role of the gateway
+
+The API Management *gateway* (also called *data plane* or *runtime*) is the service component that's responsible for proxying API requests, applying policies, and collecting telemetry.
++
+## Managed and self-hosted
+
+API Management offers both managed and self-hosted gateways:
+
+* **Managed** - The managed gateway is the default gateway component that is deployed in Azure for every API Management instance in every service tier. With the managed gateway, all API traffic flows through Azure regardless of where backends implementing the APIs are hosted.
+
+ > [!NOTE]
+ > Because of differences in the underlying service architecture, the Consumption tier gateway currently lacks some capabilities of the dedicated gateway. For details, see the section [Feature comparison: Managed versus self-hosted gateways](#feature-comparison-managed-versus-self-hosted-gateways).
+ >
+
+
+* **Self-hosted** - The [self-hosted gateway](self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway. It's useful for hybrid and multi-cloud scenarios where there is a requirement to run the gateways off Azure in the same environments where API backends are hosted. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+
+ * The self-hosted gateway is [packaged](self-hosted-gateway-overview.md#packaging) as a Linux-based Docker container and is commonly deployed to Kubernetes, including to [Azure Kubernetes Service](how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md) and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+
+ * Each self-hosted gateway is associated with a **Gateway** resource in a cloud-based API Management instance from which it receives configuration updates and communicates status.
++
+## Feature comparison: Managed versus self-hosted gateways
+
+The following table compares features available in the managed gateway versus those in the self-hosted gateway. Differences are also shown between the managed gateway for dedicated service tiers (Developer, Basic, Standard, Premium) and for the Consumption tier.
+
+> [!NOTE]
+> * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways.
+> * See also self-hosted gateway [limitations](self-hosted-gateway-overview.md#limitations).
++
+### Infrastructure
+
+| Feature support | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| | -- | -- | - |
+| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ |
+| [Built-in cache](api-management-howto-cache.md) | ✔️ | ❌ | ❌ |
+| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ | ✔️ |
+| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ✔️<sup>1</sup> |
+| [Private endpoints](private-endpoint.md) | ✔️ | ✔️ | ❌ |
+| [Availability zones](zone-redundancy.md) | Premium | ❌ | ✔️<sup>1</sup> |
+| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ✔️<sup>1</sup> |
+| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>2</sup> |
+| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | ✔️ | ✔️ | ❌ |
+| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ |
+
+<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/>
+<sup>2</sup> Requires configuration of local CA certificates.<br/>
+
+### Backend APIs
+
+| API | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| | -- | -- | - |
+| [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ |
+| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
+| WADL specification | ✔️ | ✔️ | ✔️ |
+| [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ |
+| [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ |
+| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ |
+| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ |
+| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |
+| [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ |
+| [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ |
+| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ |
+
+<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier.
+
+### Policies
+
+Managed and self-hosted gateways support all available [policies](api-management-howto-policies.md) in policy definitions with the following exceptions.
+
+| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| | -- | -- | - |
+| [Dapr integration](api-management-dapr-policies.md) | ❌ | ❌ | ✔️ |
+| [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) | ✔️ | ❌ | ❌ |
+| [Quota and rate limit](api-management-access-restriction-policies.md) | ✔️ | ✔️<sup>1</sup> | ✔️<sup>2</sup> |
+| [Set GraphQL resolver](graphql-policies.md#set-graphql-resolver) | ✔️ | ❌ | ❌ |
+
+<sup>1</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
+<sup>2</sup> By default, rate limit counts in self-hosted gateways are per-gateway, per-node.
+
+### Monitoring
+
+For details about monitoring options, see [Observability in Azure API Management](observability.md).
+
+| Feature | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| | -- | -- | - |
+| [API analytics](howto-use-analytics.md) | ✔️ | ❌ | ❌ |
+| [Application Insights](api-management-howto-app-insights.md) | ✔️ | ✔️ | ✔️ |
+| [Logging through Event Hubs](api-management-howto-log-event-hubs.md) | ✔️ | ✔️ | ✔️ |
+| [Metrics in Azure Monitor](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) | ✔️ | ❌ | ✔️ |
+| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ✔️ |
+| [Request logs in Azure Monitor](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ❌ | ❌<sup>1</sup> |
+| [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ✔️ |
+| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ✔️ | ✔️ |
+
+<sup>1</sup> The self-hosted gateway currently doesn't send resource logs (diagnostic logs) to Azure Monitor. Optionally [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.
+
+### Authentication and authorization
+
+| Feature | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| | -- | -- | - |
+| [Authorizations](authorizations-overview.md) | ✔️ | ✔️ | ❌ |
++
+## Gateway throughput and scaling
+
+> [!IMPORTANT]
+> Throughput is affected by the number and rate of concurrent client connections, the kind and number of configured policies, payload sizes, backend API performance, and other factors. Self-hosted gateway throughput is also dependent on the compute capacity (CPU and memory) of the host where it runs. Perform gateway load testing using anticipated production conditions to determine expected throughput accurately.
+
+### Managed gateway
+
+For estimated maximum gateway throughput in the API Management service tiers, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
+
+> [!IMPORTANT]
+> Throughput figures are presented for information only and must not be relied upon for capacity and budget planning. See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/) for details.
+
+* **Dedicated service tiers**
+ * Scale gateway capacity by adding and removing scale [units](upgrade-and-scale.md), or upgrade the service tier. (Scaling not available in the Developer tier.)
+ * In the Standard and Premium tiers, optionally configure [Azure Monitor autoscale](api-management-howto-autoscale.md).
+ * In the Premium tier, optionally add and distribute gateway capacity across multiple [regions](api-management-howto-deploy-multi-region.md).
+
+* **Consumption tier**
+ * API Management instances in the Consumption tier scale automatically based on the traffic.
+
+### Self-hosted gateway
+* In environments such as [Kubernetes](how-to-self-hosted-gateway-on-kubernetes-in-production.md), add multiple gateway replicas to handle expected usage.
+* Optionally [configure autoscaling](how-to-self-hosted-gateway-on-kubernetes-in-production.md#autoscaling) to meet traffic demands.
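+
+For example, on Kubernetes you can add replicas manually or let the cluster scale the gateway for you. A minimal sketch, assuming the gateway runs as a Deployment named `apim-gateway` in the `apim` namespace (both names are placeholders):
+
+```cli
+# Scale the self-hosted gateway to three replicas.
+kubectl scale deployment apim-gateway --namespace apim --replicas=3
+
+# Or autoscale between 2 and 10 replicas based on CPU usage.
+kubectl autoscale deployment apim-gateway --namespace apim --cpu-percent=70 --min=2 --max=10
+```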
+
+## Next steps
+
+- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about using the [capacity metric](api-management-capacity.md) for scaling decisions
+- Learn about [observability capabilities](observability.md) in API Management
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md
You can find more details on the developer portal in the [Azure API Management d
## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)-- Import and publish an Azure API Management instance. For more information, see [Import and publish](import-and-publish.md)
+- Import and publish an API. For more information, see [Import and publish](import-and-publish.md)
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
To let the visitors of your portal test the APIs through the built-in interactiv
Learn more about the developer portal: - [Azure API Management developer portal overview](api-management-howto-developer-portal.md)-- [Migrate to the new developer portal](developer-portal-deprecated-migration.md) from the deprecated legacy portal.
+- [Migrate to the new developer portal](developer-portal-deprecated-migration.md) from the deprecated legacy portal.
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
Azure API Management is made up of an API *gateway*, a *management plane*, and a
All requests from client applications first reach the API gateway, which then forwards them to respective backend services. The API gateway acts as a facade to the backend services, allowing API providers to abstract API implementations and evolve backend architecture without impacting API consumers. The gateway enables consistent configuration of routing, security, throttling, caching, and observability.
-The API gateway:
-
- * Accepts API calls and routes them to configured backends
- * Verifies API keys, JWT tokens, certificates, and other credentials
- * Enforces usage quotas and rate limits
- * Optionally transforms requests and responses as specified in [policy statements](#policies)
- * If configured, caches responses to improve response latency and minimize the load on backend services
- * Emits logs, metrics, and traces for monitoring, reporting, and troubleshooting
With the [self-hosted gateway](self-hosted-gateway-overview.md), customers can deploy the API gateway to the same environments where they host their APIs, to optimize API traffic and ensure compliance with local regulations and guidelines. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
-The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+
+More information:
+* [API gateway in Azure API Management](api-management-gateways-overview.md)
### Management plane
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Previously updated : 03/18/2022 Last updated : 07/11/2022 # Self-hosted gateway overview
+The self-hosted gateway is an optional, containerized version of the default managed gateway included in every API Management service. It's useful for scenarios such as placing gateways in the same environments where you host your APIs. Use the self-hosted gateway to improve API traffic flow and address API security and compliance requirements.
+ This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multi-cloud API management, presents its high-level architecture, and highlights its capabilities.
+For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways).
+ ## Hybrid and multi-cloud API management The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
-With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they're federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs. Placing the gateways close to the APIs allows customers to optimize API traffic flows and address security and compliance requirements.
+With the self-hosted gateway, customers have the flexibility to deploy a containerized version of the API Management gateway component to the same environments where they host their APIs. All self-hosted gateways are managed from the API Management service they're federated with, thus providing customers with the visibility and unified management experience across all internal and external APIs.
Each API Management service is composed of the following key components:
Deploying self-hosted gateways into the same environments where the backend API
:::image type="content" source="media/self-hosted-gateway-overview/with-gateways.png" alt-text="API traffic flow with self-hosted gateways":::
-## Packaging and features
-
-The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
-### Known limitations
-
-The following functionality found in the managed gateways is **not available** in the self-hosted gateways:
+## Packaging
-- Sending resource logs (diagnostic logs) to Azure Monitor. However, you can [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.-- Upstream (backend side) TLS version and cipher management-- Validation of server and client certificates using [CA root certificates](api-management-howto-ca-certificates.md) uploaded to API Management service. You can configure [custom certificate authorities](api-management-howto-ca-certificates.md#create-custom-ca-for-self-hosted-gateway) for your self-hosted gateways and [client certificate validation](api-management-access-restriction-policies.md#validate-client-certificate) policies to enforce them.-- Integration with [Service Fabric](../service-fabric/service-fabric-api-management-overview.md)-- TLS session resumption-- Client certificate renegotiation. This means that for [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) to work, API consumers must present their certificates as part of the initial TLS handshake. To ensure this behavior, enable the Negotiate Client Certificate setting when configuring a self-hosted gateway custom hostname.-- Built-in cache. Learn about using an [external Redis-compatible cache](api-management-howto-cache-external.md) in self-hosted gateways.
+The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
### Container images
We provide a variety of container images for self-hosted gateways to meet your n
You can find a full list of available tags [here](https://mcr.microsoft.com/product/azure-api-management/gateway/tags).
-#### Use of tags in our official deployment options
+### Use of tags in our official deployment options
Our deployment options in the Azure portal use the `v2` tag that allows customers to use the most recent version of the self-hosted gateway v2 container image with all feature updates and patches.
When installing with our Helm chart, image tagging is optimized for you. The Hel
Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
-#### Risk of using rolling tags
+### Risk of using rolling tags
Rolling tags are tags that are potentially updated when a new version of the container image is released. This allows container users to receive updates to the container image without having to update their deployments.
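+
+For example, you can keep the rolling `v2` tag or pin your deployment to a specific version tag from the list linked above (the specific tag below is a placeholder):
+
+```cli
+# Rolling tag: always resolves to the latest v2 release when pulled.
+docker pull mcr.microsoft.com/azure-api-management/gateway:v2
+
+# Pinned tag: stays on one release until you explicitly change it.
+docker pull mcr.microsoft.com/azure-api-management/gateway:<specific-version-tag>
+```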
When connectivity is restored, each self-hosted gateway affected by the outage w
## Security
+### Limitations
+
+The following functionality found in the managed gateways is **not available** in the self-hosted gateways:
+
+- TLS session resumption.
+- Client certificate renegotiation. To use [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md), API consumers must present their certificates as part of the initial TLS handshake. To ensure this behavior, enable the Negotiate Client Certificate setting when configuring a self-hosted gateway custom hostname (domain name).
+ ### Transport Layer Security (TLS) > [!IMPORTANT]
As of v2.1.1 and above, you can manage the ciphers that are being used through t
- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) - [Self-hosted gateway configuration settings](self-hosted-gateway-settings-reference.md) - Learn about [observability capabilities](observability.md) in API Management
+- Learn about [Dapr integration with the self-hosted gateway](https://github.com/dapr/samples/tree/master/dapr-apim-integration)
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
To learn more about Form Recognizer features and development options, visit our
* [Windows](https://curl.haxx.se/windows/) * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows)
-* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application. To check your PowerShell version, type `Get-Host | Select-Object Version`.
+* **PowerShell version 7.*+** (or a similar command-line application):
+ * [Windows](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true)
+ * [macOS](/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.2&preserve-view=true)
+ * [Linux](/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.2&preserve-view=true)
+
+* To check your PowerShell version, type the following:
+ * Windows: `Get-Host | Select-Object Version`
+ * macOS or Linux: `$PSVersionTable`
* A Form Recognizer (single-service) or Cognitive Services (multi-service) resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
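+
+With the prerequisites above in place, a request to the v3 REST API looks roughly like the following sketch. The model ID, api-version, and document URL are placeholders; substitute your resource's key and endpoint, and the API version from the current reference documentation.
+
+```cli
+# Sketch: start an analyze operation with curl (all values are placeholders).
+curl -i -X POST "https://<your-endpoint>/formrecognizer/documentModels/prebuilt-read:analyze?api-version=<api-version>" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -d '{"urlSource": "<publicly-accessible-document-url>"}'
+
+# The response includes an Operation-Location header; poll that URL to retrieve the results.
+curl -i "<operation-location-url>" -H "Ocp-Apim-Subscription-Key: <your-key>"
+```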
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
+
+ Title: Geo-replication in Azure App Configuration (Preview)
+description: Details of the geo-replication feature in Azure App Configuration.
+++++ Last updated : 08/01/2022++
+# Geo-replication overview (Preview)
+
+For application developers and IT engineers, a common goal is to build and run resilient applications. Resiliency is defined as the ability of your application to react to failure and still remain functional. To achieve resilience in the face of regional failures in the cloud, the first step is to build in redundancy to avoid a single point of failure. This redundancy can be achieved with geo-replication.
+
+The App Configuration geo-replication feature allows you to replicate your configuration store, at will, to the regions of your choice. Each new **replica** is in a different region and creates a new endpoint for your applications to send requests to. The original endpoint of your configuration store is called the **origin**. The origin can't be removed, but otherwise it behaves like any replica.
+
+Changing or updating your key-values can be done in any replica. These changes will be synchronized with all other replicas following an eventual consistency model.
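+
+A minimal sketch of creating and listing replicas with the Azure CLI. The `az appconfig replica` commands may require a recent CLI version while this feature is in preview; the store name, replica name, and region are placeholders.
+
+```cli
+# Add a replica of an existing App Configuration store in another region.
+az appconfig replica create \
+  --store-name <your-store-name> \
+  --name <replica-name> \
+  --location <region>
+
+# List the store's replicas, including their endpoints.
+az appconfig replica list --store-name <your-store-name> --output table
+```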
+
+Replicating your configuration store adds the following benefits:
+- **Added resiliency for Azure outages:** A regional outage affects only the replicas in that region; replicas located in unaffected regions remain accessible and continue to synchronize. Once the outage has been mitigated, all affected replicas are synced back to the most recent state. Note that geo-replication only offers automatic failover through App Configuration's configuration providers. Otherwise, you can build your own failover mechanism in your application's configuration to switch between replica endpoints and mitigate the impact of an Azure outage.
+- **Redistribution of Request Limits:** You can choose in code which replica endpoint your application uses, letting you distribute your request load and avoid exhausting request limits. For example, if your applications run in multiple regions but send requests to only one region, you may begin exhausting App Configuration request limits. You can redistribute this load by creating replicas in the regions where your applications run. Each replica has isolated request limits, equal in size to the request limits of the origin; exhausting the request limits of one replica has no impact on another.
+- **Regional Compartmentalization:** Accessing multiple regions can improve latency between your application and configuration store, leading to faster request responses and better performance if an application sends requests to its closest replica. Specifying replica access also allows you to limit data storage and flow between different regions based on your preferences.
+
+<!-- Learn more about enabling geo-replication in our **how-to (add link to how to doc here)**. -->
+
+## Sample use case
+
+A developer team is building a system consisting of multiple applications and currently has one Azure App Configuration store in the West US region. Usage of their system is rapidly growing, and they're looking to scale and meet their customer needs in Sweden Central, West US, North Europe, and East Asia. All of their applications currently use the West US configuration store, creating a single point of failure. If there were a regional outage in West US, and they had no other failover mechanisms or default behaviors, their system would be unavailable to customers. In addition, all applications worldwide are currently restricted by the request limit of a single configuration store. As the team scales to more regions, this limit will be unsustainable.
+
+This team would benefit from geo-replication. They can create a replica of their configuration store in each region where their application will be running. Then their applications can send requests to a replica in the same region, rather than all applications sending requests to West US. This will provide two benefits: improved request latency and better load distribution. Having a well distributed request load will help avoid exhaustion of request quota. Additionally, having multiple replicas enables the team to configure their applications to fail over in the case of a regional outage. For example, the team can configure applications running in Sweden Central to pull configuration from that region, but fallback to North Europe if Sweden Central is experiencing an outage. Even if App Configuration is unavailable in a given region, the team's system is unaffected.
+
+## Considerations
+
+- Geo-replication isn't available in the free tier.
+- Each replica has limits, as outlined in the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/). These limits are isolated per replica.
+- Azure App Configuration also supports Azure availability zones to create a resilient and highly available store within an Azure Region. Availability zone support is automatically included for a replica if the replica's region has availability zone support. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the availability and performance of a configuration store.
+- Currently, you can only authenticate with replica endpoints with [Azure AD](/azure-app-configuration/overview-managed-identity).
+<!--
+To add once these links become available:
+ - Request handling for replicas will vary by configuration provider, for further information reference [.NET Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/) and [Java Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/).
+ - -->
+
+## Cost and billing
+
+Each replica created will add extra charges. Reference the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/) for details. As an example, if your origin is a standard tier configuration store and you have five replicas, you're charged the rate of six standard tier configuration stores for your system, but each replica's isolated quota and requests are included in this charge.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to enable Geo replication](./quickstart-feature-flag-aspnet-core.md)
+
+> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md)
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
After a few minutes, the command completes and returns JSON-formatted informatio
### Existing clusters with service principal AKS Clusters with service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+1. Get the configured Log Analytics workspace resource ID (if `grep` isn't available in your shell, see the `--query` alternative after these steps):
-1. Disable monitoring with the following command:
+```cli
+az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+```
+
+2. Disable monitoring with the following command:
```cli az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> ```
-2. Upgrade cluster to system managed identity with the following command:
+3. Upgrade cluster to system managed identity with the following command:
```cli az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity --workspace-resource-id <workspace-resource-id> ```
-3. Enable Monitoring addon with managed identity authentication with the following command:
+4. Enable the monitoring addon with the managed identity authentication option, using the Log Analytics workspace resource ID obtained in the first step:
```cli az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> ```
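+
+If `grep` isn't available in your shell (for example, in a Windows command prompt), you can retrieve the workspace resource ID with a JMESPath query instead. A sketch, assuming the monitoring addon's key appears as `omsagent` in the cluster's addon profiles:
+
+```cli
+az aks show -g <resource-group-name> -n <cluster-name> \
+  --query addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID \
+  --output tsv
+```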
-### Existing clusters with system assigned identity
-AKS Clusters with system assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+### Existing clusters with system or user assigned identity
+AKS clusters with system or user assigned identity must first disable monitoring and then re-enable it with managed identity authentication. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system assigned identity. For clusters with user assigned identity, only Azure public cloud is supported.
-1. Disable monitoring with the following command:
+1. Get the configured Log Analytics workspace resource ID:
```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
+
+2. Disable monitoring with the following command:
+
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
```
-2. Enable Monitoring addon with Managed Identity Auth Option
+3. Enable the monitoring addon with the managed identity authentication option, using the Log Analytics workspace resource ID obtained in the first step:
```cli az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 05/16/2022 Last updated : 08/05/2022 # Resource functions for Bicep
Built-in policy definitions are tenant level resources. For an example of deploy
`keyVaultName.getSecret(secretName)`
-Returns a secret from an Azure Key Vault. The `getSecret` function can only be called on a `Microsoft.KeyVault/vaults` resource. Use this function to pass a secret to a secure string parameter of a Bicep module. The function can be used only with a parameter that has the `@secure()` decorator.
+Returns a secret from an Azure Key Vault. Use this function to pass a secret to a secure string parameter of a Bicep module.
+
+You can only use the `getSecret` function from within the `params` section of a module. You can only use it with a `Microsoft.KeyVault/vaults` resource.
+
+```bicep
+module sql './sql.bicep' = {
+ name: 'deploySQL'
+ params: {
+ adminPassword: keyVault.getSecret('vmAdminPassword')
+ }
+}
+```
+
+You'll get an error if you attempt to use this function in any other part of the Bicep file. You'll also get an error if you use this function with string interpolation, even when used in the params section.
+
+The function can be used only with a module parameter that has the `@secure()` decorator.
The key vault must have `enabledForTemplateDeployment` set to `true`. The user deploying the Bicep file must have access to the secret. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
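+
+A minimal sketch of enabling that setting on an existing key vault with the Azure CLI (the vault name is a placeholder):
+
+```cli
+# Allow Resource Manager, including Bicep deployments, to retrieve secrets from this vault.
+az keyvault update --name <key-vault-name> --enabled-for-template-deployment true
+```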
azure-resource-manager Deploy Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-vscode.md
description: Deploy Bicep files from Visual Studio Code.
Previously updated : 06/30/2022 Last updated : 08/04/2022 # Deploy Bicep files from Visual Studio Code You can use [Visual Studio Code with the Bicep extension](./visual-studio-code.md#deploy-bicep-file) to deploy a Bicep file. You can deploy to any scope. This article shows deploying to a resource group.
-From an opened Bicep file in VS Code, there are two ways you can find the command:
+From an opened Bicep file in VS Code, there are three ways you can find the command:
+
+- Right-click the Bicep file name from the Explorer pane, not the one under **OPEN EDITORS**:
+
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-from-explorer.png" alt-text="Screenshot of Deploying Bicep File in the Context menu from the explore pane.":::
- Right-click anywhere inside a Bicep file, and then select **Deploy Bicep File**.+ - Select **Command Palette** from the **View** menu, and then select **Bicep: Deploy Bicep File**.
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-from-command-palette.png" alt-text="Screenshot of Deploy Bicep File in the Context menu.":::
+ After you select the command, you follow the wizard to enter the values: -- Select or create a resource group.-- Select a parameter file or select **None** to enter the parameter values. After you enter the parameter values, you have the options to create a parameter file or overwrite the existing parameter file.
+1. Sign in to Azure and select subscription.
+
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-select-subscription.png" alt-text="Screenshot of Select subscription.":::
+
+1. Select or create a resource group.
+
+1. Select a parameter file or select **None** to enter the parameter values.
+
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-select-parameter-file.png" alt-text="Screenshot of Select parameter file.":::
+
+1. If you choose **None**, enter the parameter values.
+
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-enter-parameter-values.png" alt-text="Screenshot of Enter parameter values.":::
+
+ After you enter the values, you have the option to create a parameters file from values used in this deployment:
+
+ :::image type="content" source="./media/deploy-vscode/bicep-deploy-create-parameter-file.png" alt-text="Screenshot of Create parameter file.":::
+
+ If you select **Yes**, a parameter file with the file name **&lt;Bicep-file-name>.parameters.json** is created in the same folder.
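+
+    The generated parameters file can also be reused outside VS Code, for example in an Azure CLI deployment. A sketch, where the file names follow the convention above but are otherwise placeholders:
+
+    ```cli
+    # Redeploy the same Bicep file with the parameters file that VS Code created.
+    az deployment group create \
+      --resource-group <resource-group-name> \
+      --template-file main.bicep \
+      --parameters @main.parameters.json
+    ```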
For more information about VS Code commands, see [Visual Studio Code](./visual-studio-code.md).
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
The visualizer shows the resources defined in the Bicep file with the resource d
## Deploy the Bicep file 1. Right-click the Bicep file inside the VSCode, and then select **Deploy Bicep file**.+
+ :::image type="content" source="./media/quickstart-create-bicep-use-visual-studio-code/vscode-bicep-deploy.png" alt-text="Screenshot of Deploy Bicep file.":::
+ 1. From the **Select Resource Group** listbox on the top, select **Create new Resource Group**. 1. Enter **exampleRG** as the resource group name, and then press **[ENTER]**.
+1. Select a location for the resource group, and then press **[ENTER]**.
+1. From **Select a parameter file**, select **None**.
+
+ :::image type="content" source="./media/quickstart-create-bicep-use-visual-studio-code/vscode-bicep-select-parameter-file.png" alt-text="Screenshot of Select parameter file.":::
+ 1. Enter a unique storage account name, and then press **[ENTER]**. If you get an error message indicating the storage account is already taken, the storage name you provided is in use. Provide a name that is more likely to be unique.
+1. From **Create parameters file from values used in this deployment?**, select **No**.
+
+It takes a few moments to create the resources. For more information, see [Deploy Bicep files with Visual Studio Code](./deploy-vscode.md).
You can also deploy the Bicep file by using Azure CLI or Azure PowerShell:
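+
+For example, with the Azure CLI (a sketch; the Bicep file name and parameter name are assumptions based on this quickstart):
+
+```cli
+# Deploy the Bicep file to the exampleRG resource group with an inline parameter.
+az deployment group create \
+  --resource-group exampleRG \
+  --template-file main.bicep \
+  --parameters storageAccountName=<unique-storage-account-name>
+```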
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md
Title: Resource providers and resource types description: Describes the resource providers that support Azure Resource Manager. It describes their schemas, available API versions, and the regions that can host the resources. Previously updated : 11/15/2021 Last updated : 08/05/2022 # Azure resource providers and types
-When deploying resources, you frequently need to retrieve information about the resource providers and types. For example, if you want to store keys and secrets, you work with the Microsoft.KeyVault resource provider. This resource provider offers a resource type called vaults for creating the key vault.
+An Azure resource provider is a collection of REST operations that provide functionality for an Azure service. For example, the Key Vault service consists of a resource provider named **Microsoft.KeyVault**. The resource provider defines [REST operations](/rest/api/keyvault/) for working with vaults, secrets, keys, and certificates.
-The name of a resource type is in the format: **{resource-provider}/{resource-type}**. The resource type for a key vault is **Microsoft.KeyVault/vaults**.
+The resource provider defines the Azure resources that are available for you to deploy to your account. The name of a resource type is in the format: **{resource-provider}/{resource-type}**. The resource type for a key vault is **Microsoft.KeyVault/vaults**.
In this article, you learn how to:
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Title: 'Quickstart: Deploy Bastion with default settings' description: Learn how to deploy Bastion with default settings from the Azure portal.- Last updated 08/02/2022
-#Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
+ # Quickstart: Deploy Azure Bastion with default settings
When you deploy from VM settings, Bastion is automatically configured with defau
When you create Azure Bastion using default settings, the settings are configured for you. You can't modify or specify additional values for a default deployment. After deployment completes, you can always go to the bastion host **Configuration** page to select additional settings and features. For example, the default SKU is the Basic SKU. You can later upgrade to the Standard SKU to support more features. For more information, see [About configuration settings](configuration-settings.md). 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
-1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Create Azure Bastion using defaults**.
+1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
+1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. When the **Bastion** page opens, it checks to see if you have enough available address space to create the AzureBastionSubnet. If you don't, you'll see settings to allow you to add more address space to your VNet to meet this requirement.
+1. On the **Bastion** page, you can view some of the values that will be used when creating the bastion host for your virtual network. Select **Create Azure Bastion using defaults** to deploy bastion using default settings.
:::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png"::: 1. Bastion begins deploying. This can take around 10 minutes to complete.
- :::image type="content" source="./media/quickstart-host-portal/creating-bastion.png" alt-text="Screenshot of Bastion resources being created." lightbox="./media/quickstart-host-portal/creating-bastion.png":::
- ## <a name="connect"></a>Connect to a VM When the Bastion deployment is complete, the screen changes to the **Connect** page.
-1. Type the username and password for your virtual machine. Then, select **Connect**.
+1. Type your authentication credentials. Then, select **Connect**.
:::image type="content" source="./media/quickstart-host-portal/connect-vm.png" alt-text="Screenshot shows the Connect using Azure Bastion dialog." lightbox="./media/quickstart-host-portal/connect-vm.png":::+ 1. The connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service. Select **Allow** when asked for permissions to the clipboard. This lets you use the remote clipboard arrows on the left of the screen. * When you connect, the desktop of the VM may look different than the example screenshot.
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following steps if you want to copy and paste to your VM.
+In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
> [!div class="nextstepaction"]
-> [Copy and paste to a Windows VM](bastion-vm-copy-paste.md)
+> [VM connections](vm-about.md)
+> [Azure Bastion configuration settings and features](configuration-settings.md).
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md
Previously updated : 11/09/2021 Last updated : 08/02/2022
Use the following table to determine access needs for your LUIS application.
These custom roles only apply to authoring (Language Understanding Authoring) and not prediction resources (Language Understanding).
+> [!NOTE]
+> * "Owner" and "Contributor" roles take priority over the custom LUIS roles.
+> * Azure Active Directory (Azure AD) is only used with custom LUIS roles.
++ ### Cognitive Services LUIS reader A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them.
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
+
+ Title: Language service role-based access control (RBAC)
+
+description: Use this article to learn about access controls for Azure Cognitive Service for Language
++++++ Last updated : 08/02/2022++++
+# Language role-based access control
+
+Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information.
+
+## Enable Azure Active Directory authentication
+
+To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
+
+## Add role assignment to Language Authoring resource
+
+Azure RBAC can be assigned to a Language Authoring resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**.
+2. Select **Cognitive Services**, and navigate to your specific Language Authoring resource.
+
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
+
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add**, then select **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
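+
+You can also assign these roles with the Azure CLI. A sketch, where the assignee, resource group, and resource name are placeholders:
+
+```cli
+# Grant a user the Cognitive Services Language Reader role on a Language resource.
+az role assignment create \
+  --assignee <user-object-id-or-sign-in-name> \
+  --role "Cognitive Services Language Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<language-resource-name>"
+```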
+
+## Language role types
+
+Use the following table to determine access needs for your Language projects.
+
+These custom roles only apply to Language authoring resources.
+
+> [!NOTE]
+> * All prebuilt capabilities are accessible to all roles.
+> * The "Owner" and "Contributor" roles take priority over custom language roles.
+> * Azure Active Directory (Azure AD) is only used for custom Language roles.
+
+### Cognitive Services Language reader
+
+A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
++
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * Read
+ * Test
+ :::column-end:::
+ :::column span="":::
+ * All GET APIs under:
+ * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ * Only the `TriggerExportProjectJob` POST operation under:
+ * [Language Authoring CLU export API](/rest/api/language/conversational-analysis-authoring/export?tabs=HTTP)
+ * [Language Authoring Text Analysis export API](/rest/api/language/text-analysis-authoring/export?tabs=HTTP)
+ * Only Export POST operation under:
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export)
+ * All the Batch testing web APIs
+    * [Language Runtime CLU APIs](/rest/api/language/conversation-analysis-runtime)
+    * [Language Runtime Text Analysis APIs](/rest/api/language/text-analysis-runtime)
+ :::column-end:::
+
+### Cognitive Services Language writer
+
+A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploying this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
+
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Cognitive Services Language Reader.
+ * Ability to:
+ * Train
+ * Write
+ :::column-end:::
+ :::column span="":::
+ * All APIs under Language reader
+ * All POST, PUT and PATCH APIs under:
+ * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ Except for
+ * Delete deployment
+ * Delete trained model
+ * Delete project
+ * Deploy model
+ :::column-end:::
+
+### Cognitive Services Language owner
+
+These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments.
+
+ :::column span="":::
+ **Functionality**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Cognitive Services Language Writer
+ * Deploy
+ * Delete
+ :::column-end:::
+ :::column span="":::
+ * All APIs available under:
+ * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
+ * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+
+ :::column-end:::
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Title: Azure Communication Services Rooms overview description: Learn about the Azure Communication Services Rooms.--++ - Previously updated : 11/24/2021+ Last updated : 07/24/2022 - # Rooms overview -
-Azure Communication services provide a concept of rooms for developers who are building structured conversations. Rooms support only voice and video calling in Private Preview.
-Here are main scenarios where Rooms are useful:
-- **Service-managed communication.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can create and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services. -- **Ability to have Invite-only experiences.** Rooms allow your services to control which users can join the rooms. Board members can discuss sensitive topics confidentially.
+Azure Communication Services provides a concept of a room for developers who are building structured conversations such as virtual appointments or virtual events. Rooms currently allow voice and video calling.
+Here are the main scenarios where rooms are useful:
-## When to use Rooms
-Not every solution needs a Room. Some scenarios, like building basic one-to-one or one-to-few ad-hoc interactions, can be created using the Calling or Chat SDKs without the need for rooms.
+- **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.
+- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. Developers can use the "Join Policy" for a room to let either all users or only a subset of users with assigned Communication Services identities join a room call.
+- **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.
-Use rooms when you need:
-- Control who can access a calling session on server side-- Need coordinates that can be expired at a specific moment of time-- Have a call to which only invited users can join
+## When to use rooms
- :::image type="content" source="../media/rooms/decision-tree.png" alt-text="Diagram showing decision tree to select a Room.":::
+Use rooms when you need any of the following capabilities:
+- Control over which users can join room calls.
+- Scheduling coordinates that are enabled and expire at a specified time and date.
+- Structured communication through roles and permissions for users.
-Note while you can use either group CallID or rooms if you just need an ephemeral coordinate. We recommend using rooms API for all new solutions you are building.
+ :::image type="content" source="../media/rooms/room-decision-tree.png" alt-text="Diagram showing decision tree to select a Room.":::
-| Capability | 1:N Call | 1:N Call <br>with ephemeral ID</br> | Room call |
+| Capability | 1:N Call | 1:N Call <br>with ephemeral ID</br> | Room call |
| | :: | :: | :: |
-| Interactive participants | 350 | 350 | 350 |
-| Ephemeral ID to distribute to participants | No | Yes (Group ID) | Yes (Room ID) |
-| Invitee only participation | No | No | Yes <br>(Mandatory in private preview)</br> |
-| API to create. remove, update, delete the call | No | No | Rooms API |
-| Set validity period for a call | No | No | Yes <br> Up to six months </br> |
-
+| Interactive participants | 350 | 350 | 350 |
+| Ephemeral ID to distribute to participants | ❌ | ✔️ <br>(Group ID)</br> | ✔️ <br>(Room ID)</br> |
+| Invitee only participation | ❌ | ❌ | ✔️ |
+| All users in communication service resource to join a call | ❌ | ✔️ | ✔️ |
+| Set validity period for a call | ❌ | ❌ | ✔️ <br> Up to six months </br> |
+| Set user roles and permissions for a call | ❌ | ❌ | ✔️ |
+| API to create, remove, update, delete the call | ❌ | ❌ | ✔️ <br> Rooms API <br> |
-## Managing the Rooms
-Rooms are managed via Rooms SDK or Rooms API. In the initial release, the rooms allows only have voice and video calls within the Room.
-
-Use the **Rooms API/SDK** in your server application for Room:
-- Creation -- Modification-- Deletion-- Defining and updating the set of participants-- Setting and modifying the Room validity (up to six months).-
-Use the **JS Calling SDKs** (with other Calling SDKs and chat support on the roadmap) to join the room.
+## Managing rooms and joining room calls
+ **Rooms API/SDK** is used to accomplish actions such as creating a room, adding participants, and setting up the schedule. The Calling SDK is used to initiate the call within a room from the client side. Most actions available in one-to-one or group calls in the **Calling SDKs** are also available in room calls. The full list of capabilities offered in the Calling SDK is in the [Calling SDK Overview](../voice-video-calling/calling-sdk-features.md#detailed-capabilities).
+
+| Capability | Calling SDK | Rooms API/SDK |
+|-|--|--|
+| Join a room call with voice and video | ✔️ | ❌ |
+| List participants that joined the rooms call | ✔️ | ❌ |
+| Create room | ❌ | ✔️ |
+| List all participants that are invited to the room | ❌ | ✔️ |
+| Add or remove a VoIP participant | ❌ | ✔️ |
+| Assign roles to room participants | ❌ | ✔️ |
The picture below illustrates the concept of managing and joining the rooms.
- :::image type="content" source="../media/rooms/rooms-management.png" alt-text="Diagram showing Rooms Management.":::
- ## Runtime operations
-
- Most actions available in regular one-to-one or group calls in JS Calling SDK are also available in rooms. You cannot promote the existing one-to-one or group call to a room call or Invite an ad hoc user to join a Room (you need to add the user using the Rooms API)
-Full list of capabilities that are available in JS SDK are listed in the [Calling SDK Overview](../voice-video-calling/calling-sdk-features.md#detailed-capabilities).
-
-| Capability | JS Calling SDK | Rooms API/SDK |
-|-| :--: | :: |
-| Join a Room call with voice and video | ✔️ | ❌ |
-| List participants that joined the Rooms call | ✔️ | ❌ |
-| List all participants that are invited to the Room call | ❌ | ✔️ |
-| Add or remove a VoIP participant | ❌ | ✔️ |
-| Add or remove a new PSTN participant | ❌ | ❌ |
+### Rooms API/SDKs
+
+Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in your server application for `room` operations:
+- Create
+- Modify
+- Delete
+- Set and update the list of participants
+- Set and modify the Room validity
+- Control who gets to join a room, using `roomJoinPolicy`. Details below.
+- Assign roles and permissions to users. Details below.
+
+### Calling SDKs
+
+Use the [Calling SDKs](../voice-video-calling/calling-sdk-features.md) to join the room call. Room calls can be joined using the Web, iOS or Android Calling SDKs. You can find quick start samples for joining room calls [here](../../quickstarts/rooms/join-rooms-call.md).
+
+## Control access to room calls
+
+Rooms can be set to operate in two levels of control over who is allowed to join a room call.
+
+| Room type | `roomJoinPolicy` value | Who can participate in the room? |
+|-| | |
+| **Private Room** | `inviteOnly` | User must be explicitly added to the room roster, to be able to join a room |
+| **Open Room** | `communicationServiceUsers` | All valid users created under company's Azure Communication Service resource are allowed to join this room |
+
+## Predefined participant roles and permissions
+
+Room participants can be assigned one of the following roles: **Presenter**, **Attendee** and **Consumer**. By default, a user is assigned an **Attendee** role, if no other role is assigned.
+
+The table below provides detailed capabilities mapped to the roles. At a high level, the **Presenter** role has full control, **Attendee** capabilities are limited to audio and video, and a **Consumer** can only receive audio, video, and screen sharing.
+
+| Capability | Role: Presenter | Role: Attendee | Role: Consumer
+|| :--: | :--: | :--: |
+| **Mid call controls** | | |
+| - Turn video on/off | ✔️ | ✔️ | ❌ |
+| - Mute/Unmute mic | ✔️ | ✔️ | ❌ |
+| - Switch between cameras | ✔️ | ✔️ | ❌ |
+| - Active speaker | ✔️ | ✔️ | ✔️ |
+| - Choose speaker for calls | ✔️ | ✔️ | ✔️ |
+| - Choose mic for calls | ✔️ | ✔️ | ❌ |
+| - Show participants state (idle, connecting, connected, On-hold, Disconnecting, Disconnected etc.) | ✔️ | ✔️ | ✔️ |
+| - Show call state (Early media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected | ✔️ | ✔️ | ✔️ |
+| - Show if a participant is muted | ✔️ | ✔️ | ✔️ |
+| - Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ |
+| **Screen sharing** | | |
+| - Share screen | ✔️ * | ❌ | ❌ |
+| - Share an application | ✔️ * | ❌ | ❌ |
+| - Share a browser tab | ✔️ * | ❌ | ❌ |
+| - Participants can view shared screen | ✔️ | ✔️ | ✔️ |
+| **Roster management** | | |
+| - Remove a participant | ✔️ | ❌ | ❌ |
+| **Device management** | | |
+| - Ask for permission to use audio and/or video | ✔️ | ✔️ | ❌ |
+| - Get camera list | ✔️ | ✔️ | ❌ |
+| - Set camera | ✔️ | ✔️ | ❌ |
+| - Get selected camera | ✔️ | ✔️ | ❌ |
+| - Get mic list | ✔️ * | ✔️ * | ❌ |
+| - Set mic | ✔️ * | ✔️ * | ❌ |
+| - Get selected mic | ✔️ * | ✔️ * | ❌ |
+| - Get speakers list | ✔️ * | ✔️ * | ✔️ * |
+| - Set speaker | ✔️ * | ✔️ * | ✔️ * |
+| - Get selected speaker | ✔️ | ✔️ | ✔️ |
+| **Video rendering** | | | |
+| - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️<br>(Remote streams only) |
+| - Set/Update video scaling mode | ✔️ | ✔️ | ✔️<br>(Remote streams only) |
+| - Render remote video stream | ✔️ | ✔️ | ✔️ |
+
+*) Only available in the web Calling SDK; not available in the iOS and Android Calling SDKs.
+
+## Event handling
+
+[Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) published via [Event Grid](../../../event-grid/event-schema-communication-services.md) are annotated with room call information.
+
+- **CallStarted** is published when a room call starts.
+- **CallEnded** is published when a room call ends.
+- **CallParticipantAdded** is published when a new participant joins a room call.
+- **CallParticipantRemoved** is published when a participant drops from a room call.
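+
+As a sketch, you can subscribe a webhook handler to just these events with the Azure CLI. The subscription, resource group, resource, and endpoint values are placeholders, and the event type strings assume the standard `Microsoft.Communication.*` names used in the linked event schema.
+
+```bash
+# Subscribe a webhook endpoint to the call lifecycle events listed above.
+# All bracketed values and the endpoint URL are placeholders.
+az eventgrid event-subscription create \
+  --name room-call-events \
+  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource>" \
+  --endpoint "https://<your-handler>.example.com/api/events" \
+  --included-event-types \
+      Microsoft.Communication.CallStarted \
+      Microsoft.Communication.CallEnded \
+      Microsoft.Communication.CallParticipantAdded \
+      Microsoft.Communication.CallParticipantRemoved
+```
+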
## Next steps:-- Use the [QuickStart to create, manage and join a room.](../../quickstarts/rooms/get-started-rooms.md)-- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md)---
+- Use the [QuickStart to create, manage and join a room](../../quickstarts/rooms/get-started-rooms.md).
+- Learn how to [join a room call](../../quickstarts/rooms/join-rooms-call.md).
+- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
> [!IMPORTANT] > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name, confirm the role name configured on your account).
-1. By default the **Servers** plan is set to **On**. This is necessary to extend Defender for server's coverage to your AWS EC2. Ensure you've fulfilled the [network requirements for Azure Arc](/azure-arc/servers/network-requirements.md).
+1. By default, the **Servers** plan is set to **On**. This setting is necessary to extend Defender for Servers coverage to your AWS EC2 instances. Ensure you've fulfilled the [network requirements for Azure Arc](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud).
- (Optional) Select **Configure**, to edit the configuration as required.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To have full visibility to Microsoft Defender for Servers security content, ensu
- **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that are not connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines. -- Ensure you've fulfilled the [network requirements for Azure Arc](/azure-arc/servers/network-requirements.md).
+- Ensure you've fulfilled the [network requirements for Azure Arc](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud).
- Additional extensions should be enabled on the Arc-connected machines. - Microsoft Defender for Endpoint
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
Defender for IoT can detect the following protocols when identifying assets and
|**Omron** | FINS | |**Oracle** | TDS<br> TNS | |**Rockwell Automation** | ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above |
-|**Schneider Electric** | Modbus/TCP<br> Modbus TCPΓÇôSchneider Unity Extensions<br> OASYS (Schneider Electric Telvant) |
+|**Schneider Electric** | Modbus/TCP<br> Modbus TCP–Schneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA |
|**Schneider Electric / Invensys** | Foxboro Evo<br> Foxboro I/A<br> Trident<br> TriGP<br> TriStation | |**Schneider Electric / Modicon** | Modbus RTU | |**Schneider Electric / Wonderware** | Wonderware Suitelink |
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Title: Integrations with partner services - Microsoft Defender for IoT description: Learn about supported integrations with Microsoft Defender for IoT. Previously updated : 06/21/2022 Last updated : 08/02/2022
-# Integrations with partner services
+# Integrations with Microsoft and partner services
Integrate Microsoft Defender for IoT with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
-## Supported integrations
-
-The following table lists available integrations for Microsoft Defender for IoT, as well as links for specific configuration information.
--
-|Partner service |Description | Learn more |
-||||
-| **ArcSight** | Forward Defender for IoT alerts to ArcSight. | [Integrate ArcSight with Microsoft Defender for IoT](integrations/arcsight.md) |
-|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
-|**CyberArk** | Send CyberArk PSM syslog data on remote sessions and verification failures to Defender for IoT for data correlation. | [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) |
-|**Forescout** | Automate actions in Forescout based on activity detected by Defender for IoT, and correlate Defender for IoT data with other *Forescout eyeExtended* modules that oversee monitoring, incident management, and device control. | [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md) |
-|**Fortinet** | Send Defender for IoT data to Fortinet services for: <br><br>- Enhanced network visibility in FortiSIEM<br>- Extra abilities in FortiGate to stop anomalous behavior | [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md) |
-| **LogRhythm** | Forward Defender for IoT alerts to LogRhythm. | [Integrate LogRhythm with Microsoft Defender for IoT](integrations/logrhythm.md) |
-| **RSA NetWitness** | Forward Defender for IoT alerts to RSA NetWitness | [Integrate RSA NetWitness with Microsoft Defender for IoT](integrations/netwitness.md) <br>[CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364) |
-|**Palo Alto** |Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
-|**QRadar** |Forward Defender for IoT alerts to IBM QRadar. | [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md) |
-|**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
-| **Splunk** | Send Defender for IoT alerts to Splunk | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
-|**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
-|**Skybox** | Import vulnerability occurrence data discovered by Defender for IoT in your Skybox platform. | [Skybox documentation](https://docs.skyboxsecurity.com) <br><br> [Skybox integration page](https://www.skyboxsecurity.com/products/integrations) |
+## Aruba ClearPass
++
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+
+## Axonius
++
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Axonius | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
+
+## CyberArk PSM
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**CyberArk Privileged Session Manager (PSM)** | Send CyberArk PSM syslog data on remote sessions and verification failures to Defender for IoT for data correlation. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) |
+
+## Forescout
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Forescout** | Automate actions in Forescout based on activity detected by Defender for IoT, and correlate Defender for IoT data with other *Forescout eyeExtended* modules that oversee monitoring, incident management, and device control. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md) |
+
+## Fortinet
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Fortinet FortiSIEM and FortiGate** | Send Defender for IoT data to Fortinet services for: <br><br>- Enhanced network visibility in FortiSIEM<br>- Extra abilities in FortiGate to stop anomalous behavior | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md) |
+
+## IBM QRadar
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+| **IBM QRadar** | Send Defender for IoT alerts to IBM QRadar | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Microsoft Defender for IoT alerts to a 3rd party SIEM](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot-blog/stream-microsoft-defender-for-iot-alerts-to-a-3rd-party-siem/ba-p/3581242) |
+|**IBM QRadar** | Forward Defender for IoT alerts to IBM QRadar. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md) |
+
+## LogRhythm
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**LogRhythm** | Forward Defender for IoT alerts to LogRhythm. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate LogRhythm with Microsoft Defender for IoT](integrations/logrhythm.md) |
+
+## Micro Focus ArcSight
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Micro Focus ArcSight** | Forward Defender for IoT alerts to ArcSight. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ArcSight with Microsoft Defender for IoT](integrations/arcsight.md) |
+
+## Microsoft Defender for Endpoint
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Microsoft Defender for Endpoint** | Integrates Defender for IoT data in Defender for Endpoint's device inventory, alerts, recommendations, and vulnerabilities. Displays device data about Defender for Endpoint endpoints in the Defender for IoT **Device inventory** page on the Azure portal. | - Enterprise IoT networks and sensors | Microsoft | [Onboard with Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) |
+
+## Microsoft Sentinel
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Defender for IoT data connector** | Displays Defender for IoT data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) |
++
+## Palo Alto
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Palo Alto** | Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
++
+## RSA NetWitness
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**RSA NetWitness** | Forward Defender for IoT alerts to RSA NetWitness | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate RSA NetWitness with Microsoft Defender for IoT](integrations/netwitness.md) <br><br>[Defender for IoT - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364) |
+
+## ServiceNow
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+| **Vulnerability Response Integration with Microsoft Azure Defender for IoT** | View Defender for IoT device vulnerabilities in ServiceNow. | - OT networks<br>- Locally managed sensors and on-premises management consoles | ServiceNow | [ServiceNow store](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e/1.0.1?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Ddefender%2520for%2520iot&sl=sh) |
+| **Service Graph Connector Integration with Microsoft Azure Defender for IoT** | View Defender for IoT device detections, sensors, and network connections in ServiceNow. | - OT networks<br>- Locally managed sensors and on-premises management consoles | ServiceNow | [ServiceNow store](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229/1.0.0?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Ddefender%2520for%2520iot&sl=sh) |
+| **Microsoft Defender for IoT** (Legacy) | View Defender for IoT device detections and alerts in ServiceNow. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [ServiceNow store](https://store.servicenow.com/sn_appstore_store.do#!/store/application/6dca6137dbba13406f7deeb5ca961906/3.1.5?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Ddefender%2520for%2520iot&sl=sh)<br><br>[Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
+
+## Skybox
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+|**Skybox** | Import vulnerability occurrence data discovered by Defender for IoT in your Skybox platform. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Skybox | [Skybox documentation](https://docs.skyboxsecurity.com) <br><br> [Skybox integration page](https://www.skyboxsecurity.com/products/integrations) |
++
+## Splunk
+
+|Name |Description |Support scope |Supported by |Learn more |
+||||||
+| **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Microsoft Defender for IoT alerts to a 3rd party SIEM](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot-blog/stream-microsoft-defender-for-iot-alerts-to-a-3rd-party-siem/ba-p/3581242) |
+|**Splunk** | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
+ ## Next steps
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Title: What's new archive for Microsoft Defender for IoT for organizations description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago. Previously updated : 03/03/2022 Last updated : 08/07/2022 # What's new archive for Microsoft Defender for IoT for organizations
For more recent updates, see [What's new in Microsoft Defender for IoT?](release
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +
+## November 2021
+
+**Sensor software version**: 10.5.3
+
+The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT.
+
+- The on-premises management console has a new [ServiceNow Integration API - "/external/v3/integration/" (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview).
+
+- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
+
+- As part of our automated maintenance, archived alerts that are over 90 days old will now be automatically deleted.
+
+- Many enhancements have been made to the exporting of alert metadata based on customer feedback.
+
+## October 2021
+
+**Sensor software version**: 10.5.2
+
+The following feature enhancements are available with version 10.5.2 of Microsoft Defender for IoT.
+
+- [PLC operating mode detections (Public Preview)](#plc-operating-mode-detections-public-preview)
+
+- [PCAP API](#pcap-api)
+
+- [On-premises Management Console Audit](#on-premises-management-console-audit)
+
+- [Webhook Extended](#webhook-extended)
+
+- [Unicode support for certificate passphrases](#unicode-support-for-certificate-passphrases)
+
+### PLC operating mode detections (Public Preview)
+
+Users can now view PLC operating mode states, changes, and risks. The PLC Operating mode consists of the PLC logical Run state and the physical Key state, if a physical key switch exists on the PLC.
+
+This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the effects of such risks.
+This information also provides operational engineers with critical visibility into the operational mode of enterprise PLCs.
+
+#### What is an unsecure mode?
+
+If the Key state is detected as Program, or if the Run state is detected as either Remote or Program, the PLC is defined by Defender for IoT as *unsecure*.
+
+#### Visibility and risk assessment
+
+- Use the Device Inventory to view the PLC state of organizational PLCs, and contextual device information. Use the Device Inventory Settings dialog box to add this column to the Inventory.
+
+ :::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode.":::
+
+- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the Key state is detected as Program, or if the Run state is detected as either Remote or Program, the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
+
+ :::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
+
+- View all network PLC Run and Key State statuses by creating a Data Mining with PLC operating mode information.
+
+ :::image type="content" source="media/release-notes/data-mining-plc.png" alt-text="Data inventory screen showing PLC option.":::
+
+- Use the Risk Assessment Report to review the number of network PLCs in the unsecure mode, and additional information you can use to mitigate unsecure PLC risks.
+
+### PCAP API
+
+The new PCAP API lets the user retrieve PCAP files from the sensor via the on-premises management console with, or without direct access to the sensor itself.
+
+### On-premises Management Console audit
+
+Audit logs for the on-premises management console can now be exported to facilitate investigations into what changes were made, and by whom.
+
+### Webhook extended
+
+Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert and adds the following information to the report:
+
+- sensorID
+- sensorName
+- zoneID
+- zoneName
+- siteID
+- siteName
+- sourceDeviceAddress
+- destinationDeviceAddress
+- remediationSteps
+- handled
+- additionalInformation
+
+### Unicode support for certificate passphrases
+
+Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md#about-certificates).
+ ## April 2021 ### Work with automatic threat Intelligence updates (Public Preview)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 07/21/2022 Last updated : 08/07/2022 # What's new in Microsoft Defender for IoT?
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | ||| |**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Sensor connections restored after certificate rotation](#sensor-connections-restored-after-certificate-rotation)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) | ### Enterprise IoT and Defender for Endpoint integration in GA
For more information, see:
- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md) - [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+### Sensor connections restored after certificate rotation
+
+Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
+
+For more information, see [About certificates](how-to-deploy-certificates.md).
+ ### Support diagnostic log enhancements (Public preview) Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
This new functionality is available on the following alerts:
- Malware alerts, based on activity of the source device. (generated by the Malware engine). - Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
-## November 2021
-
-**Sensor software version**: 10.5.3
-
-The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT.
--- The on-premises management console, has a new [ServiceNow Integration API - ΓÇ£/external/v3/integration/ (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview).--- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.--- As part of our automated maintenance, archived alerts that are over 90 days old will now be automatically deleted.--- Many enhancements have been made to the exporting of alert metadata based on customer feedback.-
-## October 2021
-
-**Sensor software version**: 10.5.2
-
-The following feature enhancements are available with version 10.5.2 of Microsoft Defender for IoT.
--- [PLC operating mode detections (Public Preview)](#plc-operating-mode-detections-public-preview)--- [PCAP API](#pcap-api)--- [On-premises Management Console Audit](#on-premises-management-console-audit)--- [Webhook Extended](#webhook-extended)--- [Unicode support for certificate passphrases](#unicode-support-for-certificate-passphrases)-
-### PLC operating mode detections (Public Preview)
-
-Users can now view PLC operating mode states, changes, and risks. The PLC Operating mode consists of the PLC logical Run state and the physical Key state, if a physical key switch exists on the PLC.
-
-This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the effects of such risks.
-This information also provides operational engineers with critical visibility into the operational mode of enterprise PLCs.
-
-#### What is an unsecure mode?
-
-If the Key state is detected as Program or the Run state is detected as either Remote or Program the PLC is defined by Defender for IoT as *unsecure*.
-
-#### Visibility and risk assessment
--- Use the Device Inventory to view the PLC state of organizational PLCs, and contextual device information. Use the Device Inventory Settings dialog box to add this column to the Inventory.-
- :::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode.":::
--- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the Key state is detected as Program or the Run state is detected as either Remote or Program the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.-
- :::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
--- View all network PLC Run and Key State statuses by creating a Data Mining with PLC operating mode information.-
- :::image type="content" source="media/release-notes/data-mining-plc.png" alt-text="Data inventory screen showing PLC option.":::
--- Use the Risk Assessment Report to review the number of network PLCs in the unsecure mode, and additional information you can use to mitigate unsecure PLC risks.-
-### PCAP API
-
-The new PCAP API lets the user retrieve PCAP files from the sensor via the on-premises management console with, or without direct access to the sensor itself.
-
-### On-premises Management Console audit
-
-Audit logs for the on-premises management console can now be exported to facilitate investigations into what changes were made, and by who.
-
-### Webhook extended
-
-Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert and adds the following information to the report:
--- sensorID-- sensorName-- zoneID-- zoneName-- siteID-- siteName-- sourceDeviceAddress-- destinationDeviceAddress-- remediationSteps-- handled-- additionalInformation-
-### Unicode support for certificate passphrases
-
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md#about-certificates)
- ## Next steps [Getting started with Defender for IoT](getting-started.md)
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 06/29/2021+ Last updated : 08/05/2022 + # Get compliance data of Azure resources
PowerShell, a call to the REST API, or by using the
[Azure Policy Compliance Scan GitHub Action](https://github.com/marketplace/actions/azure-policy-compliance-scan). This scan is an asynchronous process.
+> [!NOTE]
+> Not all Azure resource providers support on-demand evaluation scans. For example, [Azure Virtual Network Manager (AVNM)](../../../virtual-network-manager/overview.md) currently doesn't support either manual triggers or the standard policy compliance evaluation cycle (daily scans).
+ #### On-demand evaluation scan - GitHub Action Use the
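+For example, the Azure CLI can start an on-demand scan (a sketch; the resource group name is a placeholder, and the scan itself still runs asynchronously):
+
+```bash
+# Trigger an on-demand compliance evaluation for the current subscription.
+az policy state trigger-scan
+
+# Or scope the scan to a single resource group; add --no-wait to return immediately.
+az policy state trigger-scan --resource-group "<resource-group-name>" --no-wait
+```
+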
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
When created, these compute resources are automatically part of your workspace,
### Supported VM series and sizes
+> [!NOTE]
+> H-series virtual machine series will be retired on August 31, 2022. Create compute instances and compute clusters with alternate VM sizes. Existing compute instances and clusters that use H-series virtual machines will not work after August 31, 2022.
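+
+As a sketch using the Azure Machine Learning CLI v2 (the cluster name, VM size, scale settings, and resource names below are placeholders, not prescribed values), a compute cluster with an alternate, non-H-series size can be created like this:
+
+```bash
+# Create a compute cluster that uses a non-H-series VM size.
+az ml compute create \
+  --name cpu-cluster \
+  --type AmlCompute \
+  --size Standard_D4s_v3 \
+  --min-instances 0 \
+  --max-instances 4 \
+  --resource-group "<resource-group>" \
+  --workspace-name "<workspace-name>"
+```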
+ When you select a node size for a managed compute resource in Azure Machine Learning, you can choose from among select VM sizes available in Azure. Azure offers a range of sizes for Linux and Windows for different workloads. To learn more, see [VM types and sizes](../virtual-machines/sizes.md). There are a few exceptions and limitations to choosing a VM size:
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
az ml job download --name <sweep-job> --output-name model
## Next steps * [Track an experiment](how-to-log-view-metrics.md)
-* [Deploy a trained model](how-to-deploy-and-where.md)
+* [Deploy a trained model](how-to-deploy-managed-online-endpoint-sdk-v2.md)
mysql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-server-logs-cli.md
az mysql flexible-server server-logs download --resource-group <myresourcegroup>
``` ## Next Steps-- To enable and disable Server logs from portal, you can refer to the [article].(./how-to-server-logs-portal.md)
+- To enable and disable Server logs from portal, you can refer to the [article.](./how-to-server-logs-portal.md)
- Learn more about [Configure slow logs using Azure CLI](./tutorial-query-performance-insights.md#configure-slow-query-logs-by-using-the-azure-cli)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 05/24/2022
[Azure Database for MySQL - Flexible Server](./overview.md) is a deployment mode that's designed to provide more granular control and flexibility over database management functions and configuration settings than does the Single Server deployment mode. The service currently supports community version of MySQL 5.7 and 8.0. This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## August 2022
+
+**Server logs for Azure Database for MySQL - Flexible Server**
+
+ Server logs let customers emit server logs to server storage space in file format, which you can later download. Slow query logs are supported with server logs, which can help customers with performance troubleshooting and query tuning. Customers can store logs for up to a week, or up to 7 GB of log data. You can configure and download them from the [Azure portal](./how-to-server-logs-portal.md) or the [Azure CLI](./how-to-server-logs-cli.md). [Learn more](./concepts-monitoring.md#server-logs)
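+
+As a sketch (the resource group, server, and log file names are placeholders), listing and downloading the log files with the Azure CLI looks like this:
+
+```bash
+# List the server log files that are currently available for download.
+az mysql flexible-server server-logs list \
+  --resource-group "<resource-group>" \
+  --server-name "<server-name>"
+
+# Download a specific log file by name, taken from the list output.
+az mysql flexible-server server-logs download \
+  --resource-group "<resource-group>" \
+  --server-name "<server-name>" \
+  --name "<log-file-name>"
+```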
+ ## June 2022 **Known Issues**
-On few servers where audit or slow logs are enabled, you may no longer see logs being uploaded to data sinks configured under diagnostics settings. Please verify whether your logs have the latest updated timestamp for the events, based on the [data sink](./tutorial-query-performance-insights.md#set-up-diagnostics) you have configured. If your server is affected by this issue, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) so that we can apply a quick fix on the server to resolve the issue. Alternatively, you can wait for our next deployment cycle, during which we will apply a permanent fix in all regions.
+On a few servers where audit or slow logs are enabled, you may no longer see logs being uploaded to the data sinks configured under diagnostics settings. Please verify whether your logs have the latest updated timestamp for the events, based on the [data sink](./tutorial-query-performance-insights.md#set-up-diagnostics) you've configured. If your server is affected by this issue, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) so that we can apply a quick fix on the server to resolve the issue. Alternatively, you can wait for our next deployment cycle, during which we'll apply a permanent fix in all regions.
## May 2022
On few servers where audit or slow logs are enabled, you may no longer see logs
Azure Database for MySQL ΓÇô Flexible Server Business Critical service tier is now generally available. Business Critical service tier is ideal for Tier 1 production workloads that require low latency, high concurrency, fast failover, and high scalability, such as gaming, e-commerce, and Internet-scale applications, to learn more about [Business Critical service Tier](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/announcing-azure-database-for-mysql-flexible-server-for-business/ba-p/3361718). - **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
- We are announcing the addition of new Burstable compute instances to support customersΓÇÖ auto-scaling compute requirements from 1 vCore up to 20 vCores. learn more about [Compute Option for Azure Database for MySQL - Flexible Server](./concepts-compute-storage.md).
+ We're announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. Learn more about [Compute Option for Azure Database for MySQL - Flexible Server](./concepts-compute-storage.md).
- **Known issues**
- - The Reserved instances (RI) feature in Azure Database for MySQL ΓÇô Flexible server is not working properly for the Business Critical service tier, after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
- - Private DNS integration details are not displayed on few Azure Database for MySQL Database flexible servers which have HA option enabled. This issue does not have any impact on availability of the server or name resolution. We are working on a permanent fix to resolve the issue and it will be available in the next deployment. Meanwhile, if you want to view the Private DNS Zone details, you can either search under [Private DNS zones](../../dns/private-dns-getstarted-portal.md) in the Azure portal or you can perform a [manual failover](concepts-high-availability.md#planned-forced-failover) of the HA enabled flexible server and refresh the Azure portal.
+ - The Reserved instances (RI) feature in Azure Database for MySQL – Flexible Server isn't working properly for the Business Critical service tier, after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we're currently working to fix the issue.
+ - Private DNS integration details aren't displayed on a few Azure Database for MySQL flexible servers that have the HA option enabled. This issue doesn't have any impact on availability of the server or name resolution. We're working on a permanent fix to resolve the issue and it will be available in the next deployment. Meanwhile, if you want to view the Private DNS Zone details, you can either search under [Private DNS zones](../../dns/private-dns-getstarted-portal.md) in the Azure portal or you can perform a [manual failover](concepts-high-availability.md#planned-forced-failover) of the HA enabled flexible server and refresh the Azure portal.
## April 2022
On few servers where audit or slow logs are enabled, you may no longer see logs
- **Deprecation of TLSv1 or TLSv1.1 protocols with Azure Database for MySQL - Flexible Server (8.0.28)**
- Starting version 8.0.28, MySQL community edition supports TLS protocol TLSv1.2 or TLSv1.3 only. Azure Database for MySQL ΓÇô Flexible Server will also stop supporting TLSv1 and TLSv1.1 protocols, to align with modern security standards. You will no longer be able to configure TLSv1 or TLSv1.1 from the server parameter blade for newly created resources as well as for resources created previously. The default will be TLSv1.2. Resources created before the upgrade will still support communication through TLS protocol TLSv1 or TLSv1.1 through 1 May 2022.
+ Starting with version 8.0.28, the MySQL community edition supports only the TLSv1.2 and TLSv1.3 protocols. Azure Database for MySQL – Flexible Server will also stop supporting the TLSv1 and TLSv1.1 protocols, to align with modern security standards. You'll no longer be able to configure TLSv1 or TLSv1.1 from the server parameter pane, for newly created resources as well as for resources created previously. The default will be TLSv1.2. Resources created before the upgrade will still support communication through TLS protocol TLSv1 or TLSv1.1 through 1 May 2022.
## March 2022
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Maps](../azure-maps/index.yml)
### Microsoft.Media
-Azure service: [Media Services](/media-services/)
+Azure service: [Media Services](/azure/media-services)
> [!div class="mx-tableFixed"] > | Action | Description |
sentinel Indicators Bulk File Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md
Here's an example ipv4-addr indicator using the JSON template.
This article has shown you how to manually bolster your threat intelligence by importing indicators gathered in flat files. Check out these links to learn how indicators power other analytics in Microsoft Sentinel. - [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)-- [Threat indicators for cyber threat intelligence in Microsoft Sentinel](/azure/architecture/example-scenario/dat)
+- [Threat indicators for cyber threat intelligence in Microsoft Sentinel](/azure/architecture/example-scenario/data/sentinel-threat-intelligence)
- [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md)
synapse-analytics Distribution Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/distribution-advisor.md
EXEC dbo.read_dist_recommendation;
go ```
-To see which queries were analyzed by DA, run [the e2e_queries_used_for_recommendations.sql script available for download from GitHub](https://github.com/microsoft/Azure_Synapse_Toolbox/blob/master/DistributionAdvisor/e2e_queries_used_for_recommendations.sql).
+To see which queries were analyzed by DA, run [the e2e_queries_used_for_recommendations.sql script available for download from GitHub](https://github.com/microsoft/Azure_Synapse_Toolbox/blob/master/Distribution_Advisor/e2e_queries_used_for_recommendations.sql).
### 2b. Run the advisor on selected queries
But not the second resultset containing the table change T-SQL commands:
- Check the output of `CommandToInvokeAdvisorString` above.
+ - Remove queries that may no longer be valid, which may have been added from either the hand-selected queries or from the DMV, by editing the `WHERE` clause in [Queries Considered by DA](https://github.com/microsoft/Azure_Synapse_Toolbox/blob/master/Distribution_Advisor/e2e_queries_used_for_recommendations.sql).
### 3. Error during post-processing of recommendation output
Invalid length parameter passed to the LEFT or SUBSTRING function.
##### 3b. Mitigation: Ensure that you have the most up to date version of the stored procedure from GitHub:
+ - [e2e_queries_used_for_recommendations.sql script available for download from GitHub](https://github.com/microsoft/Azure_Synapse_Toolbox/blob/master/Distribution_Advisor/e2e_queries_used_for_recommendations.sql)
- [CreateDistributionAdvisor_PublicPreview.sql script available for download from GitHub](https://github.com/microsoft/Azure_Synapse_Toolbox/blob/master/Distribution_Advisor/CreateDistributionAdvisor_PublicPreview.sql)
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The table below contains the networking parameters.
> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. | > | | | | | > | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Bastion subnet | Mandatory | For brown field deployments. |
-> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | | | | |
+> | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown field deployments using the web app |
+> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments using the web app |
+
+> [!NOTE]
+> When using an existing subnet for the web app, the subnet must be empty, must be in the same region as the resource group being deployed, and must be delegated to `Microsoft.Web/serverFarms`.
+
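+As a sketch (all names and the address prefix are placeholders), an empty subnet delegated to App Service can be created with the Azure CLI like this:
+
+```bash
+# Create an empty subnet and delegate it to Microsoft.Web/serverFarms for the web app.
+az network vnet subnet create \
+  --resource-group "<resource-group>" \
+  --vnet-name "<management-vnet>" \
+  --name "<webapp-subnet>" \
+  --address-prefixes 10.10.20.192/26 \
+  --delegations Microsoft.Web/serverFarms
+```
+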
### Deployer Virtual Machine Parameters
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Start by importing the SAP Deployment Automation Framework GitHub repository int
Navigate to the Repositories section and choose Import a repository, import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true)
-If you're unable to import a repository, you can create the 'sap-automation' repository and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
+If you're unable to import a repository, you can create the 'sap-automation' repository, and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
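+
+If you prefer the command line, the import can also be done with the Azure DevOps CLI (a sketch; the organization and project values are placeholders, and the target repository must exist and be empty):
+
+```bash
+# Point the CLI at your organization and project (placeholders).
+az devops configure --defaults organization=https://dev.azure.com/<your-organization> project=<your-project>
+
+# Create an empty repository and import the public GitHub repository into it.
+az repos create --name sap-automation
+az repos import create \
+  --git-source-url https://github.com/Azure/sap-automation.git \
+  --repository sap-automation
+```
+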
### Create the repository for manual import
Optionally you may copy the sample configuration files from the 'samples/WORKSPA
Push the changes back to the repository by selecting the source control icon and providing a message about the change, for example: "Import of sample configurations", and pressing Ctrl+Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
+## Set up the web app
+
+The automation framework optionally provisions a web app as a part of the control plane to assist with the deployment of SAP workload zones and systems. If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands:
+
+# [Linux](#tab/linux)
+Replace MGMT with your environment as necessary.
+```bash
+# Write a Graph API permission manifest. 00000003-0000-0000-c000-000000000000 is Microsoft Graph,
+# and the resourceAccess entry requests the delegated User.Read scope.
+echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
+
+# Create the app registration and capture its application (client) ID.
+TF_VAR_app_registration_app_id=$(az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access @manifest.json --query "appId" | tr -d '"')
+
+echo $TF_VAR_app_registration_app_id
+
+# Create a client secret for the app registration; save the password value from the output.
+az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
+
+# Remove the temporary manifest file.
+rm manifest.json
+```
+# [Windows](#tab/windows)
+Replace MGMT with your environment as necessary.
+```powershell
+Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'
+
+$TF_VAR_app_registration_app_id=(az ad app create --display-name MGMT-webapp-registration --enable-id-token-issuance true --sign-in-audience AzureADMyOrg --required-resource-access ./manifest.json --query "appId").Replace('"',"")
+
+echo $TF_VAR_app_registration_app_id
+
+az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"
+
+rm ./manifest.json
+```
+
+Save the app registration ID and password values for later.
++ ## Create Azure Pipelines Azure Pipelines are implemented as YAML files and they're stored in the 'deploy/pipelines' folder in the repository.
Create the SAP system deployment pipeline by choosing _New Pipeline_ from the Pi
Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP system deployment (infrastructure)' by choosing 'Rename/Move' from the three-dot menu on the right.
+## SAP web app deployment pipeline
+
+Create the SAP web app deployment pipeline by choosing _New Pipeline_ from the Pipelines section, and select 'Azure Repos Git' as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
+
+| Setting | Value |
+| - | |
+| Branch | main |
+| Path | `deploy/pipelines/21-deploy-web-app.yaml` |
+| Name | Web app deployment |
+
+Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Web app deployment' by choosing 'Rename/Move' from the three-dot menu on the right.
+
+> [!NOTE]
+> In order for the web app to function correctly, the SAP workload zone deployment and SAP system deployment pipelines must be named as specified.
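+
+Alternatively, the same pipeline can be created with the Azure DevOps CLI (a sketch; it assumes the organization and project defaults are already configured and that the 'sap-automation' repository exists):
+
+```bash
+# Create the web app deployment pipeline from the YAML definition in the repository.
+az pipelines create \
+  --name "Web app deployment" \
+  --repository sap-automation \
+  --repository-type tfsgit \
+  --branch main \
+  --yml-path deploy/pipelines/21-deploy-web-app.yaml
+```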
+ ## SAP software acquisition pipeline Create the SAP software acquisition pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
The pipelines use a custom task to perform cleanup activities post deployment. T
:::image type="content" source="./media/automation-devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram showing the creation of the Personal Access Token (PAT).":::
-1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_ and _Read & write_ is selected for _Code_. Write down the created token value.
+1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_, _Read & write_ is selected for _Code_, _Read & execute_ is selected for _Build_, and _Read, create, & manage_ is selected for _Variable Groups_. Write down the created token value.
:::image type="content" source="./media/automation-devops/automation-new-pat.png" alt-text="Diagram showing the attributes of the Personal Access Token (PAT).":::
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| `POOL` | `<Agent Pool name>` | Use the Agent pool defined in the previous step. | | `advice.detachedHead` | false | | | `skipComponentGovernanceDetection` | true | |
-| `tf_version` | 1.1.7 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| `tf_version` | 1.2.6 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
Save the variables.
+Alternatively, you can use the Azure DevOps CLI to set up the variable group:
+
+```bash
+# Bash variable names can't contain hyphens, so use underscores here.
+s_user="<SAP Support user account name>"
+s_password="<SAP Support user password>"
+
+az devops login
+
+az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_KEY_CHECKING=false Deployment_Configuration_Path=WORKSPACES Branch=main S-Username=$s_user S-Password=$s_password --output yaml
+
+```
+ > [!NOTE] > Remember to assign permissions for all pipelines using _Pipeline permissions_.
As each environment may have different deployment credentials you'll need to cre
Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables:
-| Variable | Value | Notes |
-| | | -- |
-| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. |
-| ARM_CLIENT_ID | Enter the Service principal application ID. | |
-| ARM_CLIENT_SECRET | Enter the Service principal password. | Change variable type to secret by clicking the lock icon |
-| ARM_SUBSCRIPTION_ID | Enter the target subscription ID. | |
-| ARM_TENANT_ID | Enter the Tenant ID for the service principal. | |
-| AZURE_CONNECTION_NAME | Previously created connection name. | |
-| sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. |
-| FENCING_SPN_ID | Enter the service principal application ID for the fencing agent. | Required for highly available deployments. |
-| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments. |
-| FENCING_SPN_TENANT | Enter the service principal tenant ID for the fencing agent. | Required for highly available deployments. |
+| Variable | Value | Notes |
+| - | | -- |
+| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. |
+| ARM_CLIENT_ID | Enter the Service principal application ID. | |
+| ARM_CLIENT_SECRET | Enter the Service principal password. | Change variable type to secret by clicking the lock icon |
+| ARM_SUBSCRIPTION_ID | Enter the target subscription ID. | |
+| ARM_TENANT_ID | Enter the Tenant ID for the service principal. | |
+| AZURE_CONNECTION_NAME | Previously created connection name. | |
+| sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. |
+| FENCING_SPN_ID | Enter the service principal application ID for the fencing agent. | Required for highly available deployments. |
+| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments. |
+| FENCING_SPN_TENANT | Enter the service principal tenant ID for the fencing agent. | Required for highly available deployments. |
+| `PAT` | `<Personal Access Token>` | Use the personal access token defined in the previous step. |
+| `POOL` | `<Agent Pool name>` | Use the agent pool defined in the previous step. |
+| TF_VAR_app_registration_app_id | App registration application ID | Required if deploying the web app |
+| TF_VAR_webapp_client_secret | App registration password | Required if deploying the web app |
Save the variables. > [!NOTE] > Remember to assign permissions for all pipelines using _Pipeline permissions_. >
+> For use with the web app, assign the administrator role to the build service using _Security_.
+>
> You can use the clone functionality to create the next environment variable group. + ## Create a service connection To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true)
You must use the Deployer as a [self-hosted agent for Azure DevOps](/azure/devop
Newly created pipelines might not be visible in the default view. Select on recent tab and go back to All tab to view the new pipelines.
-Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library and choose "Run" to deploy the control plane.
+Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library, and choose "Run" to deploy the control plane. Make sure to check "deploy the web app infrastructure" if you want to set up the web app.
Wait for the deployment to finish.
Accept the license and when prompted for server URL, enter the URL you captured
When prompted, enter the application pool name you created in the previous step. Accept the default agent name and the default work folder name. The agent will now be configured and started. +
+## Deploy the web app
+
+Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
+
+Before running the Deploy web app pipeline, update the reply URL values for the app registration. As a result of running the SAP workload zone deployment pipeline, the part of the web app URL you need is stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. Copy this value and use it in the following command:
+
+# [Linux](#tab/linux)
+
+```bash
+webapp_url_base=<WEBAPP_URL_BASE>
+az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback
+```
+# [Windows](#tab/windows)
+
+```powershell
+$webapp_url_base="<WEBAPP_URL_BASE>"
+az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback
+```
+
+After updating the reply URLs, run the pipeline.
+
+By default, the web app accepts no inbound public internet traffic except from the deployer virtual network. To allow additional access, go to the Azure portal. In the deployer resource group, open the App Service resource. Under settings on the left-hand side, select Networking, and then select Access restriction. Add any allow or deny rules you need. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](/azure/app-service/app-service-ip-restrictions).
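If you prefer the CLI for this step, an access restriction rule can be added along the following lines. This is a sketch only; the resource group, app name, rule name, and IP range are placeholders.

```bash
# Minimal sketch: allow one extra address range to reach the web app.
# Resource group, app name, rule name, and CIDR are placeholders.
az webapp config access-restriction add \
  --resource-group <deployerResourceGroup> \
  --name <appServiceName> \
  --rule-name AllowCorpNetwork \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 300
```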
+
+You also need to grant Reader permissions to the app service's system-assigned managed identity. Navigate to the App Service resource. On the left-hand side, select "Identity". On the "System assigned" tab, select "Azure role assignments" > "Add role assignment". Select "Subscription" as the scope and "Reader" as the role, and then select Save. Without this step, the web app dropdown functionality won't work.
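The same role assignment can also be made from the CLI. The following is a minimal sketch; the resource group, app name, and subscription ID are placeholders.

```bash
# Minimal sketch: give the app service's system-assigned identity Reader on the subscription.
# Resource group, app name, and subscription ID are placeholders.
principal_id=$(az webapp identity show \
  --resource-group <deployerResourceGroup> \
  --name <appServiceName> \
  --query principalId --output tsv)

az role assignment create \
  --assignee-object-id "$principal_id" \
  --assignee-principal-type ServicePrincipal \
  --role Reader \
  --scope /subscriptions/<subscriptionId>
```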
+
+You should now be able to visit the web app, and use it to deploy SAP workload zones and SAP system infrastructure.
+ ## Next step > [!div class="nextstepaction"]
virtual-machines Automation Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-webapp.md
+
+ Title: Configure a Deployer UX Web Application for SAP Deployment Automation Framework
+description: Configure a web app as part of the control plane to help create and deploy SAP workload zones and systems on Azure.
+++ Last updated : 06/21/2022++++
+# Configure the Control Plane UX Web Application
+
+As a part of the SAP automation framework control plane, you can optionally create an interactive web application that will assist you in creating the required configuration files and deploying SAP workload zones and systems using Azure DevOps Pipelines.
++
+## Create an app registration
+
+If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands:
+
+# [Linux](#tab/linux)
+Replace MGMT with your environment as necessary.
+```bash
+echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
+
+TF_VAR_app_registration_app_id=$(az ad app create \
+ --display-name MGMT-webapp-registration \
+ --enable-id-token-issuance true \
+ --sign-in-audience AzureADMyOrg \
+    --required-resource-accesses @manifest.json \
+ --query "appId" | tr -d '"')
+
+TF_VAR_webapp_client_secret=$(az ad app credential reset \
+ --id $TF_VAR_app_registration_app_id --append \
+ --query "password" | tr -d '"')
+
+rm manifest.json
+```
+# [Windows](#tab/windows)
+Replace MGMT with your environment as necessary.
+```powershell
+Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'
+
+$TF_VAR_app_registration_app_id=(az ad app create `
+    --display-name MGMT-webapp-registration `
+ --enable-id-token-issuance true `
+ --sign-in-audience AzureADMyOrg `
+ --required-resource-accesses ./manifest.json `
+ --query "appId").Replace('"',"")
+
+$TF_VAR_webapp_client_secret=(az ad app credential reset `
+ --id $TF_VAR_app_registration_app_id --append `
+ --query "password").Replace('"',"")
+
+rm ./manifest.json
+```
++
+## Deploy via Azure DevOps (pipelines)
+
+For full instructions on setting up the web app using Azure DevOps, see [Use SAP Deployment Automation Framework from Azure DevOps Services](automation-configure-devops.md)
+
+### Summary of steps required to set up the web app before deploying the control plane:
+1. Add the web app deployment pipeline (deploy/pipelines/21-deploy-web-app.yaml).
+2. Add the variables TF_VAR_app_registration_app_id and TF_VAR_webapp_client_secret to your environment-specific variable group before deployment.
+3. Assign the administrator role to the build service using the Security tab in your environment-specific variable group.
+4. Check the box next to "deploy the web app infrastructure" when running the deploy control plane pipeline.
+
+### Summary of steps required to access the web app after deploying the control plane:
+1. Update the app registration reply URLs.
+2. Assign the reader role with the subscription scope to the app service system assigned managed identity.
+3. Run the web app deployment pipeline.
+4. (Optionally) add an additional access policy to the app service.
+
+## Deploy via Azure CLI (Cloud Shell)
+
+For full instructions on setting up the web app using the Azure CLI, see [Deploy the control plane](automation-deploy-control-plane.md)
+
+### Summary of steps required to set up the web app before deploying the control plane:
+1. Export the environment variables TF_VAR_app_registration_app_id, TF_VAR_webapp_client_secret, and TF_VAR_use_webapp="true".
+
+### Summary of steps required to access the web app after deploying the control plane:
+1. Update the app registration reply URLs.
+2. Assign the reader role with the subscription scope to the app service system assigned managed identity.
+3. Generate a zip file of the web app code.
+4. Deploy the software to the app service.
+5. Configure the application settings.
+6. (Optionally) add an additional access policy to the app service.
++
+## Using the web app
+
+The web app allows you to create SAP workload zone objects and system infrastructure objects. These are essentially another representation of the Terraform configuration file.
+If deploying using Azure Pipelines, you can deploy these workload zones and system infrastructures directly from the web app.
+If deploying using the Azure CLI, you can download the parameter file for any landscape or system object you create, and use that in your command line deployments.
+
+### Creating a landscape or system object from scratch
+1. Navigate to the "Workload zones" or "Systems" tab at the top of the website.
+2. Click "Create New" in the bottom left corner.
+3. Fill out the required parameters in the "Basic" and "Advanced" tabs, and any additional parameters you desire.
+4. Certain parameters will be dropdowns populated with existing Azure resources.
+    * If no results are shown for a dropdown, you might need to specify another dropdown first before any options appear. Or, see step 2 in the summary above regarding the system-assigned managed identity.
+ - The subscription parameter must be specified before any other dropdown functionality is enabled
+ - The network_arm_id parameter must be specified before any subnet dropdown functionality is enabled
+5. Select "Submit" in the bottom left-hand corner.
+
+### Creating a workload zone or system object from a file
+1. Navigate to the "File" tab at the top of the website.
+2. Your options are
+    * Create a new file from scratch in the browser.
+    * Import an existing .tfvars file, and (optionally) edit it before saving.
+ * Use an existing template, and (optionally) edit it before saving.
+3. Make sure your file conforms to the correct naming conventions.
+4. Next to the file you would like to convert to a workload zone or system object, click "Convert".
+5. The workload zone or system object will appear in its respective tab.
+
+### Deploying a workload zone or system object (Azure DevOps Pipelines deployment)
+1. Navigate to the Workload zones or Systems tab.
+2. Next to the workload zone or system you would like to deploy, click "Deploy".
+ * If you would like to deploy a file, first convert it to a workload zone or system object.
+3. Specify the necessary parameters, and confirm it's the correct object.
+4. Click "Deploy".
+5. The web app will automatically generate a '.tfvars' file from the object, update your Azure DevOps repository, and kick off the workload zone or system (infrastructure) pipeline. You can monitor the deployment in the Azure DevOps Portal.
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
Optionally assign the following permissions to the Service Principal:
az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName> ``` +
+## Prepare the webapp
+This step is optional. If you would like a browser-based UX to assist in the configuration of SAP workload zones and systems, run the following commands before deploying the control plane.
+
+# [Linux](#tab/linux)
+
+```bash
+echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
+
+region_code=WEEU
+
+export TF_VAR_app_registration_app_id=$(az ad app create \
+ --display-name ${region_code}-webapp-registration \
+ --enable-id-token-issuance true \
+ --sign-in-audience AzureADMyOrg \
+    --required-resource-accesses @manifest.json \
+ --query "appId" | tr -d '"')
+
+export TF_VAR_webapp_client_secret=$(az ad app credential reset \
+ --id $TF_VAR_app_registration_app_id --append \
+ --query "password" | tr -d '"')
+
+export TF_VAR_use_webapp=true
+rm manifest.json
+
+```
+# [Windows](#tab/windows)
+
+```powershell
+
+Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'
+
+$region_code="WEEU"
+
+$env:TF_VAR_app_registration_app_id = (az ad app create `
+ --display-name $region_code-webapp-registration `
+ --enable-id-token-issuance true `
+ --sign-in-audience AzureADMyOrg `
+ --required-resource-accesses ./manifest.json `
+ --query "appId").Replace('"',"")
+
+$env:TF_VAR_webapp_client_secret=(az ad app credential reset `
+ --id $env:TF_VAR_app_registration_app_id --append `
+ --query "password").Replace('"',"")
+
+$env:TF_VAR_use_webapp="true"
+
+del manifest.json
+
+```
+
+# [Azure DevOps](#tab/devops)
+
+It is currently not possible to perform this action from Azure DevOps.
++++ ## Deploy the control plane The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder. The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
-Running the command below will create the Deployer, the SAP Library and add the Service Principal details to the deployment key vault.
+Running the command below will create the Deployer, the SAP Library and add the Service Principal details to the deployment key vault. If you followed the web app setup in the step above, this command will also create the infrastructure to host the application.
# [Linux](#tab/linux)
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
--spn_secret "${spn_secret}" \ --tenant_id "${tenant_id}" \ --auto-approve
- ```
+```
# [Windows](#tab/windows)
xcopy /E sap-automation\samples\WORKSPACES WORKSPACES
``` - ```powershell
New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP00-INFRAS
``` - > [!NOTE] > Be sure to replace the sample value `<subscriptionID>` with your subscription ID. > Replace the `<appID>`, `<password>`, `<tenant>` values with the output values of the SPN creation # [Azure DevOps](#tab/devops)
-Open (https://dev.azure.com) and and go to your Azure DevOps project.
+Open (https://dev.azure.com) and go to your Azure DevOps project.
> [!NOTE] > Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
cd sap-automation/deploy/scripts
The script will install Terraform and Ansible and configure the deployer. +
+## Deploy the web app software
+
+If you would like to use the web app, follow the steps below. If not, ignore this section.
+
+The web app resource can be found in the deployer resource group. In the Azure portal, select resource groups in your subscription. The deployer resource group is named something like MGMT-[region]-DEP00-INFRASTRUCTURE. Inside the deployer resource group, locate the App Service resource, named something like mgmt-[region]-dep00-sapdeployment123. Open the App Service and copy the URL listed; it has the format https://mgmt-[region]-dep00-sapdeployment123.azurewebsites.net. This is the value for webapp_url below.
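If you prefer the CLI to the portal, the App Service host name can also be looked up directly. The following is a sketch in which the resource group and app name are placeholders.

```bash
# Minimal sketch: look up the app service URL with the CLI instead of the portal.
# Resource group and app name are placeholders.
az webapp show \
  --resource-group MGMT-<region>-DEP00-INFRASTRUCTURE \
  --name <appServiceName> \
  --query defaultHostName --output tsv
```

Prefix the returned host name with https:// to form the webapp_url value used below.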
+
+The following commands configure the application URLs, generate a zip file of the web app code, deploy the software to the App Service, and configure the application settings.
+
+# [Linux](#tab/linux)
+
+```bash
+
+webapp_url=<webapp_url>
+az ad app update \
+ --id $TF_VAR_app_registration_app_id \
+ --web-home-page-url ${webapp_url} \
+ --web-redirect-uris ${webapp_url}/ ${webapp_url}/.auth/login/aad/callback
+
+```
+# [Windows](#tab/windows)
+
+```powershell
+
+$webapp_url="<webapp_url>"
+az ad app update `
+ --id $TF_VAR_app_registration_app_id `
+ --web-home-page-url $webapp_url `
+ --web-redirect-uris $webapp_url/ $webapp_url/.auth/login/aad/callback
+
+```
+# [Azure DevOps](#tab/devops)
+
+It is currently not possible to perform this action from Azure DevOps.
++
+> [!TIP]
+> Perform the following task from the deployer.
+```bash
+
+cd ~/Azure_SAP_Automated_Deployment/sap-automation/Webapp/AutomationForm
+
+dotnet build
+dotnet publish --configuration Release
+
+cd bin/Release/netcoreapp3.1/publish/
+
+sudo apt install zip
+zip -r deploymentfile.zip .
+
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path deploymentfile.zip
+
+```
+```bash
+
+az webapp config appsettings set -g <group-name> -n <app-name> --settings \
+IS_PIPELINE_DEPLOYMENT=false
+
+```
++
+## Accessing the web app
+
+By default, the web app accepts no inbound public internet traffic except from the deployer virtual network. To allow additional access, go to the Azure portal. In the deployer resource group, find the web app. Under settings on the left-hand side, select Networking, and then select Access restriction. Add any allow or deny rules you need. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](/azure/app-service/app-service-ip-restrictions).
+
+You also need to grant Reader permissions to the app service's system-assigned managed identity. Navigate to the App Service resource. On the left-hand side, select "Identity". On the "System assigned" tab, select "Azure role assignments" > "Add role assignment". Select "Subscription" as the scope and "Reader" as the role, and then select Save. Without this step, the web app dropdown functionality will not work.
+
+You can sign in and visit the web app by following the URL you copied earlier, or by clicking browse inside the App Service resource. With the web app, you can configure SAP workload zones and system infrastructure. Click download to obtain a parameter file of the workload zone or system you specified, for use in the later deployment steps.
++ ## Next step > [!div class="nextstepaction"] > [Configure SAP Workload Zone](automation-configure-workload-zone.md)--
virtual-wan Expressroute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/expressroute-powershell.md
If you have sites connected to a Virtual WAN VPN gateway in the same virtual hub
### To change gateway size
-In the following example, an ExpressRoute gateway is modified to 3 scale units.
+In the following example, an ExpressRoute gateway is resized to 3 scale units.
```azurepowershell-interactive Set-AzExpressRouteGateway -ResourceGroupName "testRG" -Name "testergw" -MinScaleUnits 3