Updates from: 01/26/2023 02:13:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Previously updated : 04/20/2022 Last updated : 01/25/2023
We also have the ability to remove, export or modify specific data should the Gl
## Do I require a license to use Entra Permissions Management? Yes, as of July 1st, 2022, new customers must acquire a free 45-day trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement).
-
+
+## How is Permissions Management priced?
+
+Permissions Management is priced at $125 per resource/year ($10.40 per resource/month). Permissions Management requires licenses for workloads, which include any resource that uses compute or memory.
+
+## Do I need to pay for all resources?
+
+Although Permissions Management supports all resources, Microsoft only requires licenses for certain resources per cloud. To learn more about billable resources, see [View billable resources listed in your authorization system](product-data-billable-resources.md).
+
+## How do I figure out how many resources I have?
+
+To find out how many resources you have across your multicloud infrastructure, view the **Billable Resources** tab in Permissions Management.
+ ## What do I do if I'm using the Public Preview version of Entra Permissions Management? If you're using the Public Preview version of Entra Permissions Management, your current deployment(s) will continue to work through October 1st.
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
Previously updated : 02/23/2022 Last updated : 01/25/2023
The Permissions Management **Dashboard** provides an overview of the authorizati
The **Permission Creep Index (PCI)** chart updates to display information about the accounts and folders you selected. The number of days since the information was last updated displays in the upper right corner.
+ >[!NOTE]
+ >Default and GCP-managed service accounts are not included in the PCI calculation.
+ 1. In the Permission Creep Index (PCI) graph, select a bubble. The bubble displays the number of identities that are considered high-risk.
active-directory Product Data Billable Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-billable-resources.md
+
+ Title: View current billable resources in your authorization systems
+description: How to view current billable resources in your authorization system in Permissions Management.
+++++++ Last updated : 01/25/2023+++
+# View billable resources listed in your authorization system
+
+Gain insight into current billable resources listed in your authorization system. In Microsoft Entra Permissions Management, a billable resource is defined as a cloud service that uses compute or memory and requires a license. The Permissions Management Billable Resources tab shows you which resources are in your authorization system, and how many of them you're being billed for.
+
+Here's the current list of resources per cloud provider. This list is subject to change as cloud providers add more services in the future.
++
+## View resources in your authorization system
+
+1. To access your billable resource information, from the Permissions Management home page, select **Settings** (the gear icon).
+1. Select the **Billable Resources** tab.
+1. Select your Authorization System:
+
+ - **AWS** for Amazon Web Services.
+ - **Azure** for Microsoft Azure.
+ - **GCP** for Google Cloud Platform.
+
+ The interface displays which resources you have in your authorization system, per category.
+
+1. To change the columns displayed in the table, select **Columns**, and then select the information you want to display.
+
+ - To discard your changes, select **Reset to default**.
++
+## Next steps
+
+- For information about viewing and configuring settings for collecting data from your authorization system and its associated accounts, see [View and configure settings for data collection](product-data-sources.md).
active-directory Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md
- Title: Display an inventory of created resources and licenses for your authorization system
-description: How to display an inventory of created resources and licenses for your authorization system in Permissions Management.
------- Previously updated : 02/23/2022---
-# Display an inventory of created resources and licenses for your authorization system
-
-You can use the **Inventory** dashboard in Permissions Management to display an inventory of created resources and licensing information for your authorization system and its associated accounts.
-
-## View resources created for your authorization system
-
-1. To access your inventory information, in the Permissions Management home page, select **Settings** (the gear icon).
-1. Select the **Inventory** tab, select the **Inventory** subtab, and then select your authorization system type:
-
- - **AWS** for Amazon Web Services.
- - **Azure** for Microsoft Azure.
- - **GCP** for Google Cloud Platform.
-
- The **Inventory** tab displays information pertinent to your authorization system type.
-
-1. To change the columns displayed in the table, select **Columns**, and then select the information you want to display.
-
- - To discard your changes, select **Reset to default**.
-
-## View the number of licenses associated with your authorization system
-
-1. To access licensing information about your data sources, in the Permissions Management home page, select **Settings** (the gear icon).
-
-1. Select the **Inventory** tab, select the **Licensing** subtab, and then select your authorization system type.
-
- The **Licensing** table displays the following information pertinent to your authorization system type:
-
- - The names of your accounts in the **Authorization system** column.
- - The number of **Compute** licenses.
- - The number of **Serverless** licenses.
- - The number of **Compute containers**.
- - The number of **Databases**.
- - The **Total number of licenses**.
--
-## Next steps
--- For information about viewing and configuring settings for collecting data from your authorization system and its associated accounts, see [View and configure settings for data collection](product-data-sources.md).
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
Title: View and configure settings for data collection from your authorization system in Permissions Management
-description: How to view and configure settings for collecting data from your authorization system in Permissions Management.
+ Title: View and configure settings for data collection
+description: How to view and configure settings for collecting data from your authorization system.
Previously updated : 02/23/2022 Last updated : 01/25/2023
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
Title: Generate and download the Permissions analytics report in Permissions Management
-description: How to generate and download the Permissions analytics report in Permissions Management.
+ Title: View and download the Permissions Analytics Report in Permissions Management
+description: How to view and download the Permissions Analytics Report in Permissions Management.
Previously updated : 01/20/2023 Last updated : 01/25/2023
-# Generate and download the Permissions analytics report
-
-This article describes how to generate and download the **Permissions analytics report** in Permissions Management for AWS, Azure, and GCP. You can generate the report in Excel format, and also as a PDF.
--
-## Generate the Permissions analytics report
-
-1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
-
- The **Systems Reports** subtab displays a list of reports in the **Reports** table.
-1. Select **Permissions Analytics Report** from the list. To download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
-
- The following message displays: **Successfully Started To Generate On Demand Report.**
-
-1. For detailed information in the report, select the right arrow next to one of the following categories. Or, select the required category under the **Findings** column.
-
- - **AWS**
- - Inactive Identities
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - Inactive Groups
- - Super Identities
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - Over-Provisioned Active Identities
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - PCI Distribution
- - Privilege Escalation
- - Users
- - Roles
- - Resources
- - S3 Bucket Encryption
- - Unencrypted Buckets
- - SSE-S3 Buckets
- - S3 Buckets Accessible Externally
- - EC2 S3 Buckets Accessibility
- - Open Security Groups
- - Identities That Can Administer Security Tools
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - Identities That Can Access Secret Information
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - Cross-Account Access
- - External Accounts
- - Roles That Allow All Identities
- - Hygiene: MFA Enforcement
- - Hygiene: IAM Access Key Age
- - Hygiene: Unused IAM Access Keys
- - Exclude From Reports
- - Users
- - Roles
- - Resources
- - Serverless Functions
- - Groups
- - Security Groups
- - S3 Buckets
--
-1. Select a category and view the following columns of information:
-
- - **User**, **Role**, **Resource**, **Serverless Function Name**: Displays the name of the identity.
- - **Authorization System**: Displays the authorization system to which the identity belongs.
- - **Domain**: Displays the domain name to which the identity belongs.
- - **Permissions**: Displays the maximum number of permissions that the identity can be granted.
- - **Used**: Displays how many permissions that the identity has used.
- - **Granted**: Displays how many permissions that the identity has been granted.
- - **PCI**: Displays the permission creep index (PCI) score of the identity.
- - **Date Last Active On**: Displays the date that the identity was last active.
- - **Date Created On**: Displays the date when the identity was created.
+# View and download the Permissions Analytics Report
+This article describes how to view and download the **Permissions Analytics Report** in Permissions Management for AWS, Azure, and GCP authorization systems.
+
+>[!NOTE]
+>The Permissions Analytics Report can be downloaded in Excel and PDF formats.
+
+## View the Permissions Analytics Report in the Permissions Management UI
+
+You can view the Permissions Analytics Report information directly in the Permissions Management UI.
+
+1. In Permissions Management, select **Reports** in the navigation menu.
+2. Locate the **Permissions Analytics Report** in the list, then select it.
+3. View detailed report information from the list of categories that are displayed.
+ >[!NOTE]
+ > Categories will vary depending on which Authorization System you are viewing.
+
+4. To view more detailed information about each category, select the drop-down arrow next to the category name.
++
+## Download the Permissions Analytics Report in Excel format
+
+1. From the Permissions Management home page, select the **Reports** tab, then select the **Systems Reports** subtab.
+
+ The **Systems Reports** subtab displays a list of report names in the **Reports** table.
+2. Locate the **Permissions Analytics Report** in the list.
+3. To download the report in Excel format, click on the ellipses **(...)**, then select **Generate & Download**.
+
+ The Permissions Analytics Report screen is displayed.
+4. Click on **Report Format** and make sure that **XLSX** is selected.
+5. Click on **Schedule** and, if you want to download this report regularly, select how often you want it downloaded. You can also leave this at the default setting of **None**.
+6. Click on **Authorization Systems** and select which system you want to download the report for (AWS, Azure, or GCP).
+ >[!NOTE]
+ > To download a report for all Authorization Systems, check the **Collate** box. This will combine all selected Authorization Systems into one report.
+7. Click **Save**.
+
+ The following message displays: **Report has been created**.
+
+ Once the Excel file is generated, the report is automatically sent to your email.
+
+## Download the Permissions Analytics Report in PDF format
+
+1. From the Permissions Management home page, select the **Reports** tab, then select the **Systems Reports** subtab.
+
+ The **Systems Reports** subtab displays a list of report names in the **Reports** table.
+2. Locate the **Permissions Analytics Report** in the list, then select it.
+3. Select which Authorization System you want to generate the PDF download for (AWS, Azure, or GCP).
+ >[!NOTE]
+ > The PDF can only be downloaded for one Authorization System at a time. If more than one Authorization System is selected, the **Export PDF** button will be disabled.
+4. To download the report in PDF format, click on **Export PDF**.
+
+ The following message displays: **Successfully started to generate PDF report**.
+
+ Once the PDF is generated, the report is automatically sent to your email.
<!## Add and remove tags in the Permissions analytics report
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Allow removing attributes from flowing from AD to Azure AD |● |● | | Allow advanced customization for attribute flows |● | | | Support for password writeback |● |● |
-| Support for device writeback|ΓùÅ |Customers should use [Cloud kerberose trust](https://learn.microsoft.com/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust?tabs=intune) for this moving forward|
+| Support for device writeback|ΓùÅ |Customers should use [Cloud Kerberos trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust?tabs=intune) for this moving forward|
| Support for group writeback|● | | | Support for merging user attributes from multiple domains|● | | | Azure AD Domain Services support|● | |
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
description: Use filter for devices in Conditional Access to enhance security po
Previously updated : 04/28/2022 Last updated : 01/25/2023
When creating Conditional Access policies, administrators have asked for the abi
## Common scenarios
-There are multiple scenarios that organizations can now enable using filter for devices condition. Below are some core scenarios with examples of how to use this new condition.
+There are multiple scenarios that organizations can now enable using filter for devices condition. The following scenarios provide examples of how to use this new condition.
- **Restrict access to privileged resources**. For this example, let's say you want to allow access to Microsoft Azure Management from a user who is assigned the privileged role of Global Administrator, has satisfied multifactor authentication, and is accessing from a device that is a [privileged or secure admin workstation](/security/compass/privileged-access-devices) attested as compliant. For this scenario, organizations would create two Conditional Access policies: - Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
Setting extension attributes is made possible through the Graph API. For more in
### Filter for devices Graph API
-The filter for devices API is available in Microsoft Graph v1.0 endpoint and can be accessed using https://graph.microsoft.com/v1.0/identity/conditionalaccess/policies/. You can configure a filter for devices when creating a new Conditional Access policy or you can update an existing policy to configure the filter for devices condition. To update an existing policy, you can do a patch call on the Microsoft Graph v1.0 endpoint mentioned above by appending the policy ID of an existing policy and executing the following request body. The example here shows configuring a filter for devices condition excluding devices that aren't marked as SAW devices. The rule syntax can consist of more than one single expression. To learn more about the syntax, see [dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
+The filter for devices API is available in Microsoft Graph v1.0 and can be accessed at the endpoint `https://graph.microsoft.com/v1.0/identity/conditionalaccess/policies/`. You can configure a filter for devices when creating a new Conditional Access policy, or you can update an existing policy to configure the filter for devices condition. To update an existing policy, you can do a PATCH call on the Microsoft Graph v1.0 endpoint by appending the policy ID of an existing policy and executing the following request body. The example here shows configuring a filter for devices condition that excludes devices that aren't marked as SAW devices. The rule syntax can consist of more than a single expression. To learn more about the syntax, see [dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
```json {
The following device attributes can be used with the filter for devices conditio
## Policy behavior with filter for devices
-The filter for devices condition in Conditional Access evaluates policy based on device attributes of a registered device in Azure AD and hence it's important to understand under what circumstances the policy is applied or not applied. The table below illustrates the behavior when a filter for devices condition is configured.
+The filter for devices condition in Conditional Access evaluates policy based on device attributes of a registered device in Azure AD and hence it's important to understand under what circumstances the policy is applied or not applied. The following table illustrates the behavior when a filter for devices condition is configured.
| Filter for devices condition | Device registration state | Device filter Applied | | | |
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Most of the IPv6 traffic that gets proxied to Azure AD comes from Microsoft Exch
If you're using Azure VNets, you'll have traffic coming from an IPv6 address. If you have VNet traffic blocked by a Conditional Access policy, check your Azure AD sign-in log. Once you've identified the traffic, you can get the IPv6 address being used and exclude it from your policy. > [!NOTE]
-> If you want to specify an IP CIDR range for a single address, apply the /128 bit mask. If you see the IPv6 address 2607:fb90:b27a:6f69:f8d5:dea0:fb39:74a and wanted to exclude that single address as a range, you would use 2607:fb90:b27a:6f69:f8d5:dea0:fb39:74a/128.
+> If you want to specify an IP CIDR range for a single address, apply the /128 bit mask. If you see the IPv6 address 2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b and wanted to exclude that single address as a range, you would use 2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b/128.
## What you should know
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
Get answers to your identity app development questions directly from Microsoft e
If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Use one of the following tags when you ask your [high-quality question](/answers/articles/24951/how-to-write-a-quality-question.html):
-| Component/area | Tags |
-| | -- |
-| Microsoft Authentication Library (MSAL) | [[msal]](/answers/topics/azure-ad-msal.html) |
-| Open Web Interface for .NET (OWIN) middleware | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
-| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](/answers/topics/azure-ad-b2b.html) |
-| [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](/answers/topics/azure-ad-b2c.html) |
-| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](/answers/topics/azure-ad-graph.html) |
-| All other authentication and authorization areas | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+| Component/area | Tags |
+| - | - |
+| Azure AD B2B / External Identities | [Azure Active Directory External Identities](/answers/tags/231/azure-active-directory-b2c) |
+| Azure AD B2C | [Azure Active Directory External Identities](/answers/tags/231/azure-active-directory-b2c) |
+| All other Azure Active Directory areas | [Azure Active Directory](/answers/tags/49/azure-active-directory) |
+| Azure RBAC | [Azure Role-Based Access Control](/answers/tags/189/azure-rbac) |
+| Azure Key Vault | [Azure Key Vault](/answers/tags/5/azure-key-vault) |
+| Microsoft Security | [Microsoft Defender for Cloud](/answers/tags/392/defender-for-cloud) |
+| Microsoft Sentinel | [Microsoft Sentinel](/answers/tags/423/microsoft-sentinel) |
+| Azure AD Domain Services | [Azure Active Directory Domain Services](/answers/tags/222/azure-active-directory-domain) |
+| Azure Windows and Linux Virtual Machines | [Azure Virtual Machines](/answers/tags/94/azure-virtual-machines) |
## Create a GitHub issue
active-directory Msal Error Handling Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md
Previously updated : 11/26/2020 Last updated : 01/25/2023
### Exception types [MsalClientException](/dotnet/api/microsoft.identity.client.msalexception) is thrown when the library itself detects an error state, such as a bad configuration.
-[MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) is thrown when the Identity Provider (AAD) returns an error. It is a translation of the server error.
+[MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) is thrown when the Identity Provider (Azure AD) returns an error. It's a translation of the server error.
-[MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) is type of [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) and indicates that user interaction is required, for example because MFA is required or because the user has changed their password and a token cannot be acquired silently.
+[MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) is a type of [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) and indicates that user interaction is required, for example because MFA is required or because the user has changed their password and a token can't be acquired silently.
### Processing exceptions
You can also have a look at the fields of [MsalClientException](/dotnet/api/micr
If [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) is thrown, try [Authentication and authorization error codes](reference-aadsts-error-codes.md) to see if the code is listed there.
-If [MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) is thrown, it is an indication that an interactive flow needs to happen for the user to resolve the issue. In public client apps such as desktop and mobile app, this is resolved by calling `AcquireTokenInteractive` which displays a browser. In confidential client apps, web apps should redirect the user to the authorization page, and web APIs should return an HTTP status code and header indicative of the authentication failure (401 Unauthorized and a WWW-Authenticate header).
+If [MsalUIRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) is thrown, it's an indication that an interactive flow needs to happen for the user to resolve the issue. In public client apps such as desktop and mobile app, this is resolved by calling `AcquireTokenInteractive`, which displays a browser. In confidential client apps, web apps should redirect the user to the authorization page, and web APIs should return an HTTP status code and header indicative of the authentication failure (401 Unauthorized and a WWW-Authenticate header).
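For a public client app, a minimal sketch of this fallback pattern might look like the following (the `app`, `scopes`, and `account` variables are assumed to have been created earlier, for example with `PublicClientApplicationBuilder`):

```csharp
using Microsoft.Identity.Client;

// Sketch: try the token cache first, then fall back to interactive sign-in
// only when MSAL signals that user interaction is required.
AuthenticationResult result;
try
{
    result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // MFA, an expired password, or missing consent must be resolved by the user.
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```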
### Common .NET exceptions
Here are the common exceptions that might be thrown and some possible mitigation
| Exception | Error code | Mitigation| | | | |
-| [MsalUiRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) | AADSTS65001: The user or administrator has not consented to use the application with ID '{appId}' named '{appName}'. Send an interactive authorization request for this user and resource.| Get user consent first. If you aren't using .NET Core (which doesn't have any Web UI), call (once only) `AcquireTokeninteractive`. If you are using .NET core or don't want to do an `AcquireTokenInteractive`, the user can navigate to a URL to give consent: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={clientId}&response_type=code&scope=user.read`. to call `AcquireTokenInteractive`: `app.AcquireTokenInteractive(scopes).WithAccount(account).WithClaims(ex.Claims).ExecuteAsync();`|
-| [MsalUiRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) | AADSTS50079: The user is required to use [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md).| There is no mitigation. If MFA is configured for your tenant and Azure Active Directory (AAD) decides to enforce it, fall back to an interactive flow such as `AcquireTokenInteractive`.|
+| [MsalUiRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) | AADSTS65001: The user or administrator hasn't consented to use the application with ID '{appId}' named '{appName}'. Send an interactive authorization request for this user and resource.| Get user consent first. If you aren't using .NET Core (which doesn't have any Web UI), call (once only) `AcquireTokenInteractive`. If you're using .NET Core or don't want to do an `AcquireTokenInteractive`, the user can navigate to a URL to give consent: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={clientId}&response_type=code&scope=user.read`. To call `AcquireTokenInteractive`: `app.AcquireTokenInteractive(scopes).WithAccount(account).WithClaims(ex.Claims).ExecuteAsync();`|
+| [MsalUiRequiredException](/dotnet/api/microsoft.identity.client.msaluirequiredexception) | AADSTS50079: The user is required to use [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md).| There's no mitigation. If MFA is configured for your tenant and Azure Active Directory (Azure AD) decides to enforce it, fall back to an interactive flow such as `AcquireTokenInteractive`.|
| [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) |AADSTS90010: The grant type isn't supported over the */common* or */consumers* endpoints. Use the */organizations* or tenant-specific endpoint. You used */common*.| As explained in the message from Azure AD, the authority needs to have a tenant or otherwise */organizations*.|
-| [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) | AADSTS70002: The request body must contain the following parameter: `client_secret or client_assertion`.| This exception can be thrown if your application was not registered as a public client application in Azure AD. In the Azure portal, edit the manifest for your application and set `allowPublicClient` to `true`. |
-| [MsalClientException](/dotnet/api/microsoft.identity.client.msalclientexception)| `unknown_user Message`: Could not identify logged in user| The library was unable to query the current Windows logged-in user or this user isn't AD or Azure AD joined (work-place joined users aren't supported). Mitigation 1: on UWP, check that the application has the following capabilities: Enterprise Authentication, Private Networks (Client and Server), User Account Information. Mitigation 2: Implement your own logic to fetch the username (for example, john@contoso.com) and use the `AcquireTokenByIntegratedWindowsAuth` form that takes in the username.|
+| [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) | AADSTS70002: The request body must contain the following parameter: `client_secret or client_assertion`.| This exception can be thrown if your application wasn't registered as a public client application in Azure AD. In the Azure portal, edit the manifest for your application and set `allowPublicClient` to `true`. |
+| [MsalClientException](/dotnet/api/microsoft.identity.client.msalclientexception)| `unknown_user Message`: Couldn't identify logged in user| The library was unable to query the current Windows logged-in user or this user isn't AD or Azure AD joined (work-place joined users aren't supported). Mitigation 1: on UWP, check that the application has the following capabilities: Enterprise Authentication, Private Networks (Client and Server), User Account Information. Mitigation 2: Implement your own logic to fetch the username (for example, john@contoso.com) and use the `AcquireTokenByIntegratedWindowsAuth` form that takes in the username.|
| [MsalClientException](/dotnet/api/microsoft.identity.client.msalclientexception)|integrated_windows_auth_not_supported_managed_user| This method relies on a protocol exposed by Active Directory (AD). If a user was created in Azure AD without AD backing ("managed" user), this method will fail. Users created in AD and backed by Azure AD ("federated" users) can benefit from this non-interactive method of authentication. Mitigation: Use interactive authentication.| ### `MsalUiRequiredException`
The interaction aims at having the user do an action. Some of those conditions a
### `MsalUiRequiredException` classification enumeration
-MSAL exposes a `Classification` field, which you can read to provide a better user experience. For example to tell the user that their password expired or that they'll need to provide consent to use some resources. The supported values are part of the `UiRequiredExceptionClassification` enum:
+MSAL exposes a `Classification` field, which you can read to provide a better user experience. For example, to tell the user that their password expired or that they'll need to provide consent to use some resources. The supported values are part of the [`UiRequiredExceptionClassification`](/dotnet/api/microsoft.identity.client.uirequiredexceptionclassification) enum:
| Classification | Meaning | Recommended handling | |-|-|-|
catch (MsalUiRequiredException ex) when (ex.ErrorCode == MsalError.InvalidGrantE
When calling an API requiring Conditional Access from MSAL.NET, your application will need to handle claim challenge exceptions. This will appear as an [MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) where the [Claims](/dotnet/api/microsoft.identity.client.msalserviceexception.claims) property won't be empty.
-To handle the claim challenge, you'll need to use the `.WithClaim()` method of the `PublicClientApplicationBuilder` class.
+To handle the claim challenge, you'll need to use the `.WithClaim()` method of the [`PublicClientApplicationBuilder`](/dotnet/api/microsoft.identity.client.publicclientapplicationbuilder) class.
[!INCLUDE [Active directory error handling retries](../../../includes/active-directory-develop-error-handling-retries.md)]
MSAL.NET implements a simple retry-once mechanism for errors with HTTP error cod
[MsalServiceException](/dotnet/api/microsoft.identity.client.msalserviceexception) surfaces `System.Net.Http.Headers.HttpResponseHeaders` as a property named `Headers`. You can use additional information from the error code to improve the reliability of your applications. In the case described, you can use the `RetryAfter` property (of type `RetryConditionHeaderValue`) and compute when to retry.
-Here is an example for a daemon application using the client credentials flow. You can adapt this to any of the methods for acquiring a token.
+Here's an example for a daemon application using the client credentials flow. You can adapt this to any of the methods for acquiring a token.
```csharp
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should attempt to fetch it from the cache first.
-You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property
+You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property.
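For example, a short sketch (again assuming `app`, `scopes`, and `account` already exist) that checks whether a token was served from the cache:

```csharp
using System;
using Microsoft.Identity.Client;

// Sketch: TokenSource reports whether the token came from the cache,
// the identity provider, or a broker.
AuthenticationResult result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
if (result.AuthenticationResultMetadata.TokenSource == TokenSource.Cache)
{
    Console.WriteLine("Token was served from the MSAL cache.");
}
```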
## Websites and web APIs
active-directory Tutorial V2 React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-react.md
Title: "Tutorial: Create a React single-page app that uses auth code flow" description: In this tutorial, you create a React SPA that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API. -+ Previously updated : 05/05/2022- Last updated : 01/24/2023++
Once you have [Node.js](https://nodejs.org/en/download/) installed, open up a te
```console
npx create-react-app msal-react-tutorial # Create a new React app
cd msal-react-tutorial # Change to the app directory
-npm install @azure/msal-browser @azure/msal-react # Install the MSAL packages
+npm install @azure/msal-browser @azure/msal-react @azure/msal-common # Install the MSAL packages
npm install react-bootstrap bootstrap # Install Bootstrap for styling
```
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
You can usually leave an organization on your own without having to contact an a
For example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or
- https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
+ https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789. You might need to open this URL in a private browser session.
1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
This happens when another object in the directory has the same invited email add
## The guest user object doesn't have a proxyAddress
-Sometimes, the external guest user you're inviting conflicts with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user won't be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users).
+Sometimes, the external guest user you're inviting conflicts with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user won't be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users). Also, if the contact object you're synchronizing from on-premises AD conflicts with an existing guest user, the conflicting proxyAddress is removed from the existing guest user.
## How does '\#', which isn't normally a valid character, sync with Azure AD?
Let's say you inadvertently invite a guest user with an email address that match
## Next steps - [Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)-- [Use audit logs and access reviews](auditing-and-reporting.md)
+- [Use audit logs and access reviews](auditing-and-reporting.md)
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
The location selected during tenant creation will map to one of the following ge
* North America * Worldwide
-Azure AD handles Core Store data based on usability, performance, residency and/or other requirements based on geo-location. The term residency indicates Microsoft provides assurance the data isn't persisted outside the geo-location.
-
-Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
+Azure AD handles Core Store data based on usability, performance, residency and/or other requirements based on geo-location. Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
* Azure AD Core Store data, stored in data centers closest to the tenant-residency location, to reduce latency and provide fast user sign-in times * Azure AD Core Store data stored in geographically isolated data centers to assure availability during unforeseen single-datacenter, catastrophic events
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
na Previously updated : 05/16/2021 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
na Previously updated : 01/05/2022 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
na Previously updated : 08/01/2022 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
na Previously updated : 12/15/2021 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
na Previously updated : 03/24/2022 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
na Previously updated : 07/01/2021 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
na Previously updated : 9/20/2021 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
na Previously updated : 12/14/2020 Last updated : 01/25/2023
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
na Previously updated : 12/27/2022 Last updated : 01/25/2023
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
na Previously updated : 8/31/2021 Last updated : 01/25/2023
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
na Previously updated : 07/6/2021 Last updated : 01/25/2023
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
na Previously updated : 7/6/2021 Last updated : 01/25/2023
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
na Previously updated : 12/27/2020 Last updated : 01/25/2023
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
na Previously updated : 11/02/2020 Last updated : 01/25/2023
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
na Previously updated : 5/19/2021 Last updated : 01/25/2023
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
na Previously updated : 12/11/2020 Last updated : 01/25/2023
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
na Previously updated : 08/01/2022 Last updated : 01/25/2023
active-directory Entitlement Management Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-process.md
na Previously updated : 08/01/2022 Last updated : 01/25/2023
active-directory Entitlement Management Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md
na Previously updated : 12/23/2020 Last updated : 01/25/2023
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
na Previously updated : 06/18/2020 Last updated : 01/25/2023
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
Previously updated : 01/20/2022 Last updated : 01/25/2023
This document explains how to set up synchronization from on-premises Azure AD C
## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting+ The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must be formatted in a specific way. This means that you may need to use an expression to convert the value of your source attribute to a format that will be accepted by the EmployeeHireDate or EmployeeLeaveDateTime. The table below outlines the format that is expected and provides an example expression on how to convert the values. |Scenario|Expression/Format|Target|More Information|
For more attributes, see the [Workday attribute reference](../app-provisioning/w
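As an illustration, such a conversion expression can take the following shape. This is a sketch only: the source attribute (`msDS-cloudExtensionAttribute1`, the example source noted later in this article) and the format strings assume a value stored in Generalized-Time form, so adjust both to match your source data:

```
FormatDateTime([msDS-cloudExtensionAttribute1], , "yyyyMMddHHmmss.fZ", "yyyy-MM-ddTHH:mm:ssZ")
```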
To ensure timing accuracy of scheduled workflows, it's crucial to consider: - The time portion of the attribute must be set accordingly, for example the `employeeHireDate` should have a time at the beginning of the day like 1AM or 5AM and the `employeeLeaveDateTime` should have a time at the end of the day like 9PM or 11PM
- - Workflow won't run earlier than the time specified in the attribute, however the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8AM but the tenant schedule doesn't run until 9AM, the workflow won't be processed until then. If a new hire is starting at 8AM, you would want to set the time to something like (start time - tenant schedule) to ensure it had run before the employee arrives.
+- Workflows won't run earlier than the time specified in the attribute, however the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8AM but the tenant schedule doesn't run until 9AM, the workflow won't be processed until then. If a new hire is starting at 8AM, you would want to set the time to something like (start time - tenant schedule) to ensure it had run before the employee arrives.
- It's recommended that, if you're using a temporary access pass (TAP), you set the maximum lifetime to 24 hours. Doing this will help ensure that the TAP hasn't expired after being sent to an employee who may be in a different timezone. For more information, see [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods.](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) - When importing the data, you should understand if and how the source provides time zone information for your users to potentially make adjustments to ensure timing accuracy.
-## Create a custom synch rule in Azure AD Connect cloud sync for EmployeeHireDate
+## Create a custom sync rule in Azure AD Connect cloud sync for EmployeeHireDate
The following steps will guide you through creating a synchronization rule using cloud sync. 1. In the Azure portal, select **Azure Active Directory**. 2. Select **Azure AD Connect**.
To ensure timing accuracy of scheduled workflows, it's crucial to consider:
For more information on attributes, see [Attribute mapping in Azure AD Connect cloud sync.](../cloud-sync/how-to-attribute-mapping.md)
-## How to create a custom synch rule in Azure AD Connect for EmployeeHireDate
+## How to create a custom sync rule in Azure AD Connect for EmployeeHireDate
The following example will walk you through setting up a custom synchronization rule that synchronizes the Active Directory attribute to the employeeHireDate attribute in Azure AD. 1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false` to disable the scheduler.
The following example will walk you through setting up a custom synchronization
17. Close the Synchronization Rules Editor 18. Enable the scheduler again by running `Set-ADSyncScheduler -SyncCycleEnabled $true`. ----
+> [!NOTE]
+> **msDS-cloudExtensionAttribute1** is an example source.
For more information, see [How to customize a synchronization rule](../hybrid/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration.](../hybrid/how-to-connect-sync-change-the-configuration.md)
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
Previously updated : 01/20/2022 Last updated : 01/25/2023
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
+ # Securing workload identities with Identity Protection Azure AD Identity Protection has historically protected users by detecting, investigating, and remediating identity-based risks. We're now extending these capabilities to workload identities to protect applications and service principals.
A [workload identity](../develop/workload-identities-overview.md) is an identity
These differences make workload identities harder to manage and put them at higher risk for compromise. > [!IMPORTANT]
-> Detections are visible only to Workload Identities Premium customers. Customers without Workload Identities Premium licenses still receive all detections but the reporting of details is limited.
+> Detections are visible only to [Workload Identities Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz) customers. Customers without Workload Identities Premium licenses still receive all detections but the reporting of details is limited.
## Prerequisites
The [Azure AD Toolkit](https://github.com/microsoft/AzureADToolkit) is a PowerSh
- [Azure AD audit logs](../reports-monitoring/concept-audit-logs.md) - [Azure AD sign-in logs](../reports-monitoring/concept-sign-ins.md) - [Simulate risk detections](howto-identity-protection-simulate-risk.md)+
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
Title: Configure Azure AD Multi-Factor Authentication and SSO for Oracle JD Edwards applications using Datawiza Access Broker
-description: Enable Azure Active Directory Multi-Factor Authentication and SSO for Oracle JD Edwards application using Datawiza Access Broker
+ Title: Configure Azure AD Multi-Factor Authentication and SSO for Oracle JD Edwards applications using Datawiza Access Proxy
+description: Enable Azure AD MFA and SSO for Oracle JD Edwards application using Datawiza Access Proxy
Previously updated : 7/20/2022 Last updated : 01/24/2023 # Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards
-This tutorial shows how to enable Azure Active Directory (Azure AD) single sign-on (SSO) and Azure AD Multi-Factor Authentication for an Oracle JD Edwards (JDE) application using Datawiza Access Broker (DAB).
+In this tutorial, learn how to enable Azure Active Directory (Azure AD) single sign-on (SSO) and Azure AD Multi-Factor Authentication (MFA) for an Oracle JD Edwards (JDE) application using Datawiza Access Proxy (DAP).
-Benefits of integrating applications with Azure AD using DAB include:
+Learn more: [Datawiza Access Proxy](https://www.datawiza.com/)
-- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) and
- [Conditional Access](../conditional-access/overview.md).
+Benefits of integrating applications with Azure AD using DAP:
-- [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/). Use of web applications such as: Oracle JDE, Oracle E-Business Suite, Oracle Sibel, Oracle Peoplesoft, and home-grown apps.--- Use the [Datawiza Cloud Management Console](https://console.datawiza.com), to manage access to applications in public clouds and on-premises.
+* [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) - a security model that adapts to modern environments and embraces the hybrid workplace, while it protects people, devices, apps, and data
+* [Azure Active Directory single sign-on](https://azure.microsoft.com/solutions/active-directory-sso/#overview) - secure and seamless access for users and apps, from any location, using a device
+* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) - users are prompted during sign-in for forms of identification, such as a code on their cellphone or a fingerprint scan
+* [What is Conditional Access?](../conditional-access/overview.md) - policies are if-then statements: if a user wants to access a resource, then they must complete an action
+* [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/) - use web applications such as: Oracle JDE, Oracle E-Business Suite, Oracle Siebel, and home-grown apps
+* Use the [Datawiza Cloud Management Console](https://console.datawiza.com) (DCMC) - manage access to applications in public clouds and on-premises
## Scenario description This scenario focuses on Oracle JDE application integration using HTTP authorization headers to manage access to protected content.
-In legacy applications, due to the absence of modern protocol support, a direct integration with Azure AD SSO is difficult. Datawiza Access Broker (DAB) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning. DAB lowers integration overhead, saves engineering time, and improves application security.
+In legacy applications, due to the absence of modern protocol support, a direct integration with Azure AD SSO is difficult. DAP can bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning. DAP lowers integration overhead, saves engineering time, and improves application security.
## Scenario architecture The scenario solution has the following components: -- **Azure AD**: The Microsoft cloud-based identity and access management service, which helps users sign in and access external and internal resources.--- **Oracle JDE application**: Legacy application protected by Azure AD.--- **Datawiza Access Broker (DAB)**: A lightweight container-based reverse-proxy that implements OpenID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign-in flow. It transparently passes identity to applications through HTTP headers.
+* **Azure AD** - identity and access management service that helps users sign in and access external and internal resources
+* **Oracle JDE application** - legacy application protected by Azure AD
+* **Datawiza Access Proxy (DAP)** - container-based reverse-proxy that implements OpenID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign-in flow. It passes identity transparently to applications through HTTP headers.
+* **Datawiza Cloud Management Console (DCMC)** - a console to manage DAP. Administrators use the UI and RESTful APIs to configure DAP and access control policies.
-- **Datawiza Cloud Management Console (DCMC)**: A centralized console to manage DAB. DCMC has UI and RESTful APIs for administrators to configure Datawiza Access Broker and access control policies.-
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication
-architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture).
+Learn more: [Datawiza and Azure AD Authentication Architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture)
## Prerequisites Ensure the following prerequisites are met. -- An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)--- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](../fundamentals/active-directory-access-create-new-tenant.md)
--- Docker and Docker Compose-
- - Go to docs.docker.com to [Get Docker](https://docs.docker.com/get-docker) and [Install Docker Compose](https://docs.docker.com/compose/install).
--- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.-
- - See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md).
--- An account with Azure AD and the Application administrator role-
- - See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles).
--- An Oracle JDE environment--- (Optional) An SSL web certificate to publish services over HTTPS. You can also use default Datawiza self-signed certs for testing.
+* An Azure subscription.
+ * If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)
+* An Azure AD tenant linked to the Azure subscription
+ * See, [Quickstart: Create a new tenant in Azure Active Directory.](../fundamentals/active-directory-access-create-new-tenant.md)
+* Docker and Docker Compose
+ * Go to docs.docker.com to [Get Docker](https://docs.docker.com/get-docker) and [Install Docker Compose](https://docs.docker.com/compose/install)
+* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory
+ * See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)
+* An account with Azure AD and the Application administrator role
+ * See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles)
+* An Oracle JDE environment
+* (Optional) An SSL web certificate to publish services over HTTPS. You can also use default Datawiza self-signed certs for testing.
## Getting started with DAB To integrate Oracle JDE with Azure AD: 1. Sign in to [Datawiza Cloud Management Console.](https://console.datawiza.com/)- 2. The Welcome page appears.- 3. Select the orange **Getting started** button.
- ![Screenshot that shows the getting started page.](media/datawiza-azure-ad-sso-oracle-jde/getting-started.png)
--
-4. In the **Name** and **Description** fields, enter the relevant information.
+ ![Screenshot of the Getting Started button.](media/datawiza-azure-ad-sso-oracle-jde/getting-started.png)
+4. In the **Name** and **Description** fields, enter information.
5. Select **Next**.
- ![Screenshot that shows the name and description fields.](media/datawiza-azure-ad-sso-oracle-jde/name-description-field.png)
--
-6. On the **Add Application** dialog, use the following values:
-
- | Property| Value|
- |:--|:-|
- | Platform | Web |
- | App Name | Enter a unique application name.|
- | Public Domain | For example: `https://jde-external.example.com`. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
- | Listen Port | The port that DAB listens on.|
- | Upstream Servers | The Oracle JDE implementation URL and port to be protected.|
-
-7. Select **Next**.
+ ![Screenshot of the Name field and Next button under Deployment Name.](media/datawiza-azure-ad-sso-oracle-jde/name-description-field.png)
- ![Screenshot that shows how to add application.](media/datawiza-azure-ad-sso-oracle-jde/add-application.png)
+6. On the **Add Application** dialog, for **Platform**, select **Web**.
+7. For **App Name**, enter a unique application name.
+8. For **Public Domain**, enter a value, for example `https://jde-external.example.com`. For testing the configuration, you can use localhost DNS. If you aren't deploying DAP behind a load balancer, use the **Public Domain** port.
+9. For **Listen Port**, select the port that DAP listens on.
+10. For **Upstream Servers**, select the Oracle JDE implementation URL and port to be protected.
+11. Select **Next**.
+ ![Screenshot of Public Domain, Listen Port, and Upstream Server entries.](media/datawiza-azure-ad-sso-oracle-jde/add-application.png)
-8. On the **Configure IdP** dialog, enter the relevant information.
+12. On the **Configure IdP** dialog, enter information.
>[!Note]
- >DCMC has [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html) to help complete Azure AD configuration. DCMC calls the Graph API to create an application registration on your behalf in your Azure AD tenant.
+ >Use DCMC one-click integration to help complete Azure AD configuration. DCMC calls the Graph API to create an application registration on your behalf in your Azure AD tenant. Go to docs.datawiza.com for [One Click Integration With Azure AD](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html).
-9. Select **Create**.
+13. Select **Create**.
- ![Screenshot that shows how to create I d P.](media/datawiza-azure-ad-sso-oracle-jde/configure-idp.png)
+ ![Screenshot of Protocol, Identity Provider, and Supported account types entries, also the Create button.](media/datawiza-azure-ad-sso-oracle-jde/configure-idp.png)
+14. The DAP deployment page appears.
+15. Make a note of the deployment Docker Compose file. The file includes the DAP image, plus the Provisioning Key and Provision Secret, which DAP uses to pull the latest configuration and policies from DCMC.
-10. The DAB deployment page appears.
-
-11. Make a note of the deployment Docker Compose file. The file includes the DAB image, also the Provisioning Key and Provision Secret, which pulls the latest configuration and policies from DCMC.
-
- ![Screenshot that shows the docker compose file value.](media/datawiza-azure-ad-sso-oracle-jde/provision.png)
-
+ ![Screenshot of Docker entries.](media/datawiza-azure-ad-sso-oracle-jde/provision.png)
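For illustration, deployment typically amounts to saving that file and starting it with Docker Compose. The following is a minimal sketch, assuming Docker and Docker Compose are installed; the image name, port, and provisioning values are placeholders for the values your DCMC deployment page supplies.

```powershell
# Hypothetical sketch: the real docker-compose.yml is generated for you in DCMC.
# Every value below is a placeholder for the values on your DAP deployment page.
@"
services:
  dap:
    image: <DAP-image-from-DCMC>
    ports:
      - "<listen-port>:<listen-port>"   # the Listen Port you configured
    environment:
      PROVISIONING_KEY: <provisioning-key>
      PROVISIONING_SECRET: <provisioning-secret>
"@ | Set-Content -Path .\docker-compose.yml

# Start DAP; on startup it pulls the latest configuration and policies from DCMC.
docker compose up -d
```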
## SSO and HTTP headers
-DAB gets user attributes from IdP and passes them to the upstream application with a header or cookie.
+DAP gets user attributes from the IdP and passes them to the upstream application with a header or cookie.
-For the Oracle JDE application to recognize the user correctly, there's another configuration step. Using a certain name, it instructs DAB to pass the values from the IdP to the application through the HTTP header.
+The Oracle JDE application needs to recognize the user: using a header name, the application instructs DAP to pass the values from the IdP to the application through the HTTP header.
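Before wiring up the real application, you can confirm the header injection with a stand-in upstream. The following is a hypothetical debugging aid, not part of the Datawiza or Oracle JDE configuration; the port is a placeholder for your **Upstream Servers** value.

```powershell
# Hypothetical debugging aid: a one-request HTTP listener that stands in for the
# upstream application, so you can confirm DAP forwards the JDE_SSO_UID header.
$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add('http://localhost:8080/')   # point Upstream Servers here while testing
$listener.Start()
$context = $listener.GetContext()                  # blocks until DAP forwards a request
"JDE_SSO_UID = $($context.Request.Headers['JDE_SSO_UID'])"
$context.Response.Close()
$listener.Stop()
```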
1. In Oracle JDE, from the left navigation, select **Applications**.- 2. Select the **Attribute Pass** subtab.
+3. For **Field**, select **Email**.
+4. For **Expected**, select **JDE_SSO_UID**.
+5. For **Type**, select **Header**.
-3. Use the following values.
-
- | Property| Value |
- |:--|:-|
- | Field | Email |
- | Expected | JDE_SSO_UID |
- | Type | Header |
-
- ![Screenshot that shows the attributes that need to be passed for the Oracle JDE application.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
-
+ ![Screenshot of information on the Attribute Pass tab.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
>[!Note]
- >This configuration uses the Azure AD user principal name as the sign in username used by Oracle JDE. To use another user identity, go to the **Mappings** tab.
-
- ![Screenshot that shows the user principal name field as the username.](media/datawiza-azure-ad-sso-oracle-jde/user-principal-name-mapping.png)
+ >This configuration uses the Azure AD user principal name as the sign-in username for Oracle JDE. To use another user identity, go to the **Mappings** tab.
+ ![Screenshot of the userPrincipalName entry.](media/datawiza-azure-ad-sso-oracle-jde/user-principal-name-mapping.png)
-4. Select the **Advanced** tab.
- ![Screenshot that shows the advanced fields.](media/datawiza-azure-ad-sso-oracle-jde/advanced-attributes.png)
+6. Select the **Advanced** tab.
+ ![Screenshot of information on the Advanced tab.](media/datawiza-azure-ad-sso-oracle-jde/advanced-attributes.png)
- ![Screenshot that shows the new attribute.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
+ ![Screenshot of information on the Attribute Pass tab.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
-5. Select **Enable SSL**.
-6. From the **Cert Type** dropdown, select a type.
+7. Select **Enable SSL**.
- ![Screenshot that shows the cert type dropdown.](media/datawiza-azure-ad-sso-oracle-jde/cert-type.png)
+8. From the **Cert Type** dropdown, select a type.
+ ![Screenshot that shows the cert type dropdown.](media/datawiza-azure-ad-sso-oracle-jde/cert-type-new.png)
-7. For testing purposes, we'll be providing a self-signed certificate.
- ![Screenshot that shows the enable SSL menu.](media/datawiza-azure-ad-sso-oracle-jde/enable-ssl.png)
+9. For testing purposes, you can use a self-signed certificate.
+ ![Screenshot that shows the enable SSL menu.](media/datawiza-azure-ad-sso-oracle-jde/enable-ssl-new.png)
>[!NOTE] >You have the option to upload a certificate from a file.
- ![Screenshot that shows uploading cert from a file option.](media/datawiza-azure-ad-sso-oracle-jde/upload-cert.png)
+ ![Screenshot that shows uploading cert from a file option.](media/datawiza-azure-ad-sso-oracle-jde/cert-upload-new.png)
-
-8. Select **Save**.
+10. Select **Save**.
## Enable Azure AD Multi-Factor Authentication
-To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](../authentication/tutorial-enable-azure-mfa.md).
+To provide more security for sign-ins, you can enforce Azure AD Multi-Factor Authentication (MFA).
-1. Sign in to the Azure portal as a **Global Administrator**.
+See, [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md).
+1. Sign in to the Azure portal as a Global Administrator.
2. Select **Azure Active Directory** > **Manage** > **Properties**. - 3. Under **Properties**, select **Manage security defaults**. -
-4. Under **Enable Security defaults**, select **Yes** and then **Save**.
+4. Under **Enable Security defaults**, select **Yes**.
+5. Select **Save**.
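You can also enable security defaults programmatically. The following is a sketch that calls the Microsoft Graph security defaults policy endpoint; it assumes an existing Connect-MgGraph session with an account that can manage security defaults.

```powershell
# Enable security defaults for the tenant - equivalent to selecting Yes in the portal.
# Assumes an existing Connect-MgGraph session with sufficient privileges.
$body = @{ isEnabled = $true } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri 'https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy' `
    -Body $body -ContentType 'application/json'
```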
## Enable SSO in the Oracle JDE EnterpriseOne Console To enable SSO in the Oracle JDE environment:
-1. Sign in to the Oracle JDE EnterpriseOne Server Manager Management Console as an **Administrator**.
-
+1. Sign in to the Oracle JDE EnterpriseOne Server Manager Management Console as an Administrator.
2. In **Select Instance**, select the option above **EnterpriseOne HTML Server**.-
-3. In the **Configuration** tile, select **View as Advanced**, and then select **Security**.
-
-4. Select the **Enable Oracle Access Manager** checkbox.
-
-5. In the **Oracle Access Manager Sign-Off URL** field, enter **datawiza/ab-logout**.
-
-6. In the **Security Server Configuration** section, select **Apply**.
-
-7. Select **Stop** to confirm you want to stop the managed instance.
+3. In the **Configuration** tile, select **View as Advanced**.
+4. Select **Security**.
+5. Select the **Enable Oracle Access Manager** checkbox.
+6. In the **Oracle Access Manager Sign-Off URL** field, enter **datawiza/ab-logout**.
+7. In the **Security Server Configuration** section, select **Apply**.
+8. Select **Stop**.
>[!NOTE]
- >If a message shows the web server configuration (jas.ini) is out-of-date, select **Synchronize Configuration**.
+ >If a message states the web server configuration (jas.ini) is out-of-date, select **Synchronize Configuration**.
-8. Select **Start** to confirm you want to start the managed instance.
+9. Select **Start**.
## Test an Oracle JDE-based application
-Testing validates the application behaves as expected for URIs. To test an Oracle JDE application, you validate application headers, policy, and overall testing. If needed, use header and policy simulation to validate header fields and policy execution.
+To test an Oracle JDE application, validate application headers, policy, and overall testing. If needed, use header and policy simulation to validate header fields and policy execution.
-To confirm Oracle JDE application access occurs correctly, a prompt appears to use an Azure AD account for sign-in. Credentials are checked and the Oracle JDE appears.
+To confirm Oracle JDE application access, a prompt appears to use an Azure AD account for sign-in. Credentials are checked and the Oracle JDE application appears.
## Next steps -- [Watch the video - Enable SSO/MFA for Oracle JDE with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90).--- [Configure Datawiza and Azure AD for secure hybrid access](./datawiza-with-azure-ad.md)--- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)--- [Datawiza documentation](https://docs.datawiza.com/)
+* Video: [Enable SSO and MFA for Oracle JDE with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90)
+* [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](./datawiza-with-azure-ad.md)
+* [Tutorial: Configure Azure AD B2C with Datawiza to provide secure hybrid access](../../active-directory-b2c/partner-datawiza.md)
+* Go to docs.datawiza.com for Datawiza [User Guides](https://docs.datawiza.com/)
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
Title: Configure Azure AD Multi-Factor Authentication and SSO for Oracle PeopleSoft applications using Datawiza Access Broker
-description: Enable Azure Active Directory Multi-Factor Authentication and SSO for Oracle PeopleSoft application using Datawiza Access Broker
+ Title: Configure Azure AD Multi-Factor Authentication and SSO for Oracle PeopleSoft applications using Datawiza Access Proxy
+description: Enable Azure AD MFA and SSO for Oracle PeopleSoft application using Datawiza Access Proxy
Previously updated : 9/12/2022 Last updated : 01/25/2023 # Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft
-This tutorial shows how to enable Azure Active Directory (Azure AD) single sign-on (SSO) and Azure AD Multi-Factor Authentication for an
-Oracle PeopleSoft application using Datawiza Access Broker (DAB).
+In this tutorial, learn how to enable Azure Active Directory (Azure AD) single sign-on (SSO) and Azure AD Multi-Factor Authentication (MFA) for an
+Oracle PeopleSoft application using Datawiza Access Proxy (DAP).
-Benefits of integrating applications with Azure AD using DAB include:
+Learn more: [Datawiza Access Proxy](https://www.datawiza.com/)
-- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) and
- [Conditional Access](../conditional-access/overview.md).
+Benefits of integrating applications with Azure AD using DAP:
-- [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/). Use of web applications such as: Oracle JDE, Oracle E-Business Suite, Oracle Sibel, and home-grown apps.--- Use the [Datawiza Cloud Management Console](https://console.datawiza.com), to manage access to applications in public clouds and on-premises.
+* [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) - a security model that adapts to modern environments and embraces the hybrid workplace while protecting people, devices, apps, and data
+* [Azure Active Directory single sign-on](https://azure.microsoft.com/solutions/active-directory-sso/#overview) - secure and seamless access for users and apps, from any location, on any device
+* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) - users are prompted during sign-in for forms of identification, such as a code on their cellphone or a fingerprint scan
+* [What is Conditional Access?](../conditional-access/overview.md) - policies are if-then statements: if a user wants to access a resource, then they must complete an action
+* [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/) - use web applications such as Oracle JDE, Oracle E-Business Suite, Oracle Siebel, and home-grown apps
+* Use the [Datawiza Cloud Management Console](https://console.datawiza.com) (DCMC) - manage access to applications in public clouds and on-premises
## Scenario description
-This scenario focuses on Oracle PeopleSoft application integration using
-HTTP authorization headers to manage access to protected content.
+This scenario focuses on Oracle PeopleSoft application integration using HTTP authorization headers to manage access to protected content.
-In legacy applications, due to the absence of modern protocol support, a
-direct integration with Azure AD SSO is difficult. Datawiza Access
-Broker (DAB) bridges the gap between the legacy application and the
-modern ID control plane, through protocol transitioning. DAB lowers
-integration overhead, saves engineering time, and improves application
-security.
+In legacy applications, due to the absence of modern protocol support, a direct integration with Azure AD SSO is difficult. Datawiza Access Proxy (DAP) bridges the gap between the legacy application and the modern ID control plane, through protocol transitioning. DAP lowers integration overhead, saves engineering time, and improves application security.
## Scenario architecture The scenario solution has the following components: -- **Azure AD**: The Microsoft cloud-based identity and access management service, which helps users sign in and access external and internal resources.--- **Datawiza Access Broker (DAB)**: A lightweight container-based reverse-proxy that implements OpenID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign-in flow. It transparently passes identity to applications through HTTP headers.--- **Datawiza Cloud Management Console (DCMC)**: A centralized console to manage DAB. DCMC has UI and RESTful APIs for administrators to configure Datawiza Access Broker and access control policies.
+* **Azure AD** - identity and access management service that helps users sign in and access external and internal resources
+* **Datawiza Access Proxy (DAP)** - container-based reverse-proxy that implements OpenID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign-in flow. It passes identity transparently to applications through HTTP headers.
+* **Datawiza Cloud Management Console (DCMC)** - administrators use the UI and RESTful APIs to manage DAP and access control policies
+* **Oracle PeopleSoft application** - legacy application to be protected by Azure AD and DAP
-- **Oracle PeopleSoft application**: Legacy application going to be protected by Azure AD and DAB.-
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture).
+Learn more: [Datawiza and Azure AD authentication architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture)
## Prerequisites Ensure the following prerequisites are met. -- An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)--- An Azure AD tenant linked to the Azure subscription.-
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](../fundamentals/active-directory-access-create-new-tenant.md)
--- Docker and Docker Compose-
- - Go to docs.docker.com to [Get Docker](https://docs.docker.com/get-docker) and [Install Docker Compose](https://docs.docker.com/compose/install).
--- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.-
- - See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md).
--- An account with Azure AD and the Application administrator role-
- - See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles).
--- An Oracle PeopleSoft environment--- (Optional) An SSL web certificate to publish services over HTTPS. You can also use default Datawiza self-signed certs for testing.-
-## Getting started with DAB
+* An Azure subscription
+ * If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)
+* An Azure AD tenant linked to the Azure subscription
+ * See, [Quickstart: Create a new tenant in Azure Active Directory](../fundamentals/active-directory-access-create-new-tenant.md)
+* Docker and Docker Compose
+ * Go to docs.docker.com to [Get Docker](https://docs.docker.com/get-docker) and [Install Docker Compose](https://docs.docker.com/compose/install)
+* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory
+ * See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)
+* An account with Azure AD and the Application Administrator role
+ * See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles)
+* An Oracle PeopleSoft environment
+* (Optional) An SSL web certificate to publish services over HTTPS. You can use default Datawiza self-signed certs for testing.
+
+## Getting started with DAP
To integrate Oracle PeopleSoft with Azure AD:
-1. Sign in to [Datawiza Cloud Management Console.](https://console.datawiza.com/)
-
+1. Sign in to [Datawiza Cloud Management Console](https://console.datawiza.com/) (DCMC).
2. The Welcome page appears.- 3. Select the orange **Getting started** button.
- ![Screenshot that shows the getting started page.](./media/access-oracle-peoplesoft-using-datawiza/getting-started-button.png)
+ ![Screenshot of the Getting Started button.](./media/access-oracle-peoplesoft-using-datawiza/getting-started-button.png)
-4. In the Name and Description fields, enter the relevant information.
+4. In the **Name** and **Description** fields, enter information.
- >![Screenshot that shows the name and description fields.](./media/access-oracle-peoplesoft-using-datawiza/deployment-details.png)
+ ![Screenshot of the Name field under Deployment Name.](./media/access-oracle-peoplesoft-using-datawiza/deployment-details.png)
5. Select **Next**.-
-6. On the Add Application dialog, use the following values:
-
- | Property | Value |
- |:--|:-|
- | Platform | Web |
- | App Name | Enter a unique application name|
- | Public Domain | For example: `https://ps-external.example.com` <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the Public Domain port. |
- | Listen Port | The port that DAB listens on. |
- | Upstream Servers | The Oracle PeopleSoft implementation URL and port to be protected.|
-
- ![Screenshot that shows how to add application.](./media/access-oracle-peoplesoft-using-datawiza/add-application.png)
+6. The Add Application dialog appears.
+7. For **Platform**, select **Web**.
+8. For **App Name**, enter a unique application name.
+9. For **Public Domain**, enter a value, for example `https://ps-external.example.com`. For testing, you can use localhost DNS. If you aren't deploying DAP behind a load balancer, use the Public Domain port.
+10. For **Listen Port**, select the port that DAP listens on.
+11. For **Upstream Servers**, select the Oracle PeopleSoft implementation URL and port to be protected.
+
+ ![Screenshot of entries under Add Application.](./media/access-oracle-peoplesoft-using-datawiza/add-application.png)
7. Select **Next**.-
-8. On the Configure IdP dialog, enter the relevant information.
+8. On the **Configure IdP** dialog, enter information.
>[!Note]
- >DCMC has [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html) to help complete Azure AD configuration. DCMC calls the Microsoft Graph API to create an application registration on your behalf in your Azure AD tenant.
+ >DCMC has one-click integration to help complete Azure AD configuration. DCMC calls the Microsoft Graph API to create an application registration on your behalf in your Azure AD tenant. Learn more at docs.datawiza.com in [One Click Integration with Azure AD](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html#preview)
9. Select **Create**.
- ![Screenshot that shows how to configure idp.](./media/access-oracle-peoplesoft-using-datawiza/configure-idp.png)
+ ![Screenshot of entries under Configure IDP.](./media/access-oracle-peoplesoft-using-datawiza/configure-idp.png)
-10. The DAB deployment page appears.
+10. The DAP deployment page appears.
+11. Make a note of the deployment Docker Compose file. The file includes the DAP image, plus the Provisioning Key and Provision Secret, which DAP uses to pull the latest configuration and policies from DCMC.
-11. Make a note of the deployment Docker Compose file. The file includes the DAB image, also the Provisioning Key and Provision Secret, which pulls the latest configuration and policies from DCMC.
-
- ![Screenshot that shows the docker compose file value.](./media/access-oracle-peoplesoft-using-datawiza/docker-compose-file.png)
+ ![Screenshot of three sets of Docker information.](./media/access-oracle-peoplesoft-using-datawiza/docker-compose-file.png)
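As a rough sketch, assuming Docker and Docker Compose are installed and the DCMC-generated file is saved as docker-compose.yml in the current directory, you can start DAP and confirm it's running:

```powershell
# Run from the directory that contains the docker-compose.yml generated by DCMC.
docker compose up -d            # start DAP in the background
docker compose ps               # confirm the DAP container is running
docker compose logs --tail 20   # watch DAP pull configuration and policies from DCMC
```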
## SSO and HTTP headers
-DAB gets user attributes from the Identity provider (IdP) and passes them to the upstream application with a header or cookie.
+DAP gets user attributes from the identity provider (IdP) and passes them to the upstream application with a header or cookie.
-For the Oracle PeopleSoft application to recognize the user correctly, there's another configuration step. Using a certain name, it instructs DAB to pass the values from the IdP to the application through the HTTP header.
+The Oracle PeopleSoft application needs to recognize the user: using a header name, the application instructs DAP to pass the values from the IdP to the application through the HTTP header.
1. In Oracle PeopleSoft, from the left navigation, select **Applications**.- 2. Select the **Attribute Pass** subtab.
+3. For **Field**, select **email**.
+4. For **Expected**, select **PS_SSO_UID**.
+5. For **Type**, select **Header**.
-3. Use the following values.
-
- | Property | Value |
- |:--|:--|
- |Field | Email|
- |Expected | PS_SSO_UID |
- |Type | Header|
-
- [ ![Screenshot that shows the attribute pass value.](./media/access-oracle-peoplesoft-using-datawiza/attribute-pass.png)](./media/access-oracle-peoplesoft-using-datawiza/attribute-pass.png#lightbox)
+ ![Screenshot of the Attribute Pass feature with Field, Expected and Type entries.](./media/access-oracle-peoplesoft-using-datawiza/attribute-pass.png)
>[!Note]
- >This configuration uses the Azure AD user principal name as the sign in username used by Oracle PeopleSoft. To use another user identity, go to the Mappings tab.
+ >This configuration uses Azure AD user principal name as the sign-in username for Oracle PeopleSoft. To use another user identity, go to the **Mappings** tab.
- ![Screenshot that shows the user principal name field as the username.](./media/access-oracle-peoplesoft-using-datawiza/user-principal-name.png)
+ ![Screenshot of user principal name.](./media/access-oracle-peoplesoft-using-datawiza/user-principal-name.png)
## SSL Configuration 1. Select the **Advanced tab**.
- [ ![Screenshot that shows the advanced tab.](./media/access-oracle-peoplesoft-using-datawiza/advanced-configuration.png)](./media/access-oracle-peoplesoft-using-datawiza/advanced-configuration.png#lightbox)
+ ![Screenshot of the Advanced tab under Application Detail.](./media/access-oracle-peoplesoft-using-datawiza/advanced-configuration.png)
2. Select **Enable SSL**.
+3. From the **Cert Type** dropdown, select a type.
-3. From the Cert Type dropdown, select a type.
-
- ![Screenshot that shows the cert type dropdown.](./media/access-oracle-peoplesoft-using-datawiza/cert-type.png)
+ ![Screenshot of the Cert Type dropdown with available options, Self-signed and Upload.](./media/access-oracle-peoplesoft-using-datawiza/cert-type-new.png)
-4. For testing purposes, we'll be providing a self-signed certificate.
+4. For testing the configuration, you can use a self-signed certificate.
- ![Screenshot that shows the self-signed certificate.](./media/access-oracle-peoplesoft-using-datawiza/self-signed-cert.png)
+ ![Screenshot of the Cert Type option with Self Signed selected.](./media/access-oracle-peoplesoft-using-datawiza/self-signed-cert.png)
>[!Note]
- >You have the option to upload a certificate from a file.
+ >You can upload a certificate from a file.
- ![Screenshot that shows uploading cert from a file option.](./media/access-oracle-peoplesoft-using-datawiza/cert-upload.png)
+ ![Screenshot of the File Based entry for Select Option under Advanced Settings.](./media/access-oracle-peoplesoft-using-datawiza/cert-upload-new.png)
5. Select **Save**. ## Enable Azure AD Multi-Factor Authentication
-To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](../authentication/tutorial-enable-azure-mfa.md).
+To provide more security for sign-ins, you can enforce Azure AD Multi-Factor Authentication (MFA).
-1. Sign in to the Azure portal as a **Global Administrator**.
+Learn more: [Tutorial: Secure user sign-in events with Azure AD MFA](../authentication/tutorial-enable-azure-mfa.md)
+1. Sign in to the Azure portal as a Global Administrator.
2. Select **Azure Active Directory** > **Manage** > **Properties**.-
-3. Under Properties, select **Manage security defaults**.
-
-4. Under Enable Security defaults, select **Yes** and then **Save**.
+3. Under **Properties**, select **Manage security defaults**.
+4. Under **Enable Security defaults**, select **Yes**.
+5. Select **Save**.
## Enable SSO in the Oracle PeopleSoft console To enable SSO in the Oracle PeopleSoft environment:
-1. Sign in PeopleSoft Consol `http://{your-peoplesoft-fqdn}:8000/psp/ps/?cmd=start` using Admin credentials, for example, PS/PS.
-
- [ ![Screenshot that shows Oracle PeopleSoft console.](./media/access-oracle-peoplesoft-using-datawiza/peoplesoft-console.png)](./media/access-oracle-peoplesoft-using-datawiza/peoplesoft-console.png#lightbox)
-
-2. Add a default public access user to PeopleSoft
-
- a. From the main menu, navigate to **PeopleTools > Security > User Profiles > User Profiles > Add a New Value**.
-
- b. Select **Add a new value**.
-
- c. Create user **PSPUBUSER** and enter the password.
-
- ![Screenshot that shows creating a username/password in the console.](./media/access-oracle-peoplesoft-using-datawiza/create-user.png)
-
- d. Select the **ID** tab and choose the type as **none**.
+1. Sign in to the PeopleSoft Console `http://{your-peoplesoft-fqdn}:8000/psp/ps/?cmd=start` using Admin credentials, for example, PS/PS.
- ![Screenshot that shows the ID type.](./media/access-oracle-peoplesoft-using-datawiza/id-type.png)
+ ![Screenshot that shows Oracle PeopleSoft console.](./media/access-oracle-peoplesoft-using-datawiza/peoplesoft-console.png)
-3. Configure the web profile.
+2. Add a default public access user to PeopleSoft.
+3. From the main menu, navigate to **PeopleTools > Security > User Profiles > User Profiles > Add a New Value**.
+4. Select **Add a new value**.
+5. Create user **PSPUBUSER**.
+6. Enter the password.
- a. Navigate to **PeopleTools > Web Profile > Web Profile Configuration > Search > PROD > Security** to configure the user profile.
+ ![Screenshot of the PS PUBUSER User ID and change-password option.](./media/access-oracle-peoplesoft-using-datawiza/create-user.png)
- b. Select the **Allow Public Access** box and then enter the user ID **PSPUBUSER** and password.
+7. Select the **ID** tab.
+8. For **ID Type**, select **None**.
- ![Screenshot that shows the web profile configure.](./media/access-oracle-peoplesoft-using-datawiza/web-profile-config.png)
+ ![Screenshot of the None option for ID Type on the ID tab.](./media/access-oracle-peoplesoft-using-datawiza/id-type.png)
- c. Select **Save**.
+3. Navigate to **PeopleTools > Web Profile > Web Profile Configuration > Search > PROD > Security**.
+4. Under **Public Users**, select the **Allow Public Access** box.
+5. For **User ID**, enter **PSPUBUSER**.
+6. Enter the password.
-4. Enable SSO.
+ ![Screenshot of Allow Public Access, User ID, and Password options.](./media/access-oracle-peoplesoft-using-datawiza/web-profile-config.png)
- a. Navigate to **PeopleTools > Security > Security Objects > Signon PeopleCode**.
+7. Select **Save**.
+8. To enable SSO, navigate to **PeopleTools > Security > Security Objects > Signon PeopleCode**.
+9. Select the **Sign on PeopleCode** page.
+10. Enable **OAMSSO_AUTHENTICATION**.
+11. Select **Save**.
+12. To configure PeopleCode using the PeopleTools application designer, navigate to **File > Open > Definition: Record > Name: `FUNCLIB_LDAP`**.
+13. Open **FUNCLIB_LDAP**.
- b. Select the **Signon PeopleCode** page.
+ ![Screenshot of the Open Definition dialog.](./media/access-oracle-peoplesoft-using-datawiza/selection-criteria.png)
- c. Enable the `OAMSSO_AUTHENTICATION` and then select **Save**.
+14. Select the record.
+15. Select **LDAPAUTH > View PeopleCode**.
+16. Search for the `getWWWAuthConfig()` function. Change `&defaultUserId = "";` to `&defaultUserId = PSPUBUSER`.
+17. Confirm the user Header is `PS_SSO_UID` for the `OAMSSO_AUTHENTICATION` function.
+18. Save the record definition.
-5. Configure PeopleCode using the PeopleTools application designer.
-
- a. Navigate to **File > Open > Definition: Record > Name: `FUNCLIB_LDAP`**.
-
- b. Open **FUNCLIB_LDAP**.
-
- ![Screenshot that shows the selection criteria.](./media/access-oracle-peoplesoft-using-datawiza/selection-criteria.png)
-
- c. Select the record.
-
- d. Select **LDAPAUTH > View PeopleCode**
-
- e. Search for the `getWWWAuthConfig()` function `Change &defaultUserId = ""; to &defaultUserId = PSPUBUSER`
-
- f. Double check the user Header is `PS_SSO_UID` for `OAMSSO_AUTHENTICATION` function. Save the record definition.
-
- ![Screenshot that shows the record definition.](./media/access-oracle-peoplesoft-using-datawiza/record-definition.png)
+ ![Screenshot of the record definition.](./media/access-oracle-peoplesoft-using-datawiza/record-definition.png)
## Test an Oracle PeopleSoft application
-Testing validates the application behaves as expected for URIs. To test an Oracle PeopleSoft application, you validate application headers, policy, and overall testing. If needed, use header and policy simulation to validate header fields and policy execution.
+To test an Oracle PeopleSoft application, validate application headers, policy, and overall testing. If needed, use header and policy simulation to validate header fields and policy execution.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appears to use an Azure AD account for sign-in. Credentials are checked and the Oracle PeopleSoft appears. ## Next steps -- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90).--- [Configure Datawiza and Azure AD for secure hybrid access](./datawiza-with-azure-ad.md)--- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)--- [Datawiza documentation](https://docs.datawiza.com/)
+- Video: [Enable SSO and MFA for Oracle JD Edwards with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90)
+- [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](./datawiza-with-azure-ad.md)
+- [Tutorial: Configure Azure AD B2C with Datawiza to provide secure hybrid access](../../active-directory-b2c/partner-datawiza.md)
+- Go to docs.datawiza.com for Datawiza [User Guides](https://docs.datawiza.com/)
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
Using this feature requires Azure AD Premium P1 licenses. Each user who is synch
Which clouds can cross-tenant synchronization be used in? -- Cross-tenant synchronization is supported within the commercial and Azure Government clouds.
+- Cross-tenant synchronization is supported within the commercial cloud. It is not supported within Azure Government or Azure China.
- Synchronization is only supported between two tenants in the same cloud. - Cross-cloud (such as public cloud to Azure Government) isn't currently supported.
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Previously updated : 03/22/2022 Last updated : 01/25/2023
Use the [New-AzureADMSAdministrativeUnit](/powershell/module/azuread/new-azuread
New-AzureADMSAdministrativeUnit -Description "West Coast region" -DisplayName "West Coast" ```
+### Microsoft Graph PowerShell
+
+Use the [New-MgDirectoryAdministrativeUnit](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdirectoryadministrativeunit) command to create a new administrative unit.
+
+```powershell
+Import-Module Microsoft.Graph.Identity.DirectoryManagement
+# Requires an existing Microsoft Graph sign-in, for example:
+# Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"
+$params = @{
+ DisplayName = "Seattle District Technical Schools"
+ Description = "Seattle district technical schools administration"
+ Visibility = "HiddenMembership"
+}
+New-MgDirectoryAdministrativeUnit -BodyParameter $params
+```
+ ### Microsoft Graph API Use the [Create administrativeUnit](/graph/api/administrativeunit-post-administrativeunits) API to create a new administrative unit.
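As a reference sketch, you can call that endpoint from PowerShell with Invoke-MgGraphRequest; this assumes an existing Connect-MgGraph session with the AdministrativeUnit.ReadWrite.All scope.

```powershell
# Create the same administrative unit by calling the Graph REST endpoint directly.
$body = @{
    displayName = "Seattle District Technical Schools"
    description = "Seattle district technical schools administration"
    visibility  = "HiddenMembership"
} | ConvertTo-Json
Invoke-MgGraphRequest -Method POST `
    -Uri 'https://graph.microsoft.com/v1.0/directory/administrativeUnits' `
    -Body $body -ContentType 'application/json'
```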
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md
To delegate create, read, update, and delete (CRUD) permissions for updating the
> [!div class="mx-tableFixed"] > | Permission | Description | > | - | -- |
-> | microsoft.directory/applicationPolicies/allProperties/read | Read all properties on application policies |
-> | microsoft.directory/applicationPolicies/allProperties/update | Update all properties on application policies |
+> | microsoft.directory/applicationPolicies/allProperties/read | Read all properties (including privileged properties) on application policies |
+> | microsoft.directory/applicationPolicies/allProperties/update | Update all properties (including privileged properties) on application policies |
> | microsoft.directory/applicationPolicies/basic/update | Update standard properties of application policies | > | microsoft.directory/applicationPolicies/create | Create application policies |
-> | microsoft.directory/applicationPolicies/createAsOwner | Create application policies. Creator is added as the first owner |
+> | microsoft.directory/applicationPolicies/createAsOwner | Create application policies, and creator is added as the first owner |
> | microsoft.directory/applicationPolicies/delete | Delete application policies | > | microsoft.directory/applicationPolicies/owners/read | Read owners on application policies | > | microsoft.directory/applicationPolicies/owners/update | Update the owner property of application policies | > | microsoft.directory/applicationPolicies/policyAppliedTo/read | Read application policies applied to objects list | > | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies | > | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory |
-> | microsoft.directory/servicePrincipals/allProperties/read | Read all properties on servicePrincipals |
-> | microsoft.directory/servicePrincipals/allProperties/update | Update all properties on servicePrincipals |
+> | microsoft.directory/servicePrincipals/allProperties/read | Read all properties (including privileged properties) on servicePrincipals |
+> | microsoft.directory/servicePrincipals/allProperties/update | Update all properties (including privileged properties) on servicePrincipals |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/read | Read service principal role assignments | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/servicePrincipals/appRoleAssignments/read | Read role assignments assigned to service principals |
To delegate create, read, update, and delete (CRUD) permissions for updating the
> | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals | > | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals | > | microsoft.directory/servicePrincipals/create | Create service principals |
-> | microsoft.directory/servicePrincipals/createAsOwner | Create service principals. Creator is added as the first owner |
-> | microsoft.directory/servicePrincipals/credentials/update | Update credentials properties on service principals |
+> | microsoft.directory/servicePrincipals/createAsOwner | Create service principals, with creator as the first owner |
+> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals |
> | microsoft.directory/servicePrincipals/delete | Delete service principals | > | microsoft.directory/servicePrincipals/disable | Disable service principals | > | microsoft.directory/servicePrincipals/enable | Enable service principals | > | microsoft.directory/servicePrincipals/getPasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Manage password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/oAuth2PermissionGrants/read | Read delegated permission grants on service principals |
-> | microsoft.directory/servicePrincipals/owners/read | Read owners on service principals |
-> | microsoft.directory/servicePrincipals/owners/update | Update owners on service principals |
+> | microsoft.directory/servicePrincipals/owners/read | Read owners of service principals |
+> | microsoft.directory/servicePrincipals/owners/update | Update owners of service principals |
> | microsoft.directory/servicePrincipals/permissions/update | Update permissions of service principals |
-> | microsoft.directory/servicePrincipals/policies/read | Read policies on service principals |
-> | microsoft.directory/servicePrincipals/policies/update | Update policies on service principals |
-> | microsoft.directory/servicePrincipals/standard/read | Read standard properties of service principals |
+> | microsoft.directory/servicePrincipals/policies/read | Read policies of service principals |
+> | microsoft.directory/servicePrincipals/policies/update | Update policies of service principals |
+> | microsoft.directory/servicePrincipals/standard/read | Read basic properties of service principals |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
-> | microsoft.directory/servicePrincipals/tag/update | Update tags property on service principals |
+> | microsoft.directory/servicePrincipals/tag/update | Update the tag property for service principals |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/auditLogs/allProperties/read | Read audit logs |
-> | microsoft.directory/signInReports/allProperties/read | Read sign-in reports |
-> | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties of all types of applications |
-> | microsoft.directory/applications/applicationProxy/update | Update all application proxy properties of all types of applications |
-> | microsoft.directory/applications/applicationProxyAuthentication/update | Update application proxy authentication properties of all types of applications |
-> | microsoft.directory/applications/applicationProxyUrlSettings/update | Update application proxy internal and external URLs of all types of applications |
-> | microsoft.directory/applications/applicationProxySslCertificate/update | Update application proxy custom domains of all types of applications |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
+> | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties |
+> | microsoft.directory/applications/applicationProxy/update | Update all application proxy properties |
+> | microsoft.directory/applications/applicationProxyAuthentication/update | Update authentication on all types of applications |
+> | microsoft.directory/applications/applicationProxyUrlSettings/update | Update URL settings for application proxy |
+> | microsoft.directory/applications/applicationProxySslCertificate/update | Update SSL certificate settings for application proxy |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object | > | microsoft.directory/connectorGroups/create | Create application proxy connector groups | > | microsoft.directory/connectorGroups/delete | Delete application proxy connector groups |
To delegate create, read, update, and delete (CRUD) permissions for updating the
> | microsoft.directory/connectorGroups/allProperties/update | Update all properties of application proxy connector groups | > | microsoft.directory/connectors/create | Create application proxy connectors | > | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Manage all aspects of job synchronization for service principal resources |
-> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with service principals |
-> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Manage all aspects of schema synchronization for service principal resources |
+> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs |
+> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with the application object |
+> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | ## Next steps
active-directory Appdynamics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appdynamics-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/25/2023 # Tutorial: Azure Active Directory integration with AppDynamics
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companyname>.saas.appdynamics.com?accountName=<companyname>`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companyname>.saas.appdynamics.com/controller`
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, type a URL using the following pattern:
+ `https://<companyname>.saas.appdynamics.com/controller/saml-auth?accountName=<companyname>`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<companyname>.saas.appdynamics.com/?accountName=<companyname>`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [AppDynamics Client support team](https://www.appdynamics.com/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [AppDynamics Client support team](https://www.appdynamics.com/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure AppDynamics you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AppDynamics you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Previously updated : 01/06/2023 Last updated : 01/23/2023 # Tutorial: Azure Active Directory SSO integration with Atlassian Cloud
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box. 1. Select **Atlassian Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about Microsoft 365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
## Configure and test Azure AD SSO
Configure and test Azure AD SSO with Atlassian Cloud using a test user called **
To configure and test Azure AD SSO with Atlassian Cloud, perform the following steps: 1. **[Configure Azure AD with Atlassian Cloud SSO](#configure-azure-ad-with-atlassian-cloud-sso)** - to enable your users to use Azure AD based SAML SSO with Atlassian Cloud.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Create Atlassian Cloud test user](#create-atlassian-cloud-test-user)** - to have a counterpart of B.Simon in Atlassian Cloud that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Setup configuration](common/setup-sso.png)
-1. If you want to setup Atlassian Cloud manually, log in to your Atlassian Cloud company site as an administrator and perform the following steps.
+1. If you want to set up Atlassian Cloud manually, log in to your Atlassian Cloud company site as an administrator and perform the following steps.
-1. Before you start go to your Atlassian product instance and copy/save the Instance URL.
- > [!NOTE]
- > URL should fit `https://<INSTANCE>.atlassian.net` pattern.
+1. In the **ATLASSIAN Admin** portal, navigate to **Security** > **Identity providers** > **Microsoft Azure AD**.
- ![Instance Name](./media/atlassian-cloud-tutorial/instance.png)
+ ![Screenshot shows the Instance Profile Name.](./media/atlassian-cloud-tutorial/name.png "Profile")
-1. Open the [Atlassian Admin Portal](https://admin.atlassian.com/) and click on your organization name.
+1. Enter the **Directory name** and click **Add** button.
- ![Admin Portal](./media/atlassian-cloud-tutorial/organization.png)
+ ![Screenshot shows the Directory for Admin Portal.](./media/atlassian-cloud-tutorial/directory.png "Add Directory")
-1. You need to verify your domain before going to configure single sign-on. For more information, see [Atlassian domain verification](https://confluence.atlassian.com/cloud/domain-verification-873871234.html) document.
+1. Select **Set up SAML single sign-on** button to connect your identity provider to Atlassian organization.
-1. In the **ATLASSIAN Admin** portal, navigate to **Security** tab, select **SAML single sign-on** and click **Add SAML configuration**.
-
- ![Security](./media/atlassian-cloud-tutorial/admin.png)
+ ![Screenshot shows the Security of identity provider.](./media/atlassian-cloud-tutorial/provider.png "Security")
1. In the Azure portal, on the **Atlassian Cloud** application integration page, find the **Manage** section and select **Set up single sign-on**.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Single Sign-On](./media/atlassian-cloud-tutorial/configure.png)
- b. Copy **Login URL** value from Azure portal, paste it in the **Identity Provider SSO URL** textbox in Atlassian.
+ b. Copy **Login URL** value from Azure portal, paste it in the **Identity provider SSO URL** textbox in Atlassian.
- c. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity Provider Entity ID** textbox in Atlassian.
+ c. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity provider Entity ID** textbox in Atlassian.
![Identity Provider SSO URL](./media/atlassian-cloud-tutorial/configuration-azure.png)
- ![Entity id](./media/atlassian-cloud-tutorial/login.png)
+ ![Screenshot shows the Configuration values.](./media/atlassian-cloud-tutorial/metadata.png "Azure values")
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![signing Certificate](./media/atlassian-cloud-tutorial/certificate.png)
- ![Certificate 1](./media/atlassian-cloud-tutorial/certificate-download.png)
+ ![Screenshot shows the Certificate in Azure.](./media/atlassian-cloud-tutorial/entity.png "Add Details")
-1. **Add** and **Save** the SAML Configuration in Atlassian.
+1. Save the SAML Configuration and click **Next** in Atlassian.
1. On the **Basic SAML Configuration** section, perform the following steps.
- a. Copy **SP Entity ID** value from Atlassian, paste it in the **Identifier (Entity ID)** box in Azure and set it as default.
-
- b. Copy **SP Assertion Consumer Service URL** value from Atlassian, paste it in the **Reply URL (Assertion Consumer Service URL)** box in Azure and set it as default.
+ a. Copy **Service provider entity URL** value from Atlassian, paste it in the **Identifier (Entity ID)** box in Azure and set it as default.
- c. Copy your **Instance URL** value, which you copied at step 4 and paste it in the **Relay State** box in Azure.
-
- ![Copy URLs](./media/atlassian-cloud-tutorial/values.png)
+ b. Copy **Service provider assertion consumer service URL** value from Atlassian, paste it in the **Reply URL (Assertion Consumer Service URL)** box in Azure and set it as default.
- ![Button](./media/atlassian-cloud-tutorial/edit-button.png)
+ c. Click **Next**.
+
+ ![Screenshot shows the Service provider images.](./media/atlassian-cloud-tutorial/steps.png "Page")
- ![URLs image](./media/atlassian-cloud-tutorial/image.png)
+ ![Screenshot shows the Service provider Values.](./media/atlassian-cloud-tutorial/provide.png "Provider Values")
1. Your Atlassian Cloud application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. You can edit the attribute mapping by clicking the **Edit** icon.
Follow these steps to enable Azure AD SSO in the Azure portal.
![image 4](./media/atlassian-cloud-tutorial/final-attributes.png)
-1. To enforce SAML single sign-on in an authentication policy, perform the following steps.
-
- a. From the **Atlassian Admin** Portal, select **Security** tab and click **Authentication policies**.
-
- b. Select **Edit** for the policy you want to enforce.
-
- c. In **Settings**, enable the **Enforce single sign-on** to their managed users for the successful SAML redirection.
-
- d. Click **Update**.
-
- ![Authentication policies](./media/atlassian-cloud-tutorial/policy.png)
-
- > [!NOTE]
- > The admins can test the SAML configuration by only enabling enforced SSO for a subset of users first on a separate authentication policy, and then enabling the policy for all users if there are no issues.
+1. Click the **Stop and save SAML** button.
+
+ ![Screenshot shows the image of saving configuration.](./media/atlassian-cloud-tutorial/continue.png "Save configuration")
### Create an Azure AD test user
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Atlassian Cloud you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Atlassian Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The following image shows the configuration screen in both Jira and Confluence:
* **Enable Use of Application Proxy** checkbox, if you have configured your on-premises Atlassian application in an App Proxy setup. * For App Proxy setup, follow the steps in the [Azure AD App Proxy documentation](../app-proxy/what-is-application-proxy.md).-
+## Release Notes
+
+|Plugin Version | Release Notes | Supported JIRA versions |
+|--|-|-|
+| 1.0.20 | Bug Fixes: | Jira Core and Software: |
+| | JIRA SAML SSO add-on redirects to incorrect URL from mobile browser. | 7.0.0 to 9.5.0 |
+| | The mark log section after enabling the JIRA plugin. | |
+| | The last login date for a user doesn't update when the user signs in via SSO. | |
+| | | |
+| 1.0.19 | New Feature: | Jira Core and Software: |
+| | Application Proxy Support - Checkbox on the configure plugin screen to toggle the App Proxy mode, making the Reply URL editable as needed to point it to the proxy server URL | 6.0 to 9.3.1 |
+| | | Jira Service Desk: 3.0.0 to 4.22.1 |
+| | | |
+| 1.0.18 | Bug Fixes: | Jira Core and Software: |
+| | Bug fix for the 405 error upon clicking on the Configure button of the Jira Azure AD SSO Plugin.| 6.0 to 9.1.0. |
+| | JIRA server isn't rendering the "Project Setting Page" correctly. | Jira Service Desk: 3.0.0 to 4.22.1. |
+| | JIRA isn't forcing Azure AD Login. An extra button click was required. | |
+| | This version includes a security fix that protects against a user impersonation vulnerability.| |
+| | JIRA Service Desk logout issue is resolved. | |
+
+
+
## Troubleshooting * **You're getting multiple certificate errors**: Sign in to Azure AD and remove the multiple certificates that are available against the app. Ensure that only one certificate is present.
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/25/2023
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Slack supports **SP** initiated SSO.
+* Slack supports **SP (service provider)** initiated SSO.
* Slack supports **Just In Time** user provisioning. * Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md).
To configure the integration of Slack into Azure AD, you need to add Slack from
1. In the **Add from the gallery** section, type **Slack** in the search box. 1. Select **Slack** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true). ## Configure and test Azure AD SSO for Slack
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Identifier (Entity ID)** text box, type the URL: `https://slack.com`
- c. For **Reply URL**, enter one of the following URL pattern:
+ c. For **Reply URL**, enter one of the following URL patterns:
| Reply URL| |-|
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Setup configuration](common/setup-sso.png)
-3. If you want to setup Slack manually, in a different web browser window, sign in to your Slack company site as an administrator.
+3. If you want to set up Slack manually, in a different web browser window, sign in to your Slack company site as an administrator.
2. click on your workspace name in the top left, then go to **Settings & administration** -> **Workspace settings**.
active-directory Valid8me Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/valid8me-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/25/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
-1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
+1. If you wish to configure the application in **SP** initiated mode:
- In the **Sign on URL** textbox, type a URL using one of the following patterns:
-
- | **Sign on URL** |
- ||
- | `https://login.valid8me.com` |
- | `https://login.valid8me.com/?idp=https://sts.windows.net/${TenantID}/` |
- | `https://<<client_name>>.valid8me.com` |
+ In the **Sign on URL (Optional)** text box, type a URL using the following pattern:
+ `https://login.valid8me.com/?idp=https://sts.windows.net/${TenantID}/`
> [!Note] > This value is not real. Update this value with the actual Sign on URL. Contact [valid8Me support team](mailto:support@valid8me.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
An `aksManagedAutoUpgradeSchedule` has the following properties:
There are currently three available schedule types: `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`. These schedule types are only applicable to `aksManagedClusterAutoUpgrade` configurations.
+> [!NOTE]
+> All of the fields shown for each respective schedule type are required.
+ #### Weekly schedule A `Weekly` schedule may look like *"every two weeks on Friday"*:
A `RelativeMonthly` schedule may look like *"every two months, on the last Monda
} ```
+Valid values for `weekIndex` are `First`, `Second`, `Third`, `Fourth`, and `Last`.
+ ## Add a maintenance window configuration with Azure CLI The following example shows a command to add a new `default` configuration that schedules maintenance to run from 1:00am to 2:00am every Monday:
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 12/13/2022 Last updated : 01/25/2023
Private cluster is available in public regions, Azure Government, and Azure Chin
## Prerequisites
-* The Azure CLI version 2.28.0 and higher.
-* The aks-preview extension 0.5.29 or higher.
+* The Azure CLI version 2.28.0 and higher. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The `aks-preview` extension 0.5.29 or higher.
* If using Azure Resource Manager (ARM) or the Azure REST API, the AKS API version must be 2021-05-01 or higher. * Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported. * To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
az aks update -n <private-cluster-name> -g <private-cluster-resource-group> --disable-public-fqdn ```
-## Configure Private DNS Zone
+## Configure private DNS zone
-The following parameters can be used to configure Private DNS Zone.
+The following parameters can be used to configure private DNS zone.
-- **system**, which is also the default value. If the `--private-dns-zone` argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.-- **none**, defaults to public DNS which means AKS will not create a Private DNS Zone. -- **CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID**, which requires you to create a Private DNS Zone in this format for Azure global cloud: `privatelink.<region>.azmk8s.io` or `<subzone>.privatelink.<region>.azmk8s.io`. You'll need the Resource ID of that Private DNS Zone going forward. Additionally, you need a user assigned identity or service principal with at least the `private dns zone contributor` and `network contributor` roles.
- - If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register the Azure provider **Microsoft.ContainerServices** in both subscriptions.
+- **system** - This is the default value. If the `--private-dns-zone` argument is omitted, AKS creates a private DNS zone in the node resource group.
+- **none** - Defaults to public DNS. AKS won't create a private DNS zone.
+- **CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID**, requires you to create a private DNS zone only in the following format for Azure global cloud: `privatelink.<region>.azmk8s.io` or `<subzone>.privatelink.<region>.azmk8s.io`. You'll need the Resource ID of that private DNS zone going forward. Additionally, you need a user assigned identity or service principal with at least the [Private DNS Zone Contributor][private-dns-zone-contributor-role] and [Network Contributor][network-contributor-role] roles. When deploying using API server VNet integration, a private DNS zone additionally supports the naming format of `private.<region>.azmk8s.io` or `<subzone>.private.<region>.azmk8s.io`.
+ - If the private DNS zone is in a different subscription than the AKS cluster, you need to register the Azure provider **Microsoft.ContainerServices** in both subscriptions.
- "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
- > [!NOTE]
- > Deploying a private link-based AKS cluster only supports a Private DNS Zone using the following naming format `privatelink.<region>.azmk8s.io` or `<subzone>-privatelink.<region>.azmk8s.io`. When deploying using API server VNet integration, a Private DNS Zone additionally supports the naming format of `private.<region>.azmk8s.io` or `<subzone>-private.<region>.azmk8s.io`.
-
-### Create a private AKS cluster with Private DNS Zone
+### Create a private AKS cluster with private DNS zone
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [system|none] ```
-### Create a private AKS cluster with Custom Private DNS Zone or Private DNS SubZone
+### Create a private AKS cluster with custom private DNS zone or private DNS subzone
```azurecli-interactive # Custom Private DNS Zone name should be in format "<subzone>.privatelink.<region>.azmk8s.io" az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone or custom private dns subzone ResourceId> ```
-### Create a private AKS cluster with Custom Private DNS Zone and Custom Subdomain
+### Create a private AKS cluster with custom private DNS zone and custom subdomain
```azurecli-interactive # Custom Private DNS Zone name could be in formats "privatelink.<region>.azmk8s.io" or "<subzone>.privatelink.<region>.azmk8s.io"
For associated best practices, see [Best practices for network connectivity and
[create-aks-cluster-api-vnet-integration]: api-server-vnet-integration.md [azure-home]: ../azure-portal/azure-portal-overview.md#azure-home [operator-best-practices-network]: operator-best-practices-network.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[private-dns-zone-contributor-role]: ../role-based-access-control/built-in-roles.md#dns-zone-contributor
+[network-contributor-role]: ../role-based-access-control/built-in-roles.md#network-contributor
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
By default, API Management enables TLS 1.2 for client and backend connectivity a
> [!NOTE] > * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites.
+> * Currently, API Management doesn't support TLS 1.3.
> * The Consumption tier doesn't support changes to the default cipher configuration. ## Prerequisites
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md
Here's a simple sign-out link in a webpage:
<a href="/.auth/logout">Sign out</a> ```
-By default, a successful sign-out redirects the client to the URL `/.auth/logout/done`. You can change the post-sign-out redirect page by adding the `post_logout_redirect_uri` query parameter. For example:
+By default, a successful sign-out redirects the client to the URL `/.auth/logout/complete`. You can change the post-sign-out redirect page by adding the `post_logout_redirect_uri` query parameter. For example:
GET /.auth/logout?post_logout_redirect_uri=/index.html
app-service Configure Authentication Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-google.md
To complete the procedure in this topic, you must have a Google account that has
## <a name="register"> </a>Register your application with Google
-1. Follow the Google documentation at [Google Sign-In for server-side apps](https://developers.google.com/identity/sign-in/web/server-side-flow) to create a client ID and client secret. There's no need to make any code changes. Just use the following information:
+1. Follow the Google documentation at [Sign In with Google for Web - Setup](https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid) to create a client ID and client secret. There's no need to make any code changes. Just use the following information:
- For **Authorized JavaScript Origins**, use `https://<app-name>.azurewebsites.net` with the name of your app in *\<app-name>*. - For **Authorized Redirect URI**, use `https://<app-name>.azurewebsites.net/.auth/login/google/callback`. 1. Copy the App ID and the App secret values.
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
In App Service, the Windows instances already have all the supported .NET Framew
For CLR 4 runtime versions (.NET Framework 4 and above): ```CMD
-ls "D:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\"
+ls "D:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework"
``` Latest .NET Framework version may not be immediately available.
Latest .NET Framework version may not be immediately available.
For CLR 2 runtime versions (.NET Framework 3.5 and below): ```CMD
-ls "D:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\"
+ls "D:\Program Files (x86)\Reference Assemblies\Microsoft\Framework"
``` ## Show current .NET Framework runtime version
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
You can interact or debug the Java Key Tool by [opening an SSH connection](confi
## Configure APM platforms
-This section shows how to connect Java applications deployed on Azure App Service with Azure Monitor application insights, NewRelic, and AppDynamics application performance monitoring (APM) platforms.
+This section shows how to connect Java applications deployed on Azure App Service with Azure Monitor Application Insights, NewRelic, and AppDynamics application performance monitoring (APM) platforms.
### Configure Application Insights
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
name: Node.js
env: AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root
NODE_VERSION: '14.x' # set this to the node version to use jobs:
name: Node.js
env: AZURE_WEBAPP_NAME: my-app # set this to your application's name
+ AZURE_WEBAPP_PACKAGE_PATH: 'my-app-path' # set this to the path to your web app project, defaults to the repository root
NODE_VERSION: '14.x' # set this to the node version to use jobs:
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
When you perform a swap with preview, App Service performs the same [swap operat
If you cancel the swap, App Service reapplies configuration elements to the source slot.
+> [!NOTE]
+> Swap with preview can't be used when one of the slots has site authentication enabled.
+>
+ To swap with preview: 1. Follow the steps in [Swap deployment slots](#Swap) but select **Perform swap with preview**.
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
Samples:
- [Tutorial: Add authentication to your web app running on Azure App Service](scenario-secure-app-authentication-app-service.md) - [Tutorial: Authenticate and authorize users end-to-end in Azure App Service (Windows or Linux)](tutorial-auth-aad.md) - [.NET Core integration of Azure AppService EasyAuth (3rd party)](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth)-- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
+- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
Advance to the next article to learn how to create a .NET Core app and connect i
> [!div class="nextstepaction"] > [Tutorial: ASP.NET Core app with SQL database](tutorial-dotnetcore-sqldb-app.md)
+> [!div class="nextstepaction"]
+> [App Template: ASP.NET Core app with SQL database and App Insights deployed using CI/CD GitHub Actions](https://github.com/Azure-Samples/app-templates-dotnet-azuresql-appservice)
+ > [!div class="nextstepaction"] > [Configure ASP.NET Core app](configure-language-dotnetcore.md)
Advance to the next article to learn how to create a .NET Framework app and conn
> [!div class="nextstepaction"] > [Tutorial: ASP.NET app with SQL database](app-service-web-tutorial-dotnet-sqldatabase.md)-
+>
> [!div class="nextstepaction"] > [Configure ASP.NET Framework app](configure-language-dotnet-framework.md)
app-service Troubleshoot Performance Degradation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-performance-degradation.md
This option enables you to find out if your application is having any issues. In
Some of the metrics that you might want to monitor for your app are * Average memory working set
-* Average response time
+* Response time
* CPU time * Memory working set * Requests
You can also manage your app using Azure PowerShell. For more information, see
## More resources
-[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2"::: **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ 1. Select **Go**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-flask-1.png":::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2"::: **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ 1. Select **Go**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-1.png":::
applied-ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/release-notes.md
# Immersive Reader JavaScript SDK Release Notes
+## Version 1.3.0
+
+This release contains new features, security vulnerability fixes, and updates to code samples.
+
+#### New Features
+
+* Added the capability for the Immersive Reader iframe to request microphone permissions for Reading Coach
+
+#### Improvements
+
+* Update code samples to use v1.3.0
+* Update code samples to demonstrate the usage of latest options from v1.2.0
+ ## Version 1.2.0 This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
-## Version 1.26 - January 2022
+## Version 1.26 - January 2023
> [!NOTE] > Version 1.26 is only available for Linux operating systems. The most recent Windows agent version is 1.25.
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Defender for Endpoint extension (MDE.Linux) on Linux to improve installation reliability
-## Version 1.25 - January 2022
+## Version 1.25 - January 2023
### New features
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Previously updated : 09/19/2022 Last updated : 01/23/2023 # Configure data persistence for a Premium Azure Cache for Redis instance
The following list contains answers to commonly asked questions about Azure Cach
- [What is a rewrite and how does it affect my cache?](#what-is-a-rewrite-and-how-does-it-affect-my-cache) - [What should I expect when scaling a cache with AOF enabled?](#what-should-i-expect-when-scaling-a-cache-with-aof-enabled) - [How is my AOF data organized in storage?](#how-is-my-aof-data-organized-in-storage)
+- [Can I have AOF persistence enabled if I have more than one replica?](#can-i-have-aof-persistence-enabled-if-i-have-more-than-one-replica)
### Can I enable persistence on a previously created cache?
Using managed identity adds the cache instance to the [trusted services list](..
### Can I have AOF persistence enabled if I have more than one replica?
-No, AOF persistence cannot be enabled with replicas (i.e replica count >= 2).
+No, you can't use Append-only File (AOF) persistence with multiple replicas (more than one replica).
## Next steps Learn more about Azure Cache for Redis features. - [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md)
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
To create a cache, follow these steps:
1. Configure your settings for clustering and/or RDB persistence.
- > [!NOTE]
- > Zone redundancy doesn't support AOF persistence with 2 or more replicas or work with geo-replication currently.
- >
+ > [!NOTE]
+ > Zone redundancy doesn't support Append-only File (AOF) persistence with multiple replicas (more than one replica).
+ > Zone redundancy doesn't work with geo-replication currently.
+ >
1. Select **Create**.
A Premium cache has one primary and one replica node by default. To configure zo
### Can I update my existing Premium cache to use zone redundancy?
-No, this isn't supported currently.
+No, updating an existing Premium cache to use zone redundancy isn't supported currently.
### How much does it cost to replicate my data across Azure Availability Zones?
-When using zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+When your cache uses zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
## Next Steps
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
+
+ Title: How to create Azure Maps applications using the Java REST SDK (preview)
+
+description: How to develop applications that incorporate Azure Maps using the Java REST SDK Developers Guide.
++ Last updated : 01/25/2023+++++
+# Java REST SDK Developers Guide (preview)
+
+The Azure Maps Java SDK can be integrated with Java applications and libraries to build maps-related and location-aware applications. The Azure Maps Java SDK contains APIs for Search, Route, Render, Elevation, Geolocation, Traffic, Timezone, and Weather. These APIs support operations such as searching for an address, routing between different coordinates, and obtaining the geolocation of a specific IP address.
+
+> [!NOTE]
+> Azure Maps Java SDK is baselined on Java 8, with testing and forward support up until the latest Java long-term support release (currently Java 18). For the list of Java versions for download, see [Java Standard Versions].
+
+## Prerequisites
+
+- [Azure Maps account].
+- [Subscription key] or other form of [authentication].
+- [Java Version 8] or above  
+- Maven (any version). For more information, see [Get started with Azure SDK and Apache Maven][maven].
+
+> [!TIP]
+> You can create an Azure Maps account programmatically. Here's an example using the Azure CLI:
+>
+> ```azurecli
+> az maps account create --kind "Gen2" --account-name "myMapAccountName" --resource-group "<resource group>" --sku "G2"
+> ```
+
+## Create a Maven project
+
+The following PowerShell code snippet demonstrates how to create a Maven project using PowerShell. First, run the Maven command that creates the project:
+
+```powershell
+mvn archetype:generate "-DgroupId=groupId" "-DartifactId=DemoProject" "-DarchetypeArtifactId=maven-archetype-quickstart" "-DarchetypeVersion=1.4" "-DinteractiveMode=false" 
+```
+
+| Parameter | Description |
+|-|--|
+| `-DgroupId` | Group ID that uniquely identifies your project across all projects|
+| `-DartifactId` | Project name. It will be created as a new folder. |
+| `-DarchetypeArtifactId` | Project type. `maven-archetype-quickstart` results in a sample project. |
+| `-DinteractiveMode` | Setting to `false` results in a blank Java project with default options. |
+
+### Install the packages
+
+To use the Azure Maps Java SDK, you will need to install all required packages. Each service in Azure Maps is available in its own package. Services include Search, Render, Traffic, Weather, etc. You only need to install the packages for the service or services you will be using in your project.
+
+After creating the Maven project, there should be a `pom.xml` file with basic information such as the group ID, name, and artifact ID. This is where you will add a dependency for each of the Azure Maps services, as shown below:
+
+```xml
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-search</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-route</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-render</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-traffic</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-weather</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-timezone</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+<dependency> 
+  <groupId>com.azure</groupId> 
+  <artifactId>azure-maps-elevation</artifactId> 
+  <version>1.0.0-beta.1</version> 
+</dependency> 
+```
+
+Run `mvn clean install` on your project, then create a Java file named `demo.java` and import what you need from Azure Maps into the file:
+
+```powershell
+cd DemoProject
+New-Item demo.java
+```
+
+> [!TIP]
+> If running `mvn clean install` results in an error, try running `mvn clean install -U`.
+
+### Azure Maps services
+
+| Service Name  | Maven package  | Samples  |
+||-|--|
+| [Search][java search readme] | [azure-maps-search][java search package] | [search samples][java search sample] |
+| [Routing][java routing readme] | [azure-maps-routing][java routing package] | [routing samples][java routing sample] |
+| [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] |
+| [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] |
+| [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
+| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
+
+## Create and authenticate a MapsSearchClient
+
+The client object used to access the Azure Maps Search APIs requires either an `AzureKeyCredential` object to authenticate when using an Azure Maps subscription key, or a `TokenCredential` object with the Azure Maps client ID when authenticating using Azure Active Directory (Azure AD). For more information on authentication, see [Authentication with Azure Maps][authentication].
+
+### Using an Azure AD credential
+
+You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential] provider, you'll need to add the Maven dependency in the `pom.xml` file:
+
+```xml
+<dependency>
+  <groupId>com.azure</groupId>
+  <artifactId>azure-identity</artifactId>
+</dependency>
+```
+
+You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+
+Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
+
+| Environment Variable | Description |
+|-||
+| AZURE_CLIENT_ID | Application (client) ID in your registered application |
+| AZURE_CLIENT_SECRET | The value of the client secret in your registered application |
+| AZURE_TENANT_ID | Directory (tenant) ID in your registered application |
+| MAPS_CLIENT_ID | The client ID in your Azure Maps account |
+
+Now you can create environment variables in PowerShell to store these values:
+
+```powershell
+$Env:AZURE_CLIENT_ID="<client-id>"
+$Env:AZURE_CLIENT_SECRET="<client-secret>"
+$Env:AZURE_TENANT_ID="<tenant-id>"
+$Env:MAPS_CLIENT_ID="<maps-client-id>"
+```
+
+After setting up the environment variables, you can use them in your program to instantiate the `AzureMapsSearch` client:
+
+```java
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.maps.search.MapsSearchClient;
+import com.azure.maps.search.MapsSearchClientBuilder;
+
+public class Demo {
+ public static void main( String[] args) {
+ MapsSearchClientBuilder builder = new MapsSearchClientBuilder();
+ DefaultAzureCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
+ builder.credential(tokenCredential);
+ builder.mapsClientId(System.getenv("MAPS_CLIENT_ID"));
+ MapsSearchClient client = builder.buildClient();
+ }
+}
+```
+
+> [!IMPORTANT]
+> The other environment variables created above, while not used in the code sample here, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
+
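+Though optional, it can help to fail fast when one of these variables is missing before building the client. The following standalone sketch is illustrative only; the `requireEnv` helper is hypothetical and not part of the SDK:
+
+```java
+public class EnvCheck {
+    // Hypothetical helper: read an environment variable or fail fast with a clear message.
+    static String requireEnv(String name) {
+        String value = System.getenv(name);
+        if (value == null || value.isEmpty()) {
+            throw new IllegalStateException("Missing required environment variable: " + name);
+        }
+        return value;
+    }
+
+    public static void main(String[] args) {
+        // Verify the variables that DefaultAzureCredential and the client builder rely on.
+        requireEnv("AZURE_CLIENT_ID");
+        requireEnv("AZURE_CLIENT_SECRET");
+        requireEnv("AZURE_TENANT_ID");
+        requireEnv("MAPS_CLIENT_ID");
+        System.out.println("All required environment variables are set.");
+    }
+}
+```
+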
+### Using a subscription key credential
+
+You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot:
++
+Now you can create environment variables in PowerShell to store the subscription key: 
+
+```powershell
+$Env:SUBSCRIPTION_KEY="<subscription-key>"
+```
+
+Once your environment variable is created, you can access it in your code:
+
+```java
+import com.azure.core.credential.AzureKeyCredential;
+import com.azure.maps.search.MapsSearchClient;
+import com.azure.maps.search.MapsSearchClientBuilder;
+
+public class Demo {
+ public static void main( String[] args) {
+
+ // Use Azure Maps subscription key authentication
+ MapsSearchClientBuilder builder = new MapsSearchClientBuilder();
+ AzureKeyCredential keyCredential = new AzureKeyCredential(System.getenv("SUBSCRIPTION_KEY"));
+ builder.credential(keyCredential);
+ MapsSearchClient client = builder.buildClient();
+ }
+}
+```
+
+## Fuzzy search an entity
+
+The following code snippet demonstrates how, in a simple console application, to import the `azure-maps-search` package and perform a fuzzy search on "Starbucks" near Seattle:
+
+```java
+import java.io.IOException;
+import com.azure.core.credential.AzureKeyCredential;
+import com.azure.core.models.GeoPosition;
+// Enable the 2 imports below if you want to use AAD authentication 
+// import com.azure.identity.DefaultAzureCredential;
+// import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.maps.search.MapsSearchClient;
+import com.azure.maps.search.MapsSearchClientBuilder;
+import com.azure.maps.search.models.FuzzySearchOptions;
+import com.azure.maps.search.models.MapsSearchAddress;
+import com.azure.maps.search.models.SearchAddressResult;
+import com.azure.maps.search.models.SearchAddressResultItem;
+
+public class Demo {
+ public static void main( String[] args) throws IOException {
+ MapsSearchClientBuilder builder = new MapsSearchClientBuilder();
+
+ // Instantiate with key credential. Get SUBSCRIPTION_KEY from environment variable: 
+ AzureKeyCredential keyCredential = new AzureKeyCredential(System.getenv("SUBSCRIPTION_KEY"));
+ builder.credential(keyCredential);
+
+ // Or you can also instantiate with token credential: 
+ // DefaultAzureCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
+ // builder.credential(tokenCredential);
+ // builder.mapsClientId(System.getenv("MAPS_CLIENT_ID"));
+ MapsSearchClient client = builder.buildClient();
+
+ // Fuzzy search with options: 
+ SearchAddressResult results = client.fuzzySearch(new FuzzySearchOptions("starbucks", new GeoPosition(-122.34255, 47.61010)));
+
+ // Print the search results:
+ for (SearchAddressResultItem item : results.getResults()) {
+         MapsSearchAddress address = item.getAddress();
+         GeoPosition coordinate = item.getPosition();
+         System.out.format(
+             "* %s, %s\\n" +
+             "  %s %s %s\\n" +
+             "  Coordinate: (%.4f, %.4f)\\n",
+             address.getStreetNumber(), address.getStreetName(),
+             address.getMunicipality(), address.getCountryCode(), address.getPostalCode(),
+             coordinate.getLatitude(), coordinate.getLongitude());
+ }
+ }
+}
+```
+
+This code snippet demonstrates how to create a `MapsSearchClient` object using Azure credentials. Start by instantiating `AzureKeyCredential` using your Azure Maps subscription key, then pass the credential to instantiate `MapsSearchClient`. `MapsSearchClient` methods such as `fuzzySearch` can use the point of interest (POI) name "Starbucks" and coordinates `GeoPosition(-122.34255, 47.61010)`.
+
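+If you only need the best match, you can read the first item of the same result list instead of looping over all of them. This short variation is a sketch that reuses the `client` object and imports from the sample above:
+
+```java
+// Print only the highest-ranked result from the same fuzzy search.
+SearchAddressResult results = client.fuzzySearch(
+    new FuzzySearchOptions("starbucks", new GeoPosition(-122.34255, 47.61010)));
+if (!results.getResults().isEmpty()) {
+    SearchAddressResultItem top = results.getResults().get(0);
+    System.out.format("Top result at (%.4f, %.4f)%n",
+        top.getPosition().getLatitude(), top.getPosition().getLongitude());
+}
+```
+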
+Execute the program from the project folder in the command line:
+
+```powershell
+java .\demo.java
+```
+
+You should see a list of Starbucks addresses and coordinates:
+
+```text
+* 1912, Pike Place
+  Seattle US 98101
+  Coordinate: (47.6102, -122.3425)
+* 2118, Westlake Avenue
+  Seattle US 98121
+  Coordinate: (47.6173, -122.3378)
+* 2601, Elliott Avenue
+  Seattle US 98121
+  Coordinate: (47.6143, -122.3526)
+* 1730, Howell Street
+  Seattle US 98101
+  Coordinate: (47.6172, -122.3298)
+* 220, 1st Avenue South
+  Seattle US 98104
+  Coordinate: (47.6003, -122.3338)
+* 400, Occidental Avenue South
+  Seattle US 98104
+  Coordinate: (47.5991, -122.3328)
+* 1600, East Olive Way
+  Seattle US 98102
+  Coordinate: (47.6195, -122.3251)
+* 500, Mercer Street
+  Seattle US 98109
+  Coordinate: (47.6250, -122.3469)
+* 505, 5Th Ave S
+  Seattle US 98104
+  Coordinate: (47.5977, -122.3285)
+* 425, Queen Anne Avenue North
+  Seattle US 98109
+  Coordinate: (47.6230, -122.3571)
+```
+
+## Search an address
+
+Call the `searchAddress` method to get the coordinates of an address. Modify the main program from the sample as follows:
+
+```java
+import java.io.IOException;
+import com.azure.core.credential.AzureKeyCredential;
+import com.azure.core.models.GeoPosition;
+// Enable the 2 imports below if you want to use AAD authentication 
+// import com.azure.identity.DefaultAzureCredential;
+// import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.maps.search.MapsSearchClient;
+import com.azure.maps.search.MapsSearchClientBuilder;
+import com.azure.maps.search.models.SearchAddressOptions;
+import com.azure.maps.search.models.SearchAddressResult;
+import com.azure.maps.search.models.SearchAddressResultItem;
+
+public class Demo {
+ public static void main( String[] args) throws IOException {
+ MapsSearchClientBuilder builder = new MapsSearchClientBuilder();
+
+ // Instantiate with key credential: 
+ AzureKeyCredential keyCredential = new AzureKeyCredential(System.getenv("SUBSCRIPTION_KEY"));
+ builder.credential(keyCredential);
+
+ // Or you can also instantiate with token credential: 
+ // DefaultAzureCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
+ // builder.credential(tokenCredential);
+ // builder.mapsClientId(System.getenv("MAPS_CLIENT_ID"));
+
+ MapsSearchClient client = builder.buildClient();
+ client.searchAddress(new SearchAddressOptions("15127 NE 24th Street, Redmond, WA 98052"));
+
+ // Search address with options and return top 5 results: 
+ SearchAddressResult results = client.searchAddress(new SearchAddressOptions("1 Main Street")
+     .setCoordinates(new GeoPosition(-74.011454, 40.706270))
+     .setRadiusInMeters(40000).setTop(5));
+
+ // Print results: 
+ if (results.getResults().size() > 0) {
+ SearchAddressResultItem item = results.getResults().get(0);
+ System.out.format("The coordinates is (%.4f, %.4f)", 
+     item.getPosition().getLatitude(), item.getPosition().getLongitude());
+ }
+ }
+}
+```
+
+In this sample, the `client.searchAddress` method returns results ordered by confidence score and prints the coordinates of the first result.
+
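+To print a readable address for each returned result rather than only the first coordinate, you can iterate the same result list. This is a sketch that assumes `MapsSearchAddress` exposes `getFreeformAddress()`, as the batch sample below does for reverse-search results:
+
+```java
+// Print the freeform address of every result returned by searchAddress.
+for (SearchAddressResultItem item : results.getResults()) {
+    System.out.println(item.getAddress().getFreeformAddress());
+}
+```
+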
+## Batch reverse search
+
+Azure Maps Search also provides some batch query methods. These methods return long-running operation (LRO) objects. The requests might not return all the results immediately, so you can choose to wait until completion or query the result periodically, as demonstrated in the batch reverse search method below and in the polling sketch that follows it:
+
+```java
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import com.azure.core.credential.AzureKeyCredential;
+import com.azure.core.models.GeoPosition;
+// Enable the 2 imports below if you want to use AAD authentication
+// import com.azure.identity.DefaultAzureCredential;
+// import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.maps.search.MapsSearchClient;
+import com.azure.maps.search.MapsSearchClientBuilder;
+import com.azure.maps.search.models.BatchReverseSearchResult;
+import com.azure.maps.search.models.ReverseSearchAddressBatchItem;
+import com.azure.maps.search.models.ReverseSearchAddressOptions;
+import com.azure.maps.search.models.ReverseSearchAddressResultItem;
+
+public class Demo {
+ public static void main( String[] args) throws IOException {
+ MapsSearchClientBuilder builder = new MapsSearchClientBuilder();
+
+ // Instantiate with key credential:
+ AzureKeyCredential keyCredential = new AzureKeyCredential(System.getenv("SUBSCRIPTION_KEY"));
+ builder.credential(keyCredential);
+
+ // Or you can also instantiate with token credential: 
+ // DefaultAzureCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
+ // builder.credential(tokenCredential);
+ // builder.mapsClientId(System.getenv("MAPS_CLIENT_ID"));
+
+ MapsSearchClient client = builder.buildClient();
+ List<ReverseSearchAddressOptions> reverseOptionsList = new ArrayList<>();
+ reverseOptionsList.add(new ReverseSearchAddressOptions(new GeoPosition(2.294911, 48.858561)));
+ reverseOptionsList.add(new ReverseSearchAddressOptions(new GeoPosition(-122.34255, 47.61010)));
+ reverseOptionsList.add(new ReverseSearchAddressOptions(new GeoPosition(-122.33817, 47.61559)).setRadiusInMeters(5000));
+ BatchReverseSearchResult batchReverseSearchResult = 
+ client.beginReverseSearchAddressBatch(reverseOptionsList).getFinalResult();
+ for (ReverseSearchAddressBatchItem item : batchReverseSearchResult.getBatchItems()) {
+     for (ReverseSearchAddressResultItem result : item.getResult().getAddresses()) {
+         System.out.println(result.getAddress().getFreeformAddress());
+     }
+ }
+ }
+}
+```
+
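+If you'd rather not block on `getFinalResult()` in a single chained call, the operation can be driven through the azure-core poller interface. This is a sketch, assuming `beginReverseSearchAddressBatch` returns a `SyncPoller` as azure-core long-running operations generally do:
+
+```java
+// Requires: import com.azure.core.util.polling.SyncPoller;
+// Wait for the batch operation to complete, then fetch the final result.
+SyncPoller<BatchReverseSearchResult, BatchReverseSearchResult> poller =
+    client.beginReverseSearchAddressBatch(reverseOptionsList);
+poller.waitForCompletion();
+BatchReverseSearchResult batchResult = poller.getFinalResult();
+```
+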
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[authentication]: azure-maps-authentication.md
+
+[Java Standard Versions]: https://www.oracle.com/java/technologies/downloads/
+[Java Version 8]: /azure/developer/java/fundamentals/?view=azure-java-stable
+[maven]: /azure/developer/java/sdk/get-started-maven
+[Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable
+[defaultazurecredential]: /azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential
+[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+
+<!-- Java SDK Developers Guide -->
+[java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search
+[java search readme]: https://github.com/Azure/azure-sdk-for-jav
+[java search sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-search/src/samples/java/com/azure/maps/search/samples
+[java routing package]: https://repo1.maven.org/maven2/com/azure/azure-maps-route
+[java routing readme]: https://github.com/Azure/azure-sdk-for-jav
+[java routing sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-route/src/samples/java/com/azure/maps/route/samples
+[java rendering package]: https://repo1.maven.org/maven2/com/azure/azure-maps-render
+[java rendering readme]: https://github.com/Azure/azure-sdk-for-jav
+[java rendering sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-render/src/samples/java/com/azure/maps/render/samples
+[java geolocation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-geolocation
+[java geolocation readme]: https://github.com/Azure/azure-sdk-for-jav
+[java geolocation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-geolocation/src/samples/java/com/azure/maps/geolocation/samples
+[java timezone package]: https://repo1.maven.org/maven2/com/azure/azure-maps-timezone
+[java timezone readme]: https://github.com/Azure/azure-sdk-for-jav
+[java timezone sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-timezone/src/samples/java/com/azure/maps/timezone/samples
+[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation
+[java elevation readme]: https://github.com/Azure/azure-sdk-for-jav
+[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
mapsDemo
||-|--| | [Search][search readme] | [@azure/maps-search][search package] | [search samples][search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
+| [Render][js render readme] | [@azure-rest/maps-render][js render package]|[render sample][js render sample] |
+| [Geolocation][js geolocation readme]|[@azure-rest/maps-geolocation][js geolocation package]|[geolocation sample][js geolocation sample] |
## Create and authenticate a MapsSearchClient
main().catch((err) => {
[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md [js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+
+[js render readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-render-rest/README.md
+[js render package]: https://www.npmjs.com/package/@azure-rest/maps-render
+[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/v1-beta
+
+[js geolocation readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-geolocation-rest/README.md
+[js geolocation package]: https://www.npmjs.com/package/@azure-rest/maps-geolocation
+[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/v1-beta
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps C# SDK supports any .NET version that is compatible with [.NET standa
For more information, see the [C# SDK Developers Guide](how-to-dev-guide-csharp-sdk.md). ## Python SDK- Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure SDK for Python policy planning][Python-version-support-policy] for more details on future Python versions. | Service Name  | PyPi package  | Samples  |
Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js]
||-|--| | [Search][js search readme] | [@azure/maps-search][js search package] | [search samples][js search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
+| [Render][js render readme] | [@azure-rest/maps-render][js render package]|[render sample][js render sample] |
+| [Geolocation][js geolocation readme]|[@azure-rest/maps-geolocation][js geolocation package]|[geolocation sample][js geolocation sample] |
For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md).
Azure Maps Java SDK supports [Java 8][Java 8] or above.
| [Routing][java routing readme] | [azure-maps-routing][java routing package] | [routing samples][java routing sample] | | [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] | | [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] |
-| [TimeZone][java timezone readme] | [azure-maps-TimeZone][java timezone package] | [TimeZone samples][java timezone sample] |
-| [Elevation][java elevation readme] | [azure-maps-Elevation][java elevation package] | [Elevation samples][java elevation sample] |
+| [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
+| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
-<!--For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java-sdk.md).-->
+For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java-sdk.md).
<!-- C# SDK Developers Guide --> [Rest API]: /rest/api/maps/
Azure Maps Java SDK supports [Java 8][Java 8] or above.
[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route [js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
-[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
-[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[js render readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-render-rest/README.md
+[js render package]: https://www.npmjs.com/package/@azure-rest/maps-render
+[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/v1-beta
+
+[js geolocation readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-geolocation-rest/README.md
+[js geolocation package]: https://www.npmjs.com/package/@azure-rest/maps-geolocation
+[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/v1-beta
<!-- Java SDK Developers Guide -->
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
# Understanding Azure Maps Transactions
-When you use [Azure Maps Services](index.yml), the API requests you make generate transactions. Your transaction usage is available for review in your [Azure portal](https://portal.azure.com) Metrics report. For additional information, see [View Azure Maps API usage metrics](how-to-view-api-usage.md). These transactions can be either billable or non-billable usage, depending on the service and the feature. It's important to understand which usage generates a billable transaction and how it's calculated so you can plan and budget for the costs associated with using Azure Maps. Billable transactions will show up in your Cost Analysis report within the Azure portal.
+When you use [Azure Maps Services](index.yml), the API requests you make generate transactions. Your transaction usage is available for review in your [Azure portal](https://portal.azure.com) Metrics report. For more information, see [View Azure Maps API usage metrics](how-to-view-api-usage.md). These transactions can be either billable or non-billable usage, depending on the service and the feature. It's important to understand which usage generates a billable transaction and how it's calculated so you can plan and budget for the costs associated with using Azure Maps. Billable transactions will show up in your Cost Analysis report within the Azure portal.
-Below is a summary of which Azure Maps services generate transactions, billable and non-billable, along with any notable aspects that are helpful to understand in how the number of transactions are calculated.
+The following table summarizes the Azure Maps services that generate transactions, billable and non-billable, along with any notable aspects that are helpful to understand in how the number of transactions are calculated.
## Azure Maps Transaction information by service

| Azure Maps Service | Billable | Transaction Calculation | Meter |
|--|-|-|-|
-| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
+| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Elevation (DEM)](/rest/api/maps/elevation)| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction</li></ul>| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
-| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the Creator table below. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
-| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
-| [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
-| [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch` which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
+| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note: the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name in your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
+| [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note: the billable Search transaction usage results generated by the batch request will have **-Batch** appended to the API name in your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
+| [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
| [Timezone](/rest/api/maps/timezone) | Yes | One request = 1 transaction | <ul><li>Location Insights Timezone (Gen2 pricing)</li><li>Standard S1 Time Zones Transactions (Gen1 S1 pricing)</li><li>Standard Time Zones Transactions (Gen1 S0 pricing)</li></ul> |
| [Traffic](/rest/api/maps/traffic) | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> |
| [Weather](/rest/api/maps/weather) | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> |
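To make the tile math concrete, here's a back-of-the-envelope sketch. The usage numbers are invented, and it assumes tile usage aggregates simply at 15 tiles = 1 transaction per the table above, which may not match the service's exact metering granularity:

```python
import math

# Hypothetical monthly usage (invented numbers for illustration only).
base_map_tiles = 1_000_000     # billed at 15 tiles = 1 transaction
static_map_requests = 25_000   # billed at 1 request = 1 transaction

tile_transactions = math.ceil(base_map_tiles / 15)   # 66,667
total = tile_transactions + static_map_requests      # 91,667
print(f"{total:,} billable transactions")
```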
Below is a summary of which Azure Maps services generate transactions, billable
| Azure Maps Creator | Billable | Transaction Calculation | Meter |
|-|-|-|-|
| [Alias](/rest/api/maps/v2/alias) | No | One request = 1 transaction | Not applicable |
-| [Conversion](/rest/api/maps/v2/conversion) | Are part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
-| [Dataset](/rest/api/maps/v2/dataset) | Are part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing)|
+| [Conversion](/rest/api/maps/v2/conversion) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
+| [Dataset](/rest/api/maps/v2/dataset) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing)|
| [Feature State](/rest/api/maps/v2/feature-state) | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) | | [Render v2](/rest/api/maps/render-v2) | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render v2, see Render v2 section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) |
-| [Tileset](/rest/api/maps/v2/tileset) | Are part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning    (Gen2 pricing) |
+| [Tileset](/rest/api/maps/v2/tileset) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
| [WFS](/rest/api/maps/v2/wfs) | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) |

<!--
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 1/23/2023 Last updated : 1/24/2023
In addition to the generally available data collection listed above, Azure Monit
| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| [SQL Best Practices Assessment](/sql/sql-server/azure-arc/assess/) | Generally available | | [Configure best practices assessment using Azure Monitor Agent](/sql/sql-server/azure-arc/assess#enable-best-practices-assessment) |
+ ## Supported regions
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Microsoft Defender for Cloud | X (Public preview) | X | |
| | Update Management | X (Public preview, independent of monitoring agents) | X | |
| | Change Tracking | X (Public preview) | X | |
+| | SQL Best Practices Assessment | X | | |
### Linux agents
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs description: Learn how to create annotations to track deployment or other significant events with Application Insights. Previously updated : 07/20/2021 Last updated : 01/24/2023
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 11/15/2022 Last updated : 01/24/2023 ms.devlang: csharp, java, javascript, vb
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 11/14/2022 Last updated : 01/24/2023 # Application Insights overview
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 10/27/2022 Last updated : 01/24/2023 # Application Insights for ASP.NET Core applications
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Application Insights data description: Automate custom daily, weekly, and monthly reports with Azure Monitor Application Insights data. Previously updated : 05/20/2019 Last updated : 01/24/2023
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Previously updated : 11/14/2022 Last updated : 01/10/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Title: Data model for metric telemetry - Azure Application Insights description: Application Insights data model for metric telemetry Previously updated : 04/25/2017 Last updated : 01/24/2023
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Title: Application Insights logging with .NET description: Learn how to use Application Insights with the ILogger interface in .NET. Previously updated : 05/20/2021 Last updated : 01/24/2023 ms.devlang: csharp
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor | Microsoft Docs description: This article discusses server firewall exceptions that are required by Azure Monitor Previously updated : 11/15/2022 Last updated : 01/10/2023
You need to open some outgoing ports in your server's firewall to allow the Appl
> [!NOTE] > As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Application Insights connection-string based regional telemetry endpoints only support TLS 1.2. Global telemetry endpoints continue to support TLS 1.0 and TLS 1.1. >
-> If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework see [Transport Layer Security (TLS) best practices with the .NET Framework](https://learn.microsoft.com/dotnet/framework/network-programming/tls) to support the newer TLS version.
+> If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework, see [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) to support the newer TLS version.
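As an illustration only, a .NET Framework app that can't rely on OS defaults can opt in to TLS 1.2 explicitly; note that the linked guidance prefers letting the operating system choose the protocol version:

```csharp
using System.Net;

// A minimal sketch: opt in to TLS 1.2 on older .NET Framework targets.
// Prefer OS defaults per the linked TLS best-practices guidance.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```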
## Status Monitor
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
ibiza Previously updated : 11/14/2022 Last updated : 01/10/2023 ms.devlang: javascript
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions,
> [!NOTE] > Live Metrics is enabled by default when you onboard it by using the recommended instructions for .NET applications.
-To manually set up Live Metrics:
+To manually configure Live Metrics:
1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector).
1. The following sample console app code shows setting up Live Metrics:
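A hedged sketch of the classic console-app wiring; the connection string is a placeholder, and the telemetry loop exists only so the Live Metrics view has data to show:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;

// A minimal sketch: register the QuickPulse (Live Metrics) processor and
// module on a telemetry configuration, then emit some telemetry.
TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
config.ConnectionString = "InstrumentationKey=<your-instrumentation-key>";

QuickPulseTelemetryProcessor quickPulseProcessor = null;
config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
    .Use(next =>
    {
        quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
        return quickPulseProcessor;
    })
    .Build();

var quickPulseModule = new QuickPulseTelemetryModule();
quickPulseModule.Initialize(config);
quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);

// Emit a steady trickle of telemetry so the Live Metrics view has data.
var client = new TelemetryClient(config);
while (true)
{
    client.TrackDependency("HTTP", "myDependency", "myCall",
        DateTimeOffset.Now, TimeSpan.FromMilliseconds(230), success: true);
    System.Threading.Thread.Sleep(1000);
}
```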
If you want to monitor a particular server role instance, you can filter by serv
## Secure the control channel
-Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filters criteria is sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as the customer ID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:
+Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filter criteria are sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as the customer ID. To keep this value secure and prevent potential disclosure to unauthorized applications, you have two options:
- **Recommended:** Secure the Live Metrics channel by using [Azure Active Directory (Azure AD) authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication). - **Legacy (no longer recommended):** Set up an authenticated channel by configuring a secret API key as explained in the "Legacy option" section.
Basic metrics include request, dependency, and exception rate. Performance metri
PerfCounters support varies slightly across versions of .NET Core that don't target the .NET Framework: - PerfCounters metrics are supported when running in Azure App Service for Windows (ASP.NET Core SDK version 2.4.1 or higher).-- PerfCounters are supported when the app is running in *any* Windows machines, like VM, Azure Cloud Service, or on-premises (ASP.NET Core SDK version 2.7.1 or higher), but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters are supported when the app is running on *any* Windows machine and targets .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
- PerfCounters are supported when the app is running *anywhere* (such as Linux, Windows, app service for Linux, or containers) in the latest versions, but only for apps that target .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher. ## Troubleshooting
As described in the [Azure TLS 1.2 migration announcement](https://azure.microso
### "Data is temporarily inaccessible" status message
-When navigating to Live Metrics you may see a banner with the status message: "Data is temporarily inaccessible. The updates on our status are posted here https://aka.ms/aistatus"
+When navigating to Live Metrics, you may see a banner with the status message: "Data is temporarily inaccessible. The updates on our status are posted here https://aka.ms/aistatus"
Verify if any firewalls or browser extensions are blocking access to Live Metrics. For example, some popular ad-blocker extensions block connections to \*.monitor.azure.com. In order to use the full capabilities of Live Metrics, either disable the ad-blocker extension or add an exclusion rule for the domain \*.livediagnostics.monitor.azure.com to your ad-blocker, firewall, etc.
+### Unexpected large number of requests to livediagnostics.monitor.azure.com
+
+Heavier traffic is expected while the Live Metrics pane is open. Navigate away from the Live Metrics pane to restore the normal flow of traffic.
+Application Insights SDKs poll QuickPulse endpoints with REST API calls once every five seconds to check if the Live Metrics pane is being viewed.
+
+The SDKs send new metrics to QuickPulse every second while the Live Metrics pane is open.
+ ## Next steps * [Monitor usage with Application Insights](./usage-overview.md)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 11/15/2022 Last updated : 01/10/2023 ms.devlang: csharp, javascript, typescript, python
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview description: This article provides an overview of how to use OpenTelemetry with Azure Monitor. Previously updated : 10/11/2021 Last updated : 01/10/2023
azure-monitor Status Monitor V2 Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-api-reference.md
Title: Azure Application Insights .Net Agent API reference description: Application Insights Agent API reference. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 04/23/2019 Last updated : 01/10/2023 # Azure Monitor Application Insights Agent API Reference
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 11/15/2022 Last updated : 01/24/2023
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Title: Best practices for autoscale description: Autoscale patterns in Azure for Web Apps, virtual machine scale sets, and Cloud Services++ Last updated 09/13/2022 -+ # Best practices for Autoscale Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
Title: Autoscale common metrics description: Learn which metrics are commonly used for autoscaling your cloud services, virtual machines, and web apps.++ Last updated 04/22/2022 -+
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
Last updated 06/22/2022-+ # Customer intent: As a user or dev ops administrator, I want to use the portal to set up autoscale so I can scale my resources.
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Title: Get started with autoscale in Azure description: "Learn how to scale your resource web app, cloud service, virtual machine, or virtual machine scale set in Azure."++ Last updated 04/05/2022 -+ # Get started with autoscale in Azure
azure-monitor Autoscale Resource Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-resource-log-schema.md
Title: Azure autoscale log events schema description: Format of logs for monitoring and troubleshooting autoscale actions++ Last updated 11/14/2019 -+ # Azure Monitor autoscale actions resource log schema
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
Title: Troubleshooting Azure Monitor autoscale description: Tracking down problems with Azure Monitor autoscaling used in Service Fabric, Virtual Machines, Web Apps, and cloud services.++ Last updated 11/4/2019 -+
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
Title: Use autoscale to send email and webhook alert notifications description: Learn how to use autoscale actions to call web URLs or send email notifications in Azure Monitor.++ Last updated 04/03/2017 -+ # Use autoscale actions to send email and webhook alert notifications in Azure Monitor This article shows you how set up triggers so that you can call specific web URLs or send emails based on autoscale actions in Azure.
azure-monitor Tutorial Autoscale Performance Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/tutorial-autoscale-performance-schedule.md
Title: Autoscale Azure resources based on data or schedule description: Create an autoscale setting for an app service plan using metric data and a schedule-++ - Last updated 12/11/2017- -+ # Create an Autoscale Setting for Azure resources based on performance data or a schedule
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|WarmStorageUsedProperties |Yes |Warm Storage Used Properties |Count |Maximum |Number of properties used by the environment for S1/S2 SKU and number of properties used by Warm Store for PAYG SKU |No Dimensions |
-## Microsoft.VoiceServices/CommunicationsGateways
-<!-- Data source : naam-->
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ActiveCallFailures |No |Active Call Failures |Percent |Average |Percentage of active call failures |PerimetaRegion |
-|ActiveCalls |No |Active Calls |Count |Average |Count of the total number of active calls (signaling sessions) |PerimetaRegion |
-|ActiveEmergencyCalls |No |Active Emergency Calls |Count |Average |Count of the total number of active emergency calls |PerimetaRegion |
-- ## Microsoft.Web/containerapps <!-- Data source : naam-->
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
Title: Azure Video Indexer accounts description: This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details. Previously updated : 12/21/2022 Last updated : 01/25/2023
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
The easiest way to begin using service tags with your Azure Video Indexer accoun
1. From the **Source** drop-down list, select **Service Tag**.
1. From the **Source service tag** drop-down list, select **VideoIndexer**. This tag contains the IP addresses of Azure Video Indexer services for all regions where available. The tag ensures that your resource can communicate with the Azure Video Indexer services no matter where it's created.
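If you script your network rules, a hedged Azure CLI equivalent of the portal steps above might look like the following; the resource names and priority are placeholders:

```azurecli
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowVideoIndexer \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes VideoIndexer \
  --destination-port-ranges '*'
```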
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 11/22/2022 Last updated : 01/24/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
## January 2023
+### Notification experience
+
+The [Azure Video Indexer website](https://www.videoindexer.ai/) now has a notification panel where you can stay informed of important product updates, such as service-impacting events, new releases, and more.
+ ### Textual logo detection
-A **textual logo detection** insight is an OCR-based textual detection which matches a specific predefined text. For example, if a user created a textual logo: "Microsoft", different appearances of the word *Microsoft* will be detected as the "Microsoft" logo. For more information, see [Detect textual logo](detect-textual-logo.md).
+**Textual logo detection** enables you to customize text logos to be detected within videos. For more information, see [Detect textual logo](detect-textual-logo.md).
+
+### Switching directories
+
+You can now switch Azure AD directories and manage Azure Video Indexer accounts across tenants using the [Azure Video Indexer website](https://www.videoindexer.ai/).
### Language support
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
Title: Switch between tenants on the Azure Video Indexer website description: This article shows how to switch between tenants in the Azure Video Indexer website. Previously updated : 08/26/2022 Last updated : 01/24/2023 # Switch between multiple tenants
-This article shows how to switch between multiple tenants on the Azure Video Indexer website. When you create an Azure Resource Manager (ARM)-based account, the new account may not show up on the Azure Video Indexer website. So you need to make sure to sign in with the correct domain.
+When working with multiple tenants/directories in the Azure environment, a user might need to switch between directories.
-The article shows how to sign in with the correct domain name into the Azure Video Indexer website:
+When signing in to the Azure Video Indexer website, a default directory loads, and the relevant accounts are listed in the **Account list**.
-1. Sign into the [Azure portal](https://portal.azure.com/) with the same subscription where your Video Indexer ARM account was created.
-1. Get the domain name of the current Azure subscription tenant.
-1. Sign in with the correct domain name on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+> [!Note]
+> Trial accounts and Classic accounts are global and not tenant-specific. Hence, the tenant switching described in this article only applies to your ARM accounts.
+>
+> The option to switch directories is available only for users using Azure Active Directory (Azure AD) to log in.
-## Get the domain name from the Azure portal
+This article shows two ways to switch tenants:
+
+- When starting from within the Azure Video Indexer website.
+- When starting from outside of the Azure Video Indexer website.
+
+## Switch tenants from within the Azure Video Indexer website
+
+1. To switch between directories on the [Azure Video Indexer](https://www.videoindexer.ai/) website, open the **User menu** and select **Switch directory**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of a user name.](./media/switch-directory/avi-user-switch.png)
+
+    Here, all detected directories are listed and the current directory is marked. Once you select a different directory, the **Switch directory** button becomes available.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of a tenant list.](./media/switch-directory/tenants.png)
+
+    Once you select **Switch directory**, your signed-in credentials are used to sign in to the Azure Video Indexer website again under the new directory.
+
+## Switch tenants from outside the Azure Video Indexer website
+
+This section shows how to get the domain name from the Azure portal. You can then use it to sign in to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+
+### Get the domain name
1. In the [Azure portal](https://portal.azure.com/), sign in with the same subscription tenant in which your Azure Video Indexer Azure Resource Manager (ARM) account was created. 1. Hover over your account name (in the right-top corner).
The article shows how to sign in with the correct domain name into the Azure Vid
If you want to see domains for all of your directories and switch between them, see [Switch and manage directories with the Azure portal](../azure-portal/set-preferences.md#switch-and-manage-directories).
-## Sign in with the correct domain name on the AVI website
+### Sign in with the correct domain name on the AVI website
1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
1. Select the button in the top-right corner, and then press **Sign out**.
If you want to see domains for all of your directories and switch between them,
> [!div class="mx-imgBorder"] > ![Sign in to an organization.](./media/switch-directory/sign-in-organization.png)
-1. Enter the domain name you copied in the [Get the domain name from the Azure portal](#get-the-domain-name-from-the-azure-portal) section.
+1. Enter the domain name you copied in the [Get the domain name from the Azure portal](#get-the-domain-name) section.
> [!div class="mx-imgBorder"] > ![Find the organization.](./media/switch-directory/find-your-organization.png)
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
Title: Embed Azure Video Indexer widgets in your apps description: Learn how to embed Azure Video Indexer widgets in your apps. Previously updated : 04/15/2022 Last updated : 01/10/2023
A Cognitive Insights widget includes all visual insights that were extracted fro
|Name|Definition|Description| ||||
-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: people, animatedCharacters, keywords, audioEffects, labels, sentiments, emotions, topics, keyframes, transcript, ocr, speakers, scenes, spokenLanguage, observedPeople and namedEntities.|
-|`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: search, download, presets, language.|
+|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `animatedCharacters`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`.|
+|`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: `search`, `download`, `presets`, `language`.|
|`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`|
|`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.|
|`tab` | The default selected tab | Controls the **Insights** tab that's rendered by default. <br/>Example: `tab=timeline` renders the insights with the **Timeline** tab selected.|
|`search` | String | Allows you to control the initial search term.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=azure` renders the insights filtered by the word "Azure". |
-|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consists of 3 values: widget name, property and order, connected with '_' `sort=name_property_order`<br/>Available options:<br/>widgets: keywords, audioEffects, labels, sentiments, emotions, keyframes, scenes, namedEntities and spokenLanguage.<br/>property: startTime, endTime, seenDuration, name and ID.<br/>order: asc and desc.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by ID in ascending order and keywords sorted by name in descending order.|
+|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consists of three values: widget name, property, and order, connected with '_': `sort=name_property_order`<br/>Available options:<br/>widgets: `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `keyframes`, `scenes`, `namedEntities` and `spokenLanguage`.<br/>property: `startTime`, `endTime`, `seenDuration`, `name` and `ID`.<br/>order: `asc` and `desc`.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by ID in ascending order and keywords sorted by name in descending order.|
|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.| ### Player widget
If you embed Azure Video Indexer insights with your own [Azure Media Player](htt
You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
-The possible values are: `people`, `animatedCharacters` , `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, and `namedEntities`.
+The possible values are: `people`, `animatedCharacters` , `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `namedEntities`, `logos`.
For example, if you want to embed a widget that contains only people and keywords insights, the iframe embed URL will look like this:
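A hedged sketch of such an embed; the account and video IDs, the iframe dimensions, and the `location` value are placeholders to adjust for your account:

```html
<iframe
  src="https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords&location=trial"
  width="580" height="780" allowfullscreen frameborder="0"></iframe>
```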
See the [code samples](https://github.com/Azure-Samples/media-services-video-ind
For more information, see [supported browsers](video-indexer-get-started.md#supported-browsers).
-## Embed and customize Azure Video Indexer widgets in your app using npm package
+## Embed and customize Azure Video Indexer widgets in your app using the npmjs package
-Using our [@azure/video-analyzer-for-media-widgets](https://www.npmjs.com/package/@azure/video-analyzer-for-media-widgets) NPM package, you can add the insights widgets to your app and customize it according to your needs.
+Using our [@azure/video-analyzer-for-media-widgets](https://www.npmjs.com/package/@azure/video-analyzer-for-media-widgets) package, you can add the insights widgets to your app and customize them according to your needs.
Instead of adding an iframe element to embed the insights widget, this package lets you easily embed and communicate between widgets. Customizing your widget is supported only in this package, all in one place.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 01/13/2023 Last updated : 01/25/2023
Under **Manage**, select **Storage**.
:::image type="content" source="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png" alt-text="Image shows the navigation to Connect Azure NetApp Files volume pop-up window." lightbox="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png":::

1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud.
-1. Under **Associated cluster**, select the **Client cluster** to associate the NFS volume as a datastore
+1. Under **Associated cluster**, in the **Client cluster** field, select one or more clusters to associate the volume as a datastore.
1. Under **Data store**, create a personalized name for your **Datastore name**.
1. When the datastore is created, you should see all of your datastores in **Storage**.
1. You'll also notice that the NFS datastores are added in vCenter.
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **How are the datastores charged, is there an additional charge?** Azure NetApp Files NFS volumes that are used as datastores will be billed following the [capacity pool based billing model](../azure-netapp-files/azure-netapp-files-cost-model.md). Billing will depend on the service level. There's no extra charge for using Azure NetApp Files NFS volumes as datastores.+
+- **Can a single Azure NetApp Files datastore be added to multiple clusters within the same Azure VMware Solution SDDC?**
+
+  Yes, you can select multiple clusters at the time of datastore creation. Additional clusters may be added or removed after the initial creation as well.
azure-web-pubsub Reference Json Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-reliable-webpubsub-subprotocol.md
Last updated 01/09/2023
# Azure Web PubSub Reliable JSON WebSocket subprotocol
-The JSON WebSocket subprotocol, `json.reliable.webpubsub.azure.v1`, enables the highly reliable exchange of publish/subscribe messages directly between clients even during network issues.
+The JSON WebSocket subprotocol, `json.reliable.webpubsub.azure.v1`, enables the highly reliable exchange of publish/subscribe messages directly between clients through the service without a round trip to the upstream server.
-This document describes the subprotocol json.reliable.webpubsub.azure.v1.
+This document describes the subprotocol `json.reliable.webpubsub.azure.v1`.
> [!NOTE] > Reliable protocols are still in preview. Some changes are expected in the future.
To overcome intermittent network issues and maintain reliable message delivery,
A *Reliable PubSub WebSocket client* can:
-* reconnect a dropped connection.
+* recover a connection from intermittent network issues.
* recover from message loss.
* join a group using [join requests](#join-groups) (see the sketch after this list).
+* leave a group using [leave requests](#leave-groups).
* publish messages directly to a group using [publish requests](#publish-messages).
* route messages directly to upstream event handlers using [event requests](#send-custom-events).
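As a sketch of the request shape, a join request carries an `ackId` so the client can match the service's acknowledgment and detect loss; the group name and `ackId` value below are placeholders:

```json
{
    "type": "joinGroup",
    "group": "group1",
    "ackId": 1
}
```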
azure-web-pubsub Reference Json Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-webpubsub-subprotocol.md
Last updated 01/09/2023
# Azure Web PubSub supported JSON WebSocket subprotocol
-The JSON WebSocket subprotocol, `json.webpubsub.azure.v1`, enables the exchange of publish/subscribe messages directly between clients. A WebSocket connection using the `json.webpubsub.azure.v1` subprotocol is called a *PubSub WebSocket client*.
+The JSON WebSocket subprotocol, `json.webpubsub.azure.v1`, enables the exchange of publish/subscribe messages between clients through the service without a round trip to the upstream server. A WebSocket connection using the `json.webpubsub.azure.v1` subprotocol is called a *PubSub WebSocket client*.
## Overview
-In a simple WebSocket client, a *server* role is required to handle events from clients. A simple WebSocket connection triggers a `message` event when it sends messages and relies on the server-side to process messages and do other operations.
+A simple WebSocket connection triggers a `message` event when it sends messages and relies on the server-side to process messages and do other operations.
With the `json.webpubsub.azure.v1` subprotocol, you can create *PubSub WebSocket clients* that can:

* join a group using [join requests](#join-groups).
* publish messages directly to a group using [publish requests](#publish-messages).
-* route messages directly to upstream event handlers using [event requests](#send-custom-events).
+* route messages to different upstream event handlers using [event requests](#send-custom-events).
For example, you can create a *PubSub WebSocket client* with the following JavaScript code:
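A hedged sketch; the endpoint, hub, and token are placeholders:

```javascript
// The subprotocol is passed as the second argument when opening the socket.
const pubsub = new WebSocket(
  'wss://<service-name>.webpubsub.azure.com/client/hubs/<hub>?access_token=<token>',
  'json.webpubsub.azure.v1');
```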
Message types received by the client can be:
* ack - The response to a request containing an `ackId`.
* message - Messages from the group or server.
-* system - Responses from the Web PubSub service to system related client requests.
+* system - Messages from the Web PubSub service.
### Ack response
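When a request includes an `ackId`, the service returns a matching ack message. A hedged sketch of a successful ack follows; the values are placeholders, and failed requests carry an `error` object instead:

```json
{
    "type": "ack",
    "ackId": 1,
    "success": true
}
```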
If the REST API is sending a string `Hello World` using `application/json` conte
### System response
-The Web PubSub service sends system related responses to client requests.
+The Web PubSub service sends system-related messages to clients.
#### Connected
-The response to the client connect request:
+The message sent to the client when the client successfully connects:
```json {
The response to the client connect request:
#### Disconnected
-The response when the server closes the connection, or when the service declines the client.
+The message sent to the client when the server closes the connection, or when the service declines the client.
```json {
backup Backup Azure Alternate Dpm Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-alternate-dpm-server.md
Title: Recover data from an Azure Backup Server description: Recover the data you've protected to a Recovery Services vault from any Azure Backup Server registered to that vault.- Previously updated : 07/09/2019+ Last updated : 01/24/2023++++ + # Recover data from Azure Backup Server
-You can use Azure Backup Server to recover the data you've backed up to a Recovery Services vault. The process for doing so is integrated into the Azure Backup Server management console, and is similar to the recovery workflow for other Azure Backup components.
+This article describes how to recover data from Azure Backup Server.
+
+You can use Azure Backup Server to recover the data you've backed up to a Recovery Services vault. The process for doing so is integrated into the Azure Backup Server management console, and is similar to the recovery workflow for other Azure Backup components.
> [!NOTE] > This article is applicable for [System Center Data Protection Manager 2012 R2 with UR7 or later](https://support.microsoft.com/kb/3065246), combined with the [latest Azure Backup agent](https://aka.ms/azurebackup_agent).
->
->
-To recover data from an Azure Backup Server:
+## Recover the data
+
+To recover data from an Azure Backup Server, follow these steps:
-1. From the **Recovery** tab of the Azure Backup Server management console, select **'Add External DPM'** (at the top left of the screen).
+1. On the **Recovery** tab of the Azure Backup Server management console, select **'Add External DPM'** (at the top left of the screen).
- ![Add External DPM](./media/backup-azure-alternate-dpm-server/add-external-dpm.png)
+ ![Screenshot shows how to add external DPM.](./media/backup-azure-alternate-dpm-server/add-external-dpm.png)
2. Download new **vault credentials** from the vault associated with the **Azure Backup Server** where the data is being recovered, choose the Azure Backup Server from the list of Azure Backup Servers registered with the Recovery Services vault, and provide the **encryption passphrase** associated with the server whose data is being recovered.
- ![External DPM Credentials](./media/backup-azure-alternate-dpm-server/external-dpm-credentials.png)
+ ![Screenshot shows how to download the external DPM credentials.](./media/backup-azure-alternate-dpm-server/external-dpm-credentials.png)
> [!NOTE] > Only Azure Backup Servers associated with the same registration vault can recover each otherΓÇÖs data.
- >
- >
+ Once the External Azure Backup Server is successfully added, you can browse the data of the external server and the local Azure Backup Server from the **Recovery** tab.

3. Browse the available list of production servers protected by the external Azure Backup Server and select the appropriate data source.
- ![Browse External DPM Server](./media/backup-azure-alternate-dpm-server/browse-external-dpm.png)
+ ![Screenshot shows how to browse external DPM server.](./media/backup-azure-alternate-dpm-server/browse-external-dpm.png)
4. Select **the month and year** from the **Recovery points** drop-down list, select the required **Recovery date** for when the recovery point was created, and select the **Recovery time**. A list of files and folders appears in the bottom pane, which can be browsed and recovered to any location.
- ![External DPM Server Recovery Points](./media/backup-azure-alternate-dpm-server/external-dpm-recoverypoint.png)
+ ![Screenshot shows the external DPM Server recovery points.](./media/backup-azure-alternate-dpm-server/external-dpm-recoverypoint.png)
5. Right-click the appropriate item and select **Recover**.
- ![External DPM recovery](./media/backup-azure-alternate-dpm-server/recover.png)
+ ![Screenshot shows how to start external DPM recovery.](./media/backup-azure-alternate-dpm-server/recover.png)
6. Review the **Recover Selection**. Verify the date and time of the backup copy being recovered, as well as the source from which the backup copy was created. If the selection is incorrect, select **Cancel** to navigate back to the recovery tab and select the appropriate recovery point. If the selection is correct, select **Next**.
- ![External DPM recovery summary](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-summary.png)
+ ![Screenshot shows the external DPM recovery summary.](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-summary.png)
7. Select **Recover to an alternate location**. **Browse** to the correct location for the recovery.
- ![External DPM recovery alternate location](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-alternate-location.png)
+ ![Screenshot shows how to start the external DPM recovery to an alternate location.](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-alternate-location.png)
8. Choose **Create copy**, **Skip**, or **Overwrite**.
   * **Create copy** - creates a copy of the file if there's a name collision.
To recover data from an Azure Backup Server:
   Specify whether a **Notification** is sent once the recovery completes successfully.
- ![External DPM Recovery Notifications](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-notifications.png)
+ ![Screenshot shows how to view the external DPM recovery notifications.](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-notifications.png)
9. The **Summary** screen lists the options chosen so far. Once you select **Recover**, the data is recovered to the appropriate on-premises location.
- ![External DPM Recovery Options Summary](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-options-summary.png)
+ ![Screenshot shows how to view the external DPM recovery options summary.](./media/backup-azure-alternate-dpm-server/external-dpm-recovery-options-summary.png)
> [!NOTE] > The recovery job can be monitored in the **Monitoring** tab of the Azure Backup Server.
- >
- >
- ![Monitoring Recovery](./media/backup-azure-alternate-dpm-server/monitoring-recovery.png)
+
+ ![Screenshot shows how to monitor the recovery.](./media/backup-azure-alternate-dpm-server/monitoring-recovery.png)
10. You can select **Clear External DPM** on the **Recovery** tab of the DPM server to remove the view of the external DPM server.
- ![Clear External DPM](./media/backup-azure-alternate-dpm-server/clear-external-dpm.png)
+ ![Screenshot shows how to clear external DPM.](./media/backup-azure-alternate-dpm-server/clear-external-dpm.png)
-## Troubleshooting error messages
+## Troubleshoot error messages
-| No. | Error Message | Troubleshooting steps |
-|::|: |: |
-| 1. |This server is not registered to the vault specified by the vault credential. |**Cause:** This error appears when the vault credential file selected does not belong to the Recovery Services vault associated with Azure Backup Server on which the recovery is attempted. <br> **Resolution:** Download the vault credential file from the Recovery Services vault to which the Azure Backup Server is registered. |
-| 2. |Either the recoverable data is not available or the selected server is not a DPM server. |**Cause:** There are no other Azure Backup Servers registered to the Recovery Services vault, or the servers have not yet uploaded the metadata, or the selected server is not an Azure Backup Server (using Windows Server or Windows Client). <br> **Resolution:** If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed. <br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job will upload the metadata for all the protected backups to cloud. The data will be available for recovery. |
-| 3. |No other DPM server is registered to this vault. |**Cause:** There are no other Azure Backup Servers that are registered to the vault from which the recovery is being attempted.<br>**Resolution:** If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed.<br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job uploads the metadata for all protected backups to cloud. The data will be available for recovery. |
-| 4. |The encryption passphrase provided does not match with passphrase associated with the following server: **\<server name>** |**Cause:** The encryption passphrase used in the process of encrypting the data from the Azure Backup Server's data that's being recovered doesn't match the encryption passphrase provided. The agent is unable to decrypt the data, and so the recovery fails.<br>**Resolution:** Provide the exact same encryption passphrase associated with the Azure Backup Server whose data is being recovered. |
+| Error Message | Cause | Resolution |
+|: |: |: |
+|This server is not registered to the vault specified by the vault credential. | This error appears when the vault credential file selected doesn't belong to the Recovery Services vault associated with Azure Backup Server on which the recovery is attempted. | Download the vault credential file from the Recovery Services vault to which the Azure Backup Server is registered. |
+|Either the recoverable data isn't available or the selected server isn't a DPM server. | There are no other Azure Backup Servers registered to the Recovery Services vault, or the servers haven't yet uploaded the metadata, or the selected server isn't an Azure Backup Server (using Windows Server or Windows Client). | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed. <br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job will upload the metadata for all the protected backups to cloud. The data will be available for recovery. |
+|No other DPM server is registered to this vault. | There are no other Azure Backup Servers that are registered to the vault from which the recovery is being attempted. | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed.<br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job uploads the metadata for all protected backups to cloud. The data will be available for recovery. |
+|The encryption passphrase provided does not match with passphrase associated with the following server: **\<server name>** | The encryption passphrase used to encrypt the Azure Backup Server's data that's being recovered doesn't match the passphrase provided. The agent is unable to decrypt the data, and so the recovery fails. | Provide the exact same encryption passphrase associated with the Azure Backup Server whose data is being recovered. |
## Next steps
cognitive-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md
Previously updated : 01/20/2023 Last updated : 01/25/2023
For an example, see the [Speech translation quickstart](get-started-speech-trans
## Next steps
+* [Azure gaming documentation](/gaming/azure/)
* [Text-to-speech quickstart](get-started-text-to-speech.md) * [Speech-to-text quickstart](get-started-speech-to-text.md) * [Speech translation quickstart](get-started-speech-translation.md)
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
Previously updated : 11/29/2022 Last updated : 01/25/2023 ms.devlang: csharp
The name of each `operationId` in version 3.1 is prefixed with the object name.
|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)| |`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)| |`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
-|`/healthstatus`|GET|[ServiceHealth_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/ServiceHealth_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)| |`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)| |`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Health status provides insights about the overall health of the service and sub-
|Path|Method|Version 3.1|Version 3.0| |||||
-|`/healthstatus`|GET|[ServiceHealth_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/ServiceHealth_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
## Models
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
Previously updated : 06/29/2022 Last updated : 01/25/2023 zone_pivot_groups: usage-custom-language-features
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
Previously updated : 09/28/2022 Last updated : 01/25/2023 zone_pivot_groups: usage-custom-language-features
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 09/29/2022 Last updated : 01/25/2023 # Dapr integration with Azure Container Apps
There are a few approaches supported in container apps to securely establish con
For Azure-hosted services, Dapr can use the managed identity of the scoped container apps to authenticate to the backend service provider. When using managed identity, you don't need to include secret information in a component manifest. Using managed identity is preferred as it eliminates storage of sensitive input in components and doesn't require managing a secret store.
+> [!NOTE]
+> The `azureClientId` metadata field (the client ID of the managed identity) is **required** for any component that authenticates with a user-assigned managed identity.
+ #### Using a Dapr secret store component reference When you create Dapr components for non-AD enabled services, certain metadata fields require sensitive input values. The recommended approach for retrieving these secrets is to reference an existing Dapr secret store component that securely accesses secret information.
metadata:
value: [your_keyvault_name] - name: azureEnvironment value: "AZUREPUBLICCLOUD"
- - name: azureClientId # Only required if using user-assigned managed identity
+ - name: azureClientId # Only required when authenticating with a user-assigned managed identity
value: [your_managed_identity_client_id] scopes: - publisher-app
container-registry Container Registry Dedicated Data Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-dedicated-data-endpoints.md
+
+ Title: Mitigate data exfiltration with dedicated data endpoints
+description: Azure Container Registry is introducing dedicated data endpoints available to mitigate data-exfiltration concerns.
+++ Last updated : 12/22/2022++
+# Azure Container Registry mitigating data exfiltration with dedicated data endpoints
+
+Azure Container Registry introduces dedicated data endpoints. The feature enables tightly scoped client firewall rules for specific registries, minimizing data-exfiltration concerns.
+
+The dedicated data endpoints feature is available in the **Premium** service tier. For pricing information, see [Azure Container Registry pricing](https://azure.microsoft.com/pricing/details/container-registry/).
+
+Pulling content from a registry involves two endpoints:
+
+* The *registry endpoint*, often referred to as the login URL, is used for authentication and content discovery. A command like `docker pull contoso.azurecr.io/hello-world` makes a REST request, which authenticates and negotiates the layers that represent the requested artifact.
+* *Data endpoints* serve blobs representing content layers.
+## Registry managed storage accounts
+
+Azure Container Registry is a multi-tenant service. The registry service manages the data endpoint storage accounts. The benefits of the managed storage accounts include load balancing, contentious content splitting, multiple copies for higher concurrent content delivery, and multi-region support with [geo-replication](container-registry-geo-replication.md).
+
+## Azure Private Link virtual network support
+
+The [Azure Private Link virtual network support](container-registry-private-link.md) enables the private endpoints for the managed registry service from Azure Virtual Networks. In this case, both the registry and data endpoints are accessible from within the virtual network, using private IPs.
+
+Once the managed registry service and storage accounts are both secured for access from within the virtual network, the public endpoints are removed.
+Unfortunately, virtual network connection isn't always an option.
+
+> [!IMPORTANT]
+>[Azure Private Link](container-registry-private-link.md) is the most secure way to control network access between clients and the registry, as network traffic is limited to the Azure Virtual Network, using private IPs. When Private Link isn't an option, dedicated data endpoints give you confidence about which resources are accessible from each client.
+
+## Client firewall rules and data exfiltration risks
+
+Client firewall rules limit access to specific resources. The firewall rules apply when connecting to a registry from on-premises hosts, IoT devices, and custom build agents. The rules also apply when Private Link support isn't an option.
+As customers locked down their client firewall configurations, they realized they had to create a rule with a wildcard for all storage accounts, raising data-exfiltration concerns. A bad actor could deploy code capable of writing to their storage account.
+To address these data-exfiltration concerns, Azure Container Registry makes dedicated data endpoints available.
+
+## Dedicated data endpoints
+
+Dedicated data endpoints help retrieve layers from the Azure Container Registry service, using fully qualified domain names that represent the registry domain.
+
+As any registry may become geo-replicated, a regional pattern is used: `[registry].[region].data.azurecr.io`.
+
+For the Contoso example, multiple regional data endpoints are added, supporting the local region with a nearby replica.
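As an illustration only (not part of the registry tooling), the following sketch expands that regional pattern into the fully qualified domain names a client firewall allow list would need; the `contoso` registry name and replica regions are assumptions taken from the example above:

```python
registry = "contoso"
replica_regions = ["eastus", "westus"]  # assumed replica regions

# One login endpoint plus one dedicated data endpoint per region,
# following the [registry].[region].data.azurecr.io pattern.
allow_list = [f"{registry}.azurecr.io"] + [
    f"{registry}.{region}.data.azurecr.io" for region in replica_regions
]

for fqdn in allow_list:
    print(fqdn)
```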
+
+With dedicated data endpoints, the bad actor is blocked from writing to other storage accounts.
+## Enabling dedicated data endpoints
+
+> [!NOTE]
+> Switching to dedicated data-endpoints will impact clients that have configured firewall access to the existing `*.blob.core.windows.net` endpoints, causing pull failures. To ensure clients have consistent access, add the new data-endpoints to the client firewall rules. Once completed, existing registries can enable dedicated data-endpoints through the Azure CLI.
+
+To use the Azure CLI steps in this article, Azure CLI version 2.4.0 or later is required. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli) or run in [Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+* Run the [az acr update](/cli/azure/acr#az-acr-update) command to enable dedicated data endpoints.
+
+```azurecli-interactive
+az acr update --name contoso --data-endpoint-enabled
+```
+
+* Run the [az acr show-endpoints](/cli/azure/acr#az-acr-show-endpoints) command to view the data endpoints, including regional endpoints for geo-replicated registries.
+
+```azurecli-interactive
+az acr show-endpoints --name contoso
+```
+
+Sample output:
+
+```json
+{
+ "loginServer": "contoso.azurecr.io",
+ "dataEndpoints": [
+ {
+ "region": "eastus",
+ "endpoint": "contoso.eastus.data.azurecr.io",
+ },
+ {
+ "region": "westus",
+ "endpoint": "contoso.westus.data.azurecr.io",
+ }
+ ]
+}
+
+```
+
+## Next steps
+
+* Configure access to an Azure container registry from behind [firewall rules](container-registry-firewall-access-rules.md).
+* Connect to Azure Container Registry using [Azure Private Link](container-registry-private-link.md).
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md
Copy the following contents to a new file and save it using a filename such as `
{ "comments": "Container registry for storing docker images", "type": "Microsoft.ContainerRegistry/registries",
- "apiVersion": "2020-11-01-preview",
+ "apiVersion": "2020-11-01",
"name": "[parameters('acrName')]", "location": "[parameters('location')]", "sku": {
Copy the following contents to a new file and save it using a filename such as `
}, { "type": "Microsoft.ContainerRegistry/registries/replications",
- "apiVersion": "2020-11-01-preview",
+ "apiVersion": "2020-11-01",
"name": "[concat(parameters('acrName'), '/', parameters('acrReplicaLocation'))]", "location": "[parameters('acrReplicaLocation')]", "dependsOn": [
Copy the following contents to a new file and save it using a filename such as `
], "outputs": { "acrLoginServer": {
- "value": "[reference(resourceId('Microsoft.ContainerRegistry/registries',parameters('acrName')),'2019-12-01-preview').loginServer]",
+ "value": "[reference(resourceId('Microsoft.ContainerRegistry/registries',parameters('acrName')),'2019-12-01').loginServer]",
"type": "string" } }
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
Title: Quickstart - Azure Cosmos DB for NoSQL client library for Python
-description: Learn how to build a .NET app to manage Azure Cosmos DB for NoSQL account resources and data in this quickstart.
+description: Learn how to build a Python app to manage Azure Cosmos DB for NoSQL account resources and data.
Get started with the Azure Cosmos DB client library for Python to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks. > [!NOTE]
-> The [example code snippets](https://github.com/azure-samples/cosmos-db-nosql-python-samples) are available on GitHub as a .NET project.
+> The [example code snippets](https://github.com/azure-samples/cosmos-db-nosql-python-samples) are available on GitHub as a Python project.
[API reference documentation](/python/api/azure-cosmos/azure.cosmos) | [Library source code](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos) | [Package (PyPI)](https://pypi.org/project/azure-cosmos) | [Samples](samples-python.md)
Get started with the Azure Cosmos DB client library for Python to create databas
### Prerequisite check -- In a terminal or command window, run ``python --version`` to check that the .NET SDK is version 3.7 or later.
+- In a command shell, run `python --version` to check that the version is 3.7 or later.
- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed. ## Setting up
-This section walks you through creating an Azure Cosmos DB account and setting up a project that uses Azure Cosmos DB for NoSQL client library for .NET to manage resources.
+This section walks you through creating an Azure Cosmos DB account and setting up a project that uses the Azure Cosmos DB for NoSQL client library for Python to manage resources.
### Create an Azure Cosmos DB account
This section walks you through creating an Azure Cosmos DB account and setting u
Create a new Python code file (*app.py*) in an empty folder using your preferred integrated development environment (IDE).
-### Install the package
+### Install packages
-Add the [`azure-cosmos`](https://pypi.org/project/azure-cosmos) PyPI package to the Python app. Use the `pip install` command to install the package.
+Use the `pip install` command to install packages you'll need in the quickstart.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+Add the [`azure-cosmos`](https://pypi.org/project/azure-cosmos) and [`azure-identity`](https://pypi.org/project/azure-identity) PyPI packages to the Python app.
+
+```bash
+pip install azure-cosmos
+pip install azure-identity
+```
+
+### [Connection String](#tab/connection-string)
+
+Add the [`azure-cosmos`](https://pypi.org/project/azure-cosmos) PyPI package to the Python app.
```bash
pip install azure-cosmos
```

+++

### Configure environment variables

[!INCLUDE [Create environment variables for key and endpoint](./includes/environment-variables.md)]
You'll use the following Python classes to interact with these resources:
- [Get an item](#get-an-item) - [Query items](#query-items)
-The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+The sample code described in this article creates a database named ``cosmicworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
For this sample code, the container will use the category as a logical partition key. ### Authenticate the client
-From the project directory, open the *app.py* file. In your editor, import the `os` and `json` modules. Then, import the `CosmosClient` and `PartitionKey` classes from the `azure.cosmos` module.
-#### [Sync](#tab/sync)
+## [Passwordless (Recommended)](#tab/passwordless)
-#### [Async](#tab/async)
+#### Authenticate using DefaultAzureCredential
-
-Create constants for the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables using `os.environ`.
+From the project directory, open the *app.py* file. In your editor, add modules to work with Cosmos DB and authenticate to Azure. You'll authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` from the [`azure-identity`](https://pypi.org/project/azure-identity/) package. `DefaultAzureCredential` will automatically discover and use the account you signed in with previously.
-#### [Sync / Async](#tab/sync+async)
+Create an environment variable that specifies your Cosmos DB endpoint.
- Create constants for the database and container names.
-#### [Sync / Async](#tab/sync+async)
+Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the `DefaultAzureCredential` object.
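As a minimal sketch of this step (assuming the `COSMOS_ENDPOINT` environment variable described above, and the database and container names used in this quickstart):

```python
import os

from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

ENDPOINT = os.environ["COSMOS_ENDPOINT"]

DATABASE_NAME = "cosmicworks"
CONTAINER_NAME = "products"

# DefaultAzureCredential discovers the identity you signed in with
# (Azure CLI, environment variables, managed identity, and so on).
credential = DefaultAzureCredential()

# Data operations through this client are authorized with Azure RBAC.
client = CosmosClient(url=ENDPOINT, credential=credential)
```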
-
-Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the two variables you created as parameters.
+## [Connection String](#tab/connection-string)
-#### [Sync](#tab/sync)
+From the project directory, open the *app.py* file. In your editor, import the `os` and `json` modules. Then, import the `CosmosClient` and `PartitionKey` classes from the `azure.cosmos` module.
-#### [Async](#tab/async)
+Create constants for the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables using `os.environ`.
+
-> [!IMPORTANT]
-> Please the client instance in a coroutine function named `manage_cosmos`. Within the coroutine function, define the new client with the `async with` keywords. Outside of the coroutine function, use the `asyncio.run` function to execute the coroutine asynchronously.
+Create constants for the database and container names.
+
+Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the two variables you created as parameters.
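A rough sketch of this step, assuming the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables are set:

```python
import os
import json  # used later when working with items

from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = os.environ["COSMOS_ENDPOINT"]
KEY = os.environ["COSMOS_KEY"]

DATABASE_NAME = "cosmicworks"
CONTAINER_NAME = "products"

# Authenticate with the account key.
client = CosmosClient(url=ENDPOINT, credential=KEY)
```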
### Create a database
-Use the [`CosmosClient.create_database_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient#azure-cosmos-cosmos-client-cosmosclient-create-database-if-not-exists) method to create a new database if it doesn't already exist. This method will return a [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.databaseproxy) reference to the existing or newly created database.
+## [Passwordless (Recommended)](#tab/passwordless)
-#### [Sync](#tab/sync)
+The `azure-cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations, such as creating and deleting databases, you must use RBAC through one of the following options:
+> - [Azure CLI scripts](manage-with-cli.md)
+> - [Azure PowerShell scripts](manage-with-powershell.md)
+> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
+> - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/)
-#### [Async](#tab/async)
+The Azure CLI approach is used for this quickstart and passwordless access. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command to create a Cosmos DB for NoSQL database.
+
+```azurecli
+# Create a SQL API database
+az cosmosdb sql database create `
+ --account-name <cosmos-db-account-name> `
+ --resource-group <resource-group-name> `
+ --name cosmicworks
+```
+
+The command line to create a database is for PowerShell, shown on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line.
+## [Connection String](#tab/connection-string)
+
+Use the [`CosmosClient.create_database_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient#azure-cosmos-cosmos-client-cosmosclient-create-database-if-not-exists) method to create a new database if it doesn't already exist. This method will return a [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.databaseproxy) reference to the existing or newly created database.
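For example, continuing from the client created earlier (a sketch, not the quickstart's exact snippet):

```python
# Create the database if it doesn't exist yet; a DatabaseProxy
# for "cosmicworks" is returned either way.
database = client.create_database_if_not_exists(id=DATABASE_NAME)
print(f"Database\t{database.id}")
```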
+ ### Create a container
-The [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partitionkey) class defines a partition key path that you can use when creating a container.
+## [Passwordless (Recommended)](#tab/passwordless)
-#### [Sync](#tab/sync)
+The `azure-cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations, such as creating and deleting databases, you must use RBAC through one of the following options:
+> - [Azure CLI scripts](manage-with-cli.md)
+> - [Azure PowerShell scripts](manage-with-powershell.md)
+> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
+> - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/)
+The Azure CLI approach is used in this example. Use the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command to create a Cosmos DB container.
-#### [Async](#tab/async)
+```azurecli
+# Create a SQL API container
+az cosmosdb sql container create `
+ --account-name <cosmos-db-account-name> `
+ --resource-group <resource-group-name> `
+ --database-name cosmicworks `
+ --partition-key-path "/categoryId" `
+ --name products
+```
+
+The command line to create a container is for PowerShell, on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line. For Bash, you'll also need to add `MSYS_NO_PATHCONV=1` before the command so that Bash deals with the partition key parameter correctly.
+
+After the resources have been created, use classes from the `azure-cosmos` client library to connect to and query the database.
+## [Connection String](#tab/connection-string)
+
+The [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partitionkey) class defines a partition key path that you can use when creating a container.
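A short sketch, assuming the `database` proxy from the previous step and the `/categoryId` partition key path used elsewhere in this quickstart:

```python
# Define the partition key path for the container.
key_path = PartitionKey(path="/categoryId")

# Create the container if it doesn't exist; a ContainerProxy is returned.
container = database.create_container_if_not_exists(
    id=CONTAINER_NAME, partition_key=key_path
)
print(f"Container\t{container.id}")
```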
cosmos-db Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-columnar.md
Previously updated : 05/23/2022 Last updated : 01/25/2023
storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archivin
## Limitations
-This feature still has significant limitations. See [limits and
-limitations](reference-limits.md#columnar-storage).
+This feature still has significant limitations:
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No tidscans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
## Next steps
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
Previously updated : 02/25/2022 Last updated : 01/25/2023 # Azure Cosmos DB for PostgreSQL limits and limitations
cluster, the `citus` database. Creating another
database is currently not allowed, and the CREATE DATABASE command will fail with an error.
-### Columnar storage
-
-Azure Cosmos DB for PostgreSQL currently has these limitations with [columnar
-tables](concepts-columnar.md):
-
-* Compression is on disk, not in memory
-* Append-only (no UPDATE/DELETE support)
-* No space reclamation (for example, rolled-back transactions may still consume
- disk space)
-* No index support, index scans, or bitmap index scans
-* No tidscans
-* No sample scans
-* No TOAST support (large values supported inline)
-* No support for ON CONFLICT statements (except DO NOTHING actions with no
- target specified).
-* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
-* No support for serializable isolation level
-* Support for PostgreSQL server versions 12+ only
-* No support for foreign keys, unique constraints, or exclusion constraints
-* No support for logical decoding
-* No support for intra-node parallel scans
-* No support for AFTER ... FOR EACH ROW triggers
-* No UNLOGGED columnar tables
-* No TEMPORARY columnar tables
- ## Next steps * Learn how to [create a cluster in the
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
The cost details file itself doesnΓÇÖt uniquely identify individual records with
Some fields might differ in casing and spacing between account types. Older versions of pay-as-you-go cost details files have separate sections for the statement and daily cost.
-### List of terms from older APIs
-
-The following table maps terms used in older APIs to the new terms. Refer to the preceding table for descriptions.
-
-| Old term | New term |
-| | |
-| ConsumedQuantity | Quantity |
-| IncludedQuantity | N/A |
-| InstanceId | ResourceId |
-| Rate | EffectivePrice |
-| Unit | UnitOfMeasure |
-| UsageDate | Date |
-| UsageEnd | Date |
-| UsageStart | Date |
- ## Next steps - Get an overview of how to [ingest cost data](automation-ingest-usage-details-overview.md).
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
It's the same experience as the public portal, except with new improvements and
We encourage you to try out the preview features available in Cost Management Labs and share your feedback. It's your chance to influence the future direction of Cost Management. To provide feedback, use the **Report a bug** link in the Try preview menu. It's a direct way to communicate with the Cost Management engineering team.
+<a name="rememberpreviews"></a>
+
+## Remember preview features across sessions
+
+Cost Management now remembers preview features across sessions in the preview portal. Select the preview features you're interested in from the Try preview menu and you'll see them enabled by default the next time you visit the portal. No need to enable this option – preview features will be remembered automatically.
++
+<a name="totalkpitooltip"></a>
+
+## Total KPI tooltip
+
+View additional details about what costs are included and not included in the Cost analysis preview. You can enable this option from the Try Preview menu.
+
+The Total KPI tooltip can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) menu in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
++
+<a name="customersview"></a>
+
+## Customers view
+
+Cloud Solution Provider (CSP) partners can view a breakdown of costs by customer and subscription in the Cost analysis preview. Note that this view is only available for Microsoft Partner Agreement (MPA) billing accounts and billing profiles.
+
+The Customers view can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) menu in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
++ <a name="anomalyalerts"></a> ## Anomaly detection alerts
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 05/23/2022 Last updated : 01/25/2023
When you request to transfer an entire enterprise enrollment to an enrollment, t
### Effective transfer date
-The effective transfer day can be on or after the start date of the target enrollment. Transfers can only be backdated till the first day of the month in which request is made. Additionally, if individual subscriptions are deleted or transferred in the current month, then the deletion/transfer date becomes the new earliest possible effective transfer date.
+The effective transfer date can be on or after the start date of the target enrollment. The effective transfer date is the date that you want to transfer the old source enrollment to the new one. The date can be backdated to the first day of the current month, but not before it. For example, if today's date is January 25, 2023, the enrollment transfer can be backdated to January 1, 2023, but not before it.
+
+Additionally, if individual subscriptions are deleted or transferred in the current month, then the deletion/transfer date becomes the new earliest possible effective transfer date.
The source enrollment usage is charged against Azure Prepayment or as overage. Usage that occurs after the effective transfer date is transferred to the new enrollment and charged.
When you request an enrollment transfer, provide the following information:
- For the source enrollment, the enrollment number. - For the target enrollment, the enrollment number to transfer to.-- For the enrollment transfer effective date, it can be a date on or after the start date of the target enrollment but no earlier than the first day of the month in which the request is made. The chosen date can't affect usage for any overage invoice already issued.
+- Choose an enrollment transfer effective date.
+ - The date must be on or after the start date of the new target enrollment.
+ - If you have an overage invoice that was already issued, the date that you choose doesn't affect usage. (A short sketch of this date rule follows the list.)
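To make the backdating rule concrete, here's a minimal illustrative sketch (not part of any Azure tooling) of how the earliest allowed effective transfer date could be computed; it assumes only the two rules stated above and ignores the deletion/transfer adjustment described earlier:

```python
from datetime import date

def earliest_effective_transfer_date(request_date: date, target_start: date) -> date:
    """Earliest date an enrollment transfer can be backdated to."""
    # Transfers can be backdated only to the first day of the month
    # in which the request is made...
    first_of_month = request_date.replace(day=1)
    # ...and never before the start date of the target enrollment.
    return max(first_of_month, target_start)

# Example from the text: a request made on January 25, 2023.
print(earliest_effective_transfer_date(date(2023, 1, 25), date(2022, 11, 1)))
# 2023-01-01
```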
Other points to keep in mind before an enrollment transfer:
data-factory Compute Optimized Data Flow Retire https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-optimized-data-flow-retire.md
Previously updated : 06/29/2021 Last updated : 01/25/2023 # Retirement of data flow compute optimized option
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
The new Change Data Capture resource in ADF allows for full fidelity change data
* Avro * Azure SQL Database
-* Azure Synapse Analytics
* Delimited Text * Delta * JSON
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Cassandra using Azure Data Factory or Synapse Analytics
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Greenplum using Azure Data Factory or Synapse Analytics
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from HBase using Azure Data Factory or Synapse Analytics
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Impala using Azure Data Factory or Synapse Analytics
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-legacy.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from MongoDB using Azure Data Factory or Synapse Analytics (legacy)
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Oracle Responsys using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from PayPal using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Phoenix using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-table-storage.md
Previously updated : 10/01/2021 Last updated : 01/25/2023
data-factory Connector Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-hive.md
Previously updated : 10/13/2021 Last updated : 01/25/2023
data-factory Connector Troubleshoot Orc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-orc.md
Previously updated : 10/01/2021 Last updated : 01/25/2023
data-factory Connector Troubleshoot Xml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-xml.md
Previously updated : 10/01/2021 Last updated : 01/25/2023
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
Previously updated : 09/09/2021 Last updated : 01/25/2023 # Copy data from Vertica using Azure Data Factory or Synapse Analytics
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
This article describes how you can create and configure a self-hosted IR.
## Considerations for using a self-hosted IR - You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).-- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories or Synapse workspaces that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory or Synapse workspace.
+- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory or Synapse workspace. Synapse workspaces don't support integration runtime sharing.
- The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources. - You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories. - Use a self-hosted integration runtime to support data integration within an Azure virtual network.
data-factory Deploy Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/deploy-azure-ssis-integration-runtime-powershell.md
Previously updated : 10/22/2021 Last updated : 01/25/2023 # PowerShell script - deploy Azure-SSIS integration runtime
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
na
Last updated 01/17/2023 -+ # What is Azure DDoS Protection?
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
To edit or delete an existing rule:
## Configure alert forwarding rule actions
-This section describes how to configure settings for supported forwarding rule actions.
+This section describes how to configure settings for supported forwarding rule actions, on either an OT sensor or the on-premises management console.
### Email address action
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This procedure describes how to download a diagnostics log to send to support in
This feature is supported for the following sensor versions: - **22.1.1** - Download a diagnostic log from the sensor console-- **22.1.3** - For locally managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
+- **22.1.3** - For locally managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
This feature is supported for the following sensor versions:
:::image type="content" source="media/release-notes/support-ticket-diagnostics.png" alt-text="Screenshot of the Backup & Restore pane showing the Support Ticket Diagnostics option." lightbox="media/release-notes/support-ticket-diagnostics.png":::
-1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview).
+1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support).
## Retrieve forensics data stored on the sensor
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Details about each sensor are listed in the following columns:
|**Zone**| Displays the zone that contains this sensor.| |**Subscription name**| Displays the name of the Microsoft Azure account subscription that this sensor belongs to. | |**Sensor version**| Displays the OT monitoring software version installed on your sensor. |
-|**Sensor health**| Displays a [sensor health message](sensor-health-messages.md). For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview).|
+|**Sensor health**| Displays a [sensor health message](sensor-health-messages.md). For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health).|
|**Last connected (UTC)**| Displays how long ago the sensor was last connected.| |**Threat Intelligence version**| Displays the [Threat Intelligence version](how-to-work-with-threat-intelligence-packages.md) installed on an OT sensor. The name of the version is based on the day the package was built by Defender for IoT. | |**Threat Intelligence mode**| Displays whether the Threat Intelligence update mode is manual or automatic. If it's manual that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages will be automatically installed on all OT, cloud-connected sensors. |
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="medi#install-enterprise-iot-sensor-software). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
-| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
+| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support](#upload-a-diagnostics-log-for-support).|
| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).| | **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). | |<a name="endpoint"></a> **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
Make sure that you've started with the relevant updates steps for this update. F
> > For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-## Understand sensor health (Public preview)
+## Understand sensor health
This procedure describes how to view sensor health data from the Azure portal. Sensor health includes data such as whether traffic is stable, the sensor is overloaded, notifications about sensor software versions, and more.
This procedure describes how to view sensor health data from the Azure portal. S
For more information, see our [Sensor health message reference](sensor-health-messages.md).
-## Upload a diagnostics log for support (Public preview)
+## Upload a diagnostics log for support
If you need to open a support ticket for a locally managed sensor, upload a diagnostics log to the Azure portal for the support team.
If you need to open a support ticket for a locally managed sensor, upload a diag
1. In Defender for IoT in the Azure portal, go to the **Sites and sensors** page and select the locally managed sensor that's related to your support ticket.
-1. For your selected sensor, select the **...** options menu on the right > **Send diagnostic files to support (Preview)**. For example:
+1. For your selected sensor, select the **...** options menu on the right > **Send diagnostic files to support**. For example:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/upload-diagnostics-log.png" alt-text="Screenshot of the send diagnostic files to support option." lightbox="media/how-to-manage-sensors-on-the-cloud/upload-diagnostics-log.png":::
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Title: Set up high availability description: Increase the resiliency of your Defender for IoT deployment by installing an on-premises management console high availability appliance. High availability deployments ensure your managed sensors continuously report to an active on-premises management console. Previously updated : 06/12/2022 Last updated : 01/24/2023 # About high availability
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **22.3** | | | |
+| 22.3.5 | 01/2023 | Patch | 12/2023 |
| 22.3.4 | 01/2023 | Major | 12/2023 | | **22.2** | | | |
+| 22.2.9 | 01/2023 | Patch | 12/2023 |
| 22.2.8 | 11/2022 | Patch | 10/2023 | | 22.2.7| 10/2022 | Patch | 09/2023 | | 22.2.6|09/2022 |Patch | 04/2023|
To understand whether a feature is supported in your sensor version, check the r
## Versions 22.3.x
+### 22.3.5
+
+**Release date**: 01/2023
+
+**Supported until**: 12/2023
+
+This version includes bug fixes for stability improvements.
+ ### 22.3.4 **Release date**: 01/2021
To update to 22.2.x versions:
For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
+### 22.2.9
+
+**Release date**: 01/2023
+
+**Supported until**: 12/2023
+
+This version includes bug fixes for stability improvements.
+ ### 22.2.8 **Release date**: 11/2022
This version includes the following new updates and fixes:
- [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md) - [Bi-directional alert synch between OT sensors and the Azure portal](alerts.md#managing-ot-alerts-in-a-hybrid-environment) - [Sensor connections restored after certificate rotation](how-to-deploy-certificates.md)-- [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+- [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support)
- [Improved security for uploading protocol plugins](resources-manage-proprietary-protocols.md) - [Sensor names shown in browser tabs](how-to-manage-individual-sensors.md) - [Site-based access control on the Azure portal](manage-users-portal.md#manage-site-based-access-control-public-preview)
This version includes the following new updates and fixes:
- [Diagnostic logs automatically available to support for cloud-connected sensors](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support) - [Rockwell protocol: Device inventory shows PLC operating mode key state, run state, and security mode](how-to-manage-device-inventory-for-organizations.md) - [Automatic CLI session timeouts](references-work-with-defender-for-iot-cli-commands.md)-- [Sensor health widgets in the Azure portal](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview)
+- [Sensor health widgets in the Azure portal](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health)
### 22.1.1
defender-for-iot Sensor Health Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/sensor-health-messages.md
This article lists the sensor health messages displayed in the **Sites and sensors** page in the Azure portal.
-For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview).
+For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health).
## Critical messages
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
When the **Sensor version** column for your sensors reads :::image type="icon" s
When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing** until the update is complete, when the value switches to the new sensor version number instead.
-If a sensor fails to update for any reason, the software reverts back to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
+If a sensor fails to update for any reason, the software reverts back to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health) and [Sensor health message reference](sensor-health-messages.md).
# [From an OT sensor UI](#tab/sensor)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For OT sensor versions 22.1.3 and higher, you can use the new sensor health widg
We've also added a sensor details page, where you drill down to a specific sensor from the Azure portal. On the **Sites and sensors** page, select a specific sensor name. The sensor details page lists basic sensor data, sensor health, and any sensor settings applied.
-For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
+For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health) and [Sensor health message reference](sensor-health-messages.md).
## July 2022
Now, for locally managed sensors, you can upload that diagnostic log directly on
For more information, see: - [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)-- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support)
### Improved security for uploading protocol plugins
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
description: Learn about the data ingress and egress requirements for integrating Azure Digital Twins with other services. Previously updated : 07/01/2022 Last updated : 01/12/2023
Azure Digital Twins can be driven with data and events from any serviceΓÇö[IoT H
Instead of having a built-in IoT Hub behind the scenes, Azure Digital Twins allows you to "bring your own" IoT Hub to use with the service. You can use an existing IoT Hub you currently have in production, or deploy a new one to be used for this purpose. This functionality gives you full access to all of the device management capabilities of IoT Hub.
-To ingest data from any source into Azure Digital Twins, use an [Azure function](../azure-functions/functions-overview.md). Learn more about this pattern in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), or try it out yourself in the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md).
+To ingest data from any source into Azure Digital Twins, you can use an [Azure function](../azure-functions/functions-overview.md). Learn more about this pattern in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), or try it out yourself in the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md).
-You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in [Integrate with Logic Apps](how-to-integrate-logic-apps.md).
+You can also integrate Azure Digital Twins into a [Microsoft Power Platform](/power-platform) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md) flow, using the [Azure Digital Twins Power Platform connector](how-to-use-power-platform-logic-apps-connector.md). For more information about connectors, see [Connectors overview](/connectors/connectors).
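+As an illustration of the kind of data plane call such an ingestion function makes, here's a minimal PowerShell sketch that patches a twin property directly through the Azure Digital Twins REST API. The instance host name, twin ID, and the `/Temperature` property are placeholder assumptions, not values from this article.
+
+```azurepowershell
+# Minimal sketch: update a twin property via the Azure Digital Twins data plane REST API.
+# <instance-host-name>, <twin-id>, and /Temperature are placeholders.
+$token = (Get-AzAccessToken -ResourceUrl "https://digitaltwins.azure.net").Token
+$patch = '[{"op":"replace","path":"/Temperature","value":42}]'
+Invoke-RestMethod -Method Patch `
+  -Uri "https://<instance-host-name>/digitaltwins/<twin-id>?api-version=2020-10-31" `
+  -Headers @{ Authorization = "Bearer $token" } `
+  -ContentType "application/json" `
+  -Body $patch
+```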
## Data egress
digital-twins How To Integrate Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-logic-apps.md
-
-# Mandatory fields.
Title: Integrate with Logic Apps-
-description: Learn how to connect Logic Apps to Azure Digital Twins, using a custom connector
-- Previously updated : 02/22/2022---
-# Optional fields. Don't forget to remove # if you need a field.
-#
-
-#
--
-# Integrate with Logic Apps using a custom connector
-
-In this article, you'll use the [Azure portal](https://portal.azure.com) to create a *custom connector* that can be used to connect Logic Apps to an Azure Digital Twins instance. You'll then create a *logic app* that uses this connection for an example scenario, in which events triggered by a timer will automatically update a twin in your Azure Digital Twins instance.
-
-[Azure Logic Apps](../logic-apps/logic-apps-overview.md) is a cloud service that helps you automate workflows across apps and services. By connecting Logic Apps to the Azure Digital Twins APIs, you can create such automated flows around Azure Digital Twins and their data.
-
-Azure Digital Twins doesn't currently have a certified (pre-built) connector for Logic Apps. Instead, the current process for using Logic Apps with Azure Digital Twins is to create a [custom Logic Apps connector](../logic-apps/custom-connector-overview.md), using a [custom Azure Digital Twins Swagger](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) definition file that has been modified to work with Logic Apps.
-
-> [!NOTE]
-> There are multiple versions of the Swagger definition file contained in the custom Swagger sample linked above. The latest version will be found in the subfolder with the most recent date, but earlier versions contained in the sample are also still supported.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-Sign in to the [Azure portal](https://portal.azure.com) with this account.
-
-You also need to complete the following items as part of prerequisite setup. The rest of this section will walk you through these steps:
-- Set up an Azure Digital Twins instance-- Add a digital twin-- Set up an Azure Active Directory (Azure AD) app registration-
-### Set up Azure Digital Twins instance
--
-### Add a digital twin
-
-This article uses Logic Apps to update a twin in your Azure Digital Twins instance. To continue, you should add at least one twin in your instance.
-
-You can add twins using the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins.core-readme), or the [Azure Digital Twins CLI](/cli/azure/dt). For detailed steps on how to create twins using these methods, see [Manage digital twins](how-to-manage-twin.md).
-
-You'll need the Twin ID of a twin in your instance that you've created.
-
-### Set up app registration
--
-## Create custom Logic Apps connector
-
-Now, you're ready to create a [custom Logic Apps connector](../logic-apps/custom-connector-overview.md) for the Azure Digital Twins APIs. Doing so will let you hook up Azure Digital Twins when creating a logic app in the next section.
-
-Navigate to the [Logic Apps Custom Connector](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Web%2FcustomApis) page in the Azure portal (you can use this link or search for it in the portal search bar). Select **+ Create**.
--
-In the **Create logic apps custom connector** page that follows, select your subscription and resource group, and a name and deployment region for your new connector. Select **Review + create**.
-
->[!IMPORTANT]
-> The custom connector and the logic app that you'll create later will need to be in the same deployment region.
--
-Doing so will take you to the **Review + create** tab, where you can select **Create** at the bottom to create your custom connector.
--
-You'll be taken to the deployment page for the connector. When it's finished deploying, select the **Go to resource** button to view the connector's details in the portal.
-
-### Configure connector for Azure Digital Twins
-
-Next, you'll configure the connector you've created to reach Azure Digital Twins.
-
-First, download a custom Azure Digital Twins Swagger that has been modified to work with Logic Apps. Navigate to the sample at [Azure Digital Twins custom Swaggers (Logic Apps connector) sample](/samples/azure-samples/digital-twins-custom-swaggers/azure-digital-twins-custom-swaggers/) and select the **Browse code** button underneath the title to go to the GitHub repo for the sample. Get the sample on your machine by selecting the **Code** button followed by **Download ZIP**.
--
-Navigate to the downloaded folder and unzip it.
-
-The custom Swagger for this tutorial is located in the *digital-twins-custom-swaggers-main\LogicApps* folder. This folder contains subfolders called *stable* and *preview*, both of which hold different versions of the Swagger organized by date. The folder with the most recent date will contain the latest copy of the Swagger definition file. Whichever version you select, the Swagger file is named *digitaltwins.json*.
-
-> [!NOTE]
-> Unless you're working with a preview feature, it's generally recommended to use the most recent stable version of the Swagger file. However, earlier versions and preview versions of the Swagger file are also still supported.
-
-Next, go to your connector's Overview page in the [Azure portal](https://portal.azure.com) and select **Edit**.
--
-In the **Edit Logic Apps Custom Connector** page that follows, configure this information:
-* **Custom connectors**
- - **API Endpoint**: **REST** (leave default)
- - **Import mode**: **OpenAPI file** (leave default)
- - **File**: This configuration will be the custom Swagger file you downloaded earlier. Select **Import**, locate the file on your machine (*digital-twins-custom-swaggers-main\LogicApps\...\digitaltwins.json*), and select **Open**.
-* **General information**
- - **Icon**: If you want, upload an icon.
- - **Icon background color**: If you want, enter a background color.
- - **Description**: If you want, customize a description for your connector.
- - **Connect via on-premises data gateway**: Toggled off (leave default)
- - **Scheme**: **HTTPS** (leave default)
- - **Host**: The host name of your Azure Digital Twins instance.
- - **Base URL**: */* (leave default)
-
-Then, select the **Security** button at the bottom of the window to continue to the next configuration step.
--
-In the Security step, select **Edit** and configure this information:
-* **Authentication type**: **OAuth 2.0**
-* **OAuth 2.0**:
- - **Identity provider**: **Azure Active Directory**
- - **Client ID**: The application (client) ID for the Azure AD app registration you created in [Prerequisites](#prerequisites)
- - **Client secret**: The client secret value from the app registration
- - **Login URL**: `https://login.windows.net` (leave default)
- - **Tenant ID**: The directory (tenant) ID from the app registration
- - **Resource URL**: *0b07f429-9f4b-4714-9392-cc5e8e80c8b0*
- - **Enable on-behalf-of login**: *false* (leave default)
- - **Scope**: *Directory.AccessAsUser.All*
-
-The **Redirect URL** field says **Save the custom connector to generate the redirect URL**. Generate it now by selecting **Update connector** across the top of the pane to confirm your connector settings.
--
-Return to the **Redirect URL** field and copy the value that has been generated. You'll use it in the next step.
--
-Now you've entered all the information that is required to create your connector (no need to continue past **Security** to the **Definition** step). You can close the **Edit Logic Apps Custom Connector** pane.
-
->[!NOTE]
->Back on your connector's Overview page where you originally selected **Edit**, if you select Edit again, it will restart the entire process of entering your configuration choices. It will not populate your values from the last time you went through it, so if you want to save an updated configuration with any changed values, you must re-enter all the other values as well to keep them from being overwritten by the defaults.
-
-### Grant connector permissions in the Azure AD app
-
-Next, use the custom connector's **Redirect URL** value you copied in the last step to grant the connector permissions in your Azure AD app registration.
-
-Navigate to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) page in the Azure portal and select your registration from the list.
-
-Under **Authentication** from the registration's menu, add a URI.
--
-Enter the custom connector's redirect URL into the new field, and select **Save** at the bottom of the page.
-
-Now you're done setting up a custom connector that can access the Azure Digital Twins APIs.
-
-## Create logic app
-
-Next, you'll create a logic app that will use your new connector to automate Azure Digital Twins updates.
-
-In the [Azure portal](https://portal.azure.com), search for *Logic apps* in the portal search bar. Selecting it should take you to the **Logic apps** page. Select **+ Add** to create a new logic app.
--
-In the **Create Logic App** page that follows, enter your subscription, resource group, and a name and region for your logic app. Choose whether you want to enable or disable log analytics. Under **Plan**, select a **Consumption** plan type.
-
->[!IMPORTANT]
-> The logic app should be in the same deployment region as the custom connector you created earlier.
-
-Select the **Review + create** button.
-
-Doing so will take you to the **Review + create** tab, where you can review your details and select **Create** at the bottom to create your logic app.
-
-You'll be taken to the deployment page for the logic app. When it's finished deploying, select the **Go to resource** button to continue to the **Logic Apps Designer**, where you'll fill in the logic of the workflow.
-
-### Design workflow
-
-In the Logic Apps Designer, under **Start with a common trigger**, select **Recurrence**.
--
-In the Logic Apps Designer page that follows, change the Recurrence **Frequency** to **Second**, so that the event is triggered every 3 seconds. Selecting this frequency will make it easy to see the results later without having to wait long.
-
-Select **+ New step**.
-
-Doing so will open a box to choose an operation. Switch to the **Custom** tab. You should see your custom connector from earlier in the top box.
-
-Select it to display the list of APIs contained in that connector. Use the search bar or scroll through the list to select **DigitalTwins_Update**. (The **DigitalTwins_Update** action is the API call used in this article, but you could also select any other API as a valid choice for a Logic Apps connection).
--
-You may be asked to sign in with your Azure credentials to connect to the connector. If you get a **Permissions requested** dialogue, follow the prompts to grant consent for your app and accept.
-
-In the new **DigitalTwins Update** box, fill the fields as follows:
-* **id**: Fill the *twin ID* of the digital twin in your instance that you want the Logic App to update.
-* **Item - 1**: This field is for the body of the **DigitalTwins Update** API request. Enter JSON Patch code to update one of the fields on your twin. For more information about creating JSON Patch to update your twin, see [Update a digital twin](how-to-manage-twin.md#update-a-digital-twin).
-* **api-version**: Select the latest API version.
-
->[!TIP]
->You can add additional operations to the logic app by selecting **+ New step** from this page.
-
-Select **Save** in the Logic Apps Designer.
--
-## Query twin to see the update
-
-Now that your logic app has been created, the twin update event you defined in the Logic Apps Designer should occur on a recurrence of every three seconds. This configured frequency means that after three seconds, you should be able to query your twin and see your new patched values reflected.
-
-There are many ways to query for your twin information, including the Azure Digital Twins [APIs and SDKs](concepts-apis-sdks.md), [CLI commands](concepts-cli.md), or [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md). For more information about querying your Azure Digital Twins instance, see [Query the twin graph](how-to-query-graph.md).
-
-## Next steps
-
-In this article, you created a logic app that regularly updates a twin in your Azure Digital Twins instance with a patch that you provided. You can try out selecting other APIs in the custom connector to create Logic Apps for various actions on your instance.
-
-To read more about the APIs operations available and the details they require, visit [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md).
digital-twins How To Use Power Platform Logic Apps Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-power-platform-logic-apps-connector.md
+
+# Mandatory fields.
+ Title: Integrate with Power Platform and Logic Apps
+
+description: Learn how to connect Power Platform and Logic Apps to Azure Digital Twins using the connector
++ Last updated : 01/25/2023+++++
+# Integrate with Power Platform and Logic Apps using the Azure Digital Twins connector
+
+You can integrate Azure Digital Twins into a [Microsoft Power Platform](/power-platform) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md) flow, using the *Azure Digital Twins connector*.
+
+The connector is a wrapper around the Azure Digital Twins [data plane APIs](concepts-apis-sdks.md#data-plane-apis) for twin, model, and query operations, which allows the underlying service to talk to [Microsoft Power Automate](/power-automate/getting-started), [Microsoft Power Apps](/power-apps/powerapps-overview), and [Azure Logic Apps](../logic-apps/logic-apps-overview.md). The connector provides a way for users to connect their accounts and use a set of prebuilt actions to build their apps and workflows.
+
+For more information about the Azure Digital Twins Power Platform connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins).
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+Sign in to the [Azure portal](https://portal.azure.com) with your account.
++
+Lastly, you'll need to set up any [Power Platform](/power-platform) services where you want to use the connector.
+
+## Set up the connector
+
+For Power Automate and Power Apps, set up the connection before creating a flow. Follow the steps below to add the connection in each product.
+1. Select **Connections** from the left navigation menu (in Power Automate, it's under the **Data** heading). On the Connections page, select **+ New connection**.
+1. Search for *Azure Digital Twins*, and select the **Azure Digital Twins (preview)** connector.
+1. Where the connector asks for **ADT Instance Name**, enter the [host name of your instance](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values).
+1. Enter your authentication details when requested to finish setting up the connection.
+1. To verify that the connection has been created, look for it on the Connections page.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-connection.png" alt-text="Screenshot of Power Automate, showing the Azure Digital Twins connection on the Connections page." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-connection.png":::
+
+For Logic Apps, you can use the Azure Digital Twins built-in connection when you [create a flow](#create-a-flow) in the next section. For more information on built-in connectors, see [Built-in connectors in Azure Logic Apps](../connectors/built-in.md).
+
+## Create a flow
+
+You can incorporate Azure Digital Twins into Power Automate flows, Logic Apps flows, or Power Apps applications. Using the Azure Digital Twins connector and over 700 other Power Platform connectors, you can ingest data from other systems into your twins, or respond to system events.
+
+Follow the steps below to create a sample flow with the connector in Power Automate.
+1. In Power Automate, select **My flows** from the left navigation menu. Select **+ New flow** and **Instant cloud flow**.
+1. Enter a **Flow name** and select **Manually trigger a flow** from the list of triggers. **Create** the flow.
+1. Add a step to the flow, and search for *Azure Digital Twins* to find the connection. Select the Azure Digital Twins connection.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-1.png" alt-text="Screenshot of Power Automate, showing the Azure Digital Twins connector in a new flow." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-1-big.png":::
+1. You'll see a list of all the [actions](/connectors/azuredigitaltwins) that are available with the connector. Pick one of them to interact with the [Azure Digital Twins APIs](/rest/api/azure-digitaltwins/).
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-2.png" alt-text="Screenshot of Power Automate, showing all the actions for the Azure Digital Twins connector." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-2-big.png":::
+1. You can continue to edit or add more steps to your workflow, using other connectors to build out your integration scenario.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-3.png" alt-text="Screenshot of Power Automate, showing a Get twin by ID action from the Azure Digital Twins connector in a flow." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-automate-action-3.png":::
+
+Follow the steps below to create a sample flow with the connector in Power Apps.
+1. In Power Apps, select **+ Create** from the left navigation menu. Select **Blank app** and follow the prompts to create a new app.
+1. In the app builder, select **Data** from the left navigation menu. Select **Add data** and search for *Azure Digital Twins* to find the data connection. Select the Azure Digital Twins connection.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-apps-action-1.png" alt-text="Screenshot of Power Apps, showing the Azure Digital Twins connector as a data source." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-apps-action-1.png":::
+1. Now, the [actions](/connectors/azuredigitaltwins) from the Azure Digital Twins connector will be available as functions to use in your app.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/power-apps-action-2.png" alt-text="Screenshot of Power Apps, showing the Get twin by ID action being used in a function." lightbox="media/how-to-use-power-platform-logic-apps-connector/power-apps-action-2.png":::
+1. You can continue to build out your application with access to Azure Digital Twins data. For more information about building Power Apps, see [Overview of creating apps in Power Apps](/power-apps/maker/).
+
+Follow the steps below to create a sample flow with the connector in Logic Apps.
+1. Navigate to your logic app in the [Azure portal](https://portal.azure.com). Select **Workflows** from the left navigation menu, and **+ Add**. Follow the prompts to create a new workflow.
+1. Select your new flow and enter into the **Designer**.
+1. Add a trigger to your app.
+1. Select **Choose an operation** to add an action from the Azure Digital Twins connector. Search for *Azure Digital Twins* on the **Azure** tab to find the data connection. Select the Azure Digital Twins connection.
+ :::image type="content" source="media/how-to-use-power-platform-logic-apps-connector/logic-apps-action.png" alt-text="Screenshot of Logic Apps, showing the Azure Digital Twins connector." lightbox="media/how-to-use-power-platform-logic-apps-connector/logic-apps-action.png":::
+1. You'll see a list of all the [actions](/connectors/azuredigitaltwins) that are available with the connector. Pick one of them to interact with the [Azure Digital Twins APIs](/rest/api/azure-digitaltwins/).
+1. After selecting an action from the Azure Digital Twins connector, you'll be asked to enter authentication details to create the connection.
+1. You can continue to edit or add more steps to your workflow, using other connectors to build out your integration scenario.
+
+## Limitations and suggestions
+
+Here are some limitations of the connector and suggestions for working with them.
+
+* Some connector actions (such as Add Model) require input in the form of a literal string that starts with *@*. In these cases, escape the *@* character by using *@@* instead. This will keep the literal value from being interpreted as a JSON expression.
+* Since Azure Digital Twins deals with dynamic schema responses, you should parse the JSON received from the APIs before consuming it in your application. For example, here's a set of calls that parse the data before extracting the `dtId` value: `Set(jsonVal, AzureDigitalTwins.GetTwinById("your_twin_id").result); Set(parsedResp, ParseJSON(jsonVal)); Set( DtId, Text(parsedResp.'$dtId'));`.
+
+## Next steps
+
+For more information about Power Platform connectors, including how to use them in workflows across multiple products, see the [Power Platform and Azure Logic Apps connectors documentation](/connectors/connectors).
expressroute Expressroute Howto Coexist Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-classic.md
This article helps you configure ExpressRoute and Site-to-Site VPN connections t
* **Basic SKU gateway is not supported.** You must use a non-Basic SKU gateway for both the [ExpressRoute gateway](expressroute-about-virtual-network-gateways.md) and the [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). * **Only route-based VPN gateway is supported.** You must use a route-based [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). * **Static route should be configured for your VPN gateway.** If your local network is connected to both ExpressRoute and a Site-to-Site VPN, you must have a static route configured in your local network to route the Site-to-Site VPN connection to the public Internet.
-* **ExpressRoute gateway must be configured first.** You must create the ExpressRoute gateway first before you add the Site-to-Site VPN gateway.
## Configuration designs ### Configure a Site-to-Site VPN as a failover path for ExpressRoute
If the gateway subnet is /27 or larger and the virtual network is connected via
6. At this point, you'll have a VNet with no gateways. To create new gateways and complete your connections, you can proceed with [Step 4 - Create an ExpressRoute gateway](#gw), found in the preceding set of steps. ## Next steps
-For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md)
+For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
firewall Firewall Network Rule Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-network-rule-logging.md
+
+ Title: Azure network rule name logging (preview)
+description: Learn about Azure network rule name logging (preview)
++++ Last updated : 01/25/2023+++
+# Azure network rule name logging (preview)
++
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Currently, a network rule hit event shows the following attributes in the logs:
+
+ - Source and destination IP/port
+ - Action (allow or deny)
+
+ With this new feature, the event logs for network rules also show the following attributes:
+ - Policy name
+ - Rule collection group
+ - Rule collection
+ - Rule name
+
+## Enable/disable network rule name logging
+
+To enable the network rule name logging feature, run the following commands in Azure PowerShell. For the feature to take effect immediately, run an operation on the firewall, such as a rule change (least intrusive), a setting change, or a stop/start operation. Otherwise, the firewall is updated with the feature within several days.
+
+Run the following Azure PowerShell commands to configure Azure Firewall network rule name logging:
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+Register-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+```
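+
+Feature registration can take several minutes to complete. As a quick check, you can query the registration state before operating on the firewall:
+
+```azurepowershell
+# Returns the registration state (Registering/Registered) for the preview feature.
+Get-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+```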
+
+Run the following Azure PowerShell command to turn off this feature:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+```
+
+## Next steps
++
+- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md)
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
The following features are available in preview.
### Network rule name logging (preview)
-Currently, a network rule hit event shows the following attributes in the logs:
-
- - Source and destination IP/port
- - Action (allow, or deny)
-
- With this new feature, the event logs for network rules also show the following attributes:
+With this new feature, the event logs for network rules include the following attributes:
- Policy name - Rule collection group - Rule collection - Rule name
-To enable the Network Rule name Logging feature, the following commands need to be run in Azure PowerShell. For the feature to immediately take effect, an operation needs to be run on the firewall. This can be a rule change (least intrusive), a setting change, or a stop/start operation. Otherwise, the firewall/s is updated with the feature within several days.
-
-Run the following Azure PowerShell commands to configure Azure Firewall network rule name logging:
-
-```azurepowershell
-Connect-AzAccount
-Select-AzSubscription -Subscription "subscription_id or subscription_name"
-Register-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
-Register-AzResourceProvider -ProviderNamespace Microsoft.Network
-```
-
-Run the following Azure PowerShell command to turn off this feature:
-
-```azurepowershell
-Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
-```
-
-### Structured firewall logs (preview)
-
-Today, the following diagnostic log categories are available for Azure Firewall:
-- Application rule log-- Network rule log-- DNS proxy log-
-These log categories use [Azure diagnostics mode](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode). In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
-
-With this new feature, you'll be able to choose to use [Resource Specific Tables](../azure-monitor/essentials/resource-logs.md#resource-specific) instead of the existing [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. In case both sets of logs are required, at least two diagnostic settings need to be created per firewall.
+For more information, see [Azure network rule name logging (preview)](firewall-network-rule-logging.md).
-In **Resource specific** mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it:
-- makes it much easier to work with the data in log queries-- makes it easier to discover schemas and their structure-- improves performance across both ingestion latency and query times-- allows you to grant Azure RBAC rights on a specific table
+### Structured Firewall Logs (preview)
-New resource specific tables are now available in Diagnostic setting that allows you to utilize the following newly added categories:
+With Structured Firewall Logs, you can choose to use Resource Specific tables instead of the existing AzureDiagnostics table. Structured Firewall Logs is required for Policy Analytics. This new method improves log querying and is recommended because:
-- [Network rule log](/azure/azure-monitor/reference/tables/azfwnetworkrule) - Contains all Network Rule log data. Each match between data plane and network rule creates a log entry with the data plane packet and the matched rule's attributes.-- [NAT rule log](/azure/azure-monitor/reference/tables/azfwnatrule) - Contains all DNAT (Destination Network Address Translation) events log data. Each match between data plane and DNAT rule creates a log entry with the data plane packet and the matched rule's attributes.-- [Application rule log](/azure/azure-monitor/reference/tables/azfwapplicationrule) - Contains all Application rule log data. Each match between data plane and Application rule creates a log entry with the data plane packet and the matched rule's attributes.-- [Threat Intelligence log](/azure/azure-monitor/reference/tables/azfwthreatintel) - Contains all Threat Intelligence events.-- [IDPS log](/azure/azure-monitor/reference/tables/azfwidpssignature) - Contains all data plane packets that were matched with one or more IDPS signatures.-- [DNS proxy log](/azure/azure-monitor/reference/tables/azfwdnsquery) - Contains all DNS Proxy events log data.-- [Internal FQDN resolve failure log](/azure/azure-monitor/reference/tables/azfwinternalfqdnresolutionfailure) - Contains all internal Firewall FQDN resolution requests that resulted in failure.-- [Application rule aggregation log](/azure/azure-monitor/reference/tables/azfwapplicationruleaggregation) - Contains aggregated Application rule log data for Policy Analytics.-- [Network rule aggregation log](/azure/azure-monitor/reference/tables/azfwnetworkruleaggregation) - Contains aggregated Network rule log data for Policy Analytics.-- [NAT rule aggregation log](/azure/azure-monitor/reference/tables/azfwnatruleaggregation) - Contains aggregated NAT rule log data for Policy Analytics.-
-By default, the new resource specific tables are disabled.
-
-Run the following Azure PowerShell commands to enable Azure Firewall Structured logs:
-
-```azurepowershell
-Connect-AzAccount
-Select-AzSubscription -Subscription "subscription_id or subscription_name"
-Register-AzProviderFeature -FeatureName AFWEnableStructuredLogs -ProviderNamespace Microsoft.Network
-Register-AzResourceProvider -ProviderNamespace Microsoft.Network
-```
-
-Run the following Azure PowerShell command to turn off this feature:
-
-```azurepowershell
-Unregister-AzProviderFeature -FeatureName AFWEnableStructuredLogs -ProviderNamespace Microsoft.Network
-```
-
-In addition, when setting up your log analytics workspace, you must select whether you want to work with the AzureDiagnostics table (default) or with Resource Specific Tables.
-
-Additional KQL log queries were added to query structured firewall logs.
-
-> [!NOTE]
-> Existing Workbooks and any Sentinel integration will be adjusted to support the new structured logs when **Resource Specific** mode is selected.
+- It's easier to work with the data in log queries
+- It's easier to discover schemas and their structure
+- It improves performance across both ingestion latency and query times
+- It allows you to grant Azure RBAC rights on a specific table
-For more information, see [Exploring the New Resource Specific Structured Logging in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/exploring-the-new-resource-specific-structured-logging-in-azure/ba-p/3620530).
+For more information, see [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md).
### Policy Analytics (preview)
You can now refine and update Firewall rules and policies with confidence in jus
#### Pricing
-Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no additional cost.
+Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no added cost.
#### Key Policy Analytics features
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
### Single click upgrade/downgrade (preview)
-You can now easily upgrade your existing Firewall Standard SKU to Premium SKU as well as downgrade from Premium to Standard SKU. The process is fully automated and has no service impact (zero service downtime).
+You can now easily upgrade your existing Firewall Standard SKU to Premium SKU and downgrade from Premium to Standard SKU. The process is fully automated and has no service impact (zero service downtime).
In the upgrade process, you can select the policy to be attached to the upgraded Premium SKU. You can select an existing Premium Policy or an existing Standard Policy. You can use your existing Standard policy and let the system automatically duplicate, upgrade to Premium Policy, and then attach it to the newly created Premium Firewall.
-This new capability is available through the Azure portal as shown here, as well as via PowerShell and Terraform simply by changing the sku_tier attribute.
+This new capability is available through the Azure portal as shown here, and via PowerShell and Terraform by changing the sku_tier attribute.
:::image type="content" source="media/premium-features/upgrade.png" alt-text="Screenshot showing SKU upgrade" lightbox="media/premium-features/upgrade.png":::
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
+
+ Title: Azure Structured Firewall Logs (preview)
+description: Learn about Azure Structured Firewall Logs (preview)
++++ Last updated : 01/25/2023+++
+# Azure Structured Firewall Logs (preview)
++
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Currently, the following diagnostic log categories are available for Azure Firewall:
+- Application rule log
+- Network rule log
+- DNS proxy log
+
+These log categories use [Azure diagnostics mode](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode). In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
+
+With this new feature, you can choose to use [Resource Specific Tables](../azure-monitor/essentials/resource-logs.md#resource-specific) instead of the existing [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. If both sets of logs are required, you need to create at least two diagnostic settings per firewall.
+
+## Resource specific mode
+
+In **Resource specific** mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it:
+- makes it much easier to work with the data in log queries
+- makes it easier to discover schemas and their structure
+- improves performance across both ingestion latency and query times
+- allows you to grant Azure RBAC rights on a specific table
+
+New resource-specific tables are now available in the diagnostic settings, allowing you to use the following newly added categories:
+
+- [Network rule log](/azure/azure-monitor/reference/tables/azfwnetworkrule) - Contains all Network Rule log data. Each match between data plane and network rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [NAT rule log](/azure/azure-monitor/reference/tables/azfwnatrule) - Contains all DNAT (Destination Network Address Translation) events log data. Each match between data plane and DNAT rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [Application rule log](/azure/azure-monitor/reference/tables/azfwapplicationrule) - Contains all Application rule log data. Each match between data plane and Application rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [Threat Intelligence log](/azure/azure-monitor/reference/tables/azfwthreatintel) - Contains all Threat Intelligence events.
+- [IDPS log](/azure/azure-monitor/reference/tables/azfwidpssignature) - Contains all data plane packets that were matched with one or more IDPS signatures.
+- [DNS proxy log](/azure/azure-monitor/reference/tables/azfwdnsquery) - Contains all DNS Proxy events log data.
+- [Internal FQDN resolve failure log](/azure/azure-monitor/reference/tables/azfwinternalfqdnresolutionfailure) - Contains all internal Firewall FQDN resolution requests that resulted in failure.
+- [Application rule aggregation log](/azure/azure-monitor/reference/tables/azfwapplicationruleaggregation) - Contains aggregated Application rule log data for Policy Analytics.
+- [Network rule aggregation log](/azure/azure-monitor/reference/tables/azfwnetworkruleaggregation) - Contains aggregated Network rule log data for Policy Analytics.
+- [NAT rule aggregation log](/azure/azure-monitor/reference/tables/azfwnatruleaggregation) - Contains aggregated NAT rule log data for Policy Analytics.
+
+## Enable/disable structured logs
+
+By default, the new resource specific tables are disabled.
+
+Run the following Azure PowerShell commands to enable Azure Firewall Structured logs:
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+Register-AzProviderFeature -FeatureName AFWEnableStructuredLogs -ProviderNamespace Microsoft.Network
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+```
+
+Run the following Azure PowerShell command to turn off this feature:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableStructuredLogs -ProviderNamespace Microsoft.Network
+```
+
+In addition, when setting up your Log Analytics workspace, you must select whether you want to work with the AzureDiagnostics table (default) or with Resource Specific Tables.
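+
+For example, here's a sketch of a diagnostic setting that sends the structured network rule log to a workspace using the resource-specific destination type. It assumes a recent Az.Monitor module, and both resource IDs are placeholders.
+
+```azurepowershell
+# Sketch only: send the resource-specific network rule log to a Log Analytics workspace.
+$log = New-AzDiagnosticSettingLogSettingsObject -Category AZFWNetworkRule -Enabled $true
+New-AzDiagnosticSetting -Name "fw-structured-logs" `
+  -ResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name>" `
+  -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" `
+  -Log $log `
+  -LogAnalyticsDestinationType Dedicated
+```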
+
+Additional KQL log queries were added to query structured firewall logs.
+
+> [!NOTE]
+> Existing Workbooks and any Sentinel integration will be adjusted to support the new structured logs when **Resource Specific** mode is selected.
+
+## Next steps
+
+- For more information, see [Exploring the New Resource Specific Structured Logging in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/exploring-the-new-resource-specific-structured-logging-in-azure/ba-p/3620530).
++
+- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md)
firewall Tutorial Protect Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-protect-firewall.md
+
+ Title: 'Tutorial: Deploy a firewall with Azure DDoS Protection Standard'
+description: In this tutorial, you learn how to deploy and configure Azure Firewall and policy rules using the Azure portal with Azure DDoS protection.
++++ Last updated : 01/24/2022++
+#Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
++
+# Tutorial: Deploy a firewall with Azure DDoS Protection Standard
+
+This article helps you create an Azure Firewall with a DDoS-protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your firewall from large-scale DDoS attacks.
+
+> [!IMPORTANT]
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges apply only if more than 100 public IPs are protected in the tenant. If you don't plan to use the resources created in this tutorial, delete them when you're finished. For information about pricing, see [Azure DDoS Protection Pricing]( https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+
+For this tutorial, you create a simplified single VNet with two subnets for easy deployment. Azure DDoS Protection Standard is enabled for the virtual network.
+
+* **AzureFirewallSubnet** - the firewall is in this subnet.
+* **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall.
+
+![Tutorial network infrastructure](media/tutorial-firewall-deploy-portal/tutorial-network.png)
+
+For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up a test network environment
+> * Deploy a firewall and firewall policy
+> * Create a default route
+> * Configure an application rule to allow access to www.google.com
+> * Configure a network rule to allow access to external DNS servers
+> * Configure a NAT rule to allow a remote desktop to the test server
+> * Test the firewall
+
+If you prefer, you can complete this procedure using [Azure PowerShell](deploy-ps-policy.md).
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Set up the network
+
+First, create a resource group to contain the resources needed to deploy the firewall. Then create a VNet, subnets, and a test server.
+
+### Create a resource group
+
+The resource group contains all the resources for the tutorial.
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page, then select **Add**. Enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Enter *Test-FW-RG*. |
+ | Region | Select a region. All other resources that you create must be in the same region. |
+
+1. Select **Review + create**.
+1. Select **Create**.
+
+### Create a DDoS protection plan
+
+1. In the search box at the top of the portal, enter **DDoS protection**. Select **DDoS protection plans** in the search results and then select **+ Create**.
+
+1. In the **Basics** tab of **Create a DDoS protection plan** page, enter or select the following information:
+
+ | Setting | Value |
+ |--|--|
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | **Instance details** | |
+ | Name | Enter **myDDoSProtectionPlan**. |
+ | Region | Select the region. |
+
+1. Select **Review + create** and then select **Create** to deploy the DDoS protection plan.
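+
+If you prefer to script this step, the equivalent Azure PowerShell is a single command (a sketch; substitute your own region):
+
+```azurepowershell
+# Creates the DDoS protection plan that the VNet in the next section uses.
+New-AzDdosProtectionPlan -ResourceGroupName "Test-FW-RG" -Name "myDDoSProtectionPlan" -Location "<region>"
+```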
+
+### Create a VNet
+
+This VNet will have two subnets.
+
+> [!NOTE]
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
+
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+1. Select **Networking**.
+1. Search for **Virtual network** and select it.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW-VN*. |
+ | Region | Select the same location that you used previously. |
+
+1. Select **Next: IP addresses**.
+1. For **IPv4 Address space**, accept the default **10.1.0.0/16**.
+1. Under **Subnet**, select **default**.
+1. For **Subnet name** change the name to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Address range**, type **10.1.1.0/26**.
+1. Select **Save**.
+
+ Next, create a subnet for the workload server.
+
+1. Select **Add subnet**.
+1. For **Subnet name**, type **Workload-SN**.
+1. For **Subnet address range**, type **10.1.2.0/24**.
+1. Select **Add**.
+1. Select **Next: Security**.
+1. In **DDoS Protection Standard** select **Enable**.
+1. Select **myDDoSProtectionPlan** in **DDoS protection plan**.
+1. Select **Review + create**.
+1. Select **Create**.
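+
+As an alternative to the portal steps above, the following sketch creates the same VNet with both subnets and DDoS Protection Standard enabled. It assumes the Az.Network module; the region is a placeholder.
+
+```azurepowershell
+# Sketch: create Test-FW-VN with the firewall and workload subnets and DDoS protection enabled.
+$fwSubnet = New-AzVirtualNetworkSubnetConfig -Name "AzureFirewallSubnet" -AddressPrefix "10.1.1.0/26"
+$wlSubnet = New-AzVirtualNetworkSubnetConfig -Name "Workload-SN" -AddressPrefix "10.1.2.0/24"
+$ddosPlan = Get-AzDdosProtectionPlan -ResourceGroupName "Test-FW-RG" -Name "myDDoSProtectionPlan"
+New-AzVirtualNetwork -Name "Test-FW-VN" -ResourceGroupName "Test-FW-RG" -Location "<region>" `
+  -AddressPrefix "10.1.0.0/16" -Subnet $fwSubnet,$wlSubnet `
+  -EnableDdosProtection -DdosProtectionPlanId $ddosPlan.Id
+```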
+
+### Create a virtual machine
+
+Now create the workload virtual machine, and place it in the **Workload-SN** subnet.
+
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+1. Select **Windows Server 2019 Datacenter**.
+1. Enter or select these values for the virtual machine:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Virtual machine name | Enter *Srv-Work*.|
+ | Region | Select the same location that you used previously. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+
+1. Under **Inbound port rules**, **Public inbound ports**, select **None**.
+1. Accept the other defaults and select **Next: Disks**.
+1. Accept the disk defaults and select **Next: Networking**.
+1. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
+1. For **Public IP**, select **None**.
+1. Accept the other defaults and select **Next: Management**.
+1. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
+1. After the deployment completes, select the **Srv-Work** resource and note the private IP address for later use.
+
+## Deploy the firewall and policy
+
+Deploy the firewall into the VNet.
+
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+2. Type **firewall** in the search box and press **Enter**.
+3. Select **Firewall** and then select **Create**.
+4. On the **Create a Firewall** page, use the following table to configure the firewall:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW01*. |
+ | Region | Select the same location that you used previously. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+ | Firewall policy | Select **Add new**, and enter *fw-test-pol*. <br> Select the same region that you used previously. |
+ | Choose a virtual network | Select **Use existing**, and then select **Test-FW-VN**. |
+ | Public IP address | Select **Add new**, and enter *fw-pip* for the **Name**. |
+
+5. Accept the other default values, then select **Review + create**.
+6. Review the summary, and then select **Create** to create the firewall.
+
+ This will take a few minutes to deploy.
+7. After deployment completes, go to the **Test-FW-RG** resource group, and select the **Test-FW01** firewall.
+8. Note the firewall private and public IP addresses. You'll use these addresses later.
+
+## Create a default route
+
+For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
+
+1. On the Azure portal menu, select **All services** or search for and select *All services* from any page.
+1. Under **Networking**, select **Route tables**.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Region | Select the same location that you used previously. |
+ | Name | Enter *Firewall-route*. |
+
+1. Select **Review + create**.
+1. Select **Create**.
+
+After deployment completes, select **Go to resource**.
+
+1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
+1. Select **Virtual network** > **Test-FW-VN**.
+1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly.
+1. Select **OK**.
+1. Select **Routes** and then select **Add**.
+1. For **Route name**, enter *fw-dg*.
+1. For **Address prefix**, enter *0.0.0.0/0*.
+1. For **Next hop type**, select **Virtual appliance**.
+ Azure Firewall is actually a managed service, but virtual appliance works in this situation.
+1. For **Next hop address**, enter the private IP address for the firewall that you noted previously.
+1. Select **OK**.
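+
+The same route can be scripted with Azure PowerShell. Here's a minimal sketch; the firewall private IP address is a placeholder:
+
+```azurepowershell
+# Sketch: create the route table, add the 0.0.0.0/0 route through the firewall,
+# and associate it with only the Workload-SN subnet.
+$routeTable = New-AzRouteTable -Name "Firewall-route" -ResourceGroupName "Test-FW-RG" -Location "<region>"
+$routeTable | Add-AzRouteConfig -Name "fw-dg" -AddressPrefix "0.0.0.0/0" `
+  -NextHopType VirtualAppliance -NextHopIpAddress "<firewall-private-ip>" | Set-AzRouteTable
+
+$vnet = Get-AzVirtualNetwork -Name "Test-FW-VN" -ResourceGroupName "Test-FW-RG"
+Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Workload-SN" `
+  -AddressPrefix "10.1.2.0/24" -RouteTable $routeTable | Set-AzVirtualNetwork
+```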
+
+## Configure an application rule
+
+This is the application rule that allows outbound access to `www.google.com`.
+
+1. Open the **Test-FW-RG** resource group, and select the **fw-test-pol** firewall policy.
+1. Select **Application rules**.
+1. Select **Add a rule collection**.
+1. For **Name**, enter *App-Coll01*.
+1. For **Priority**, enter *200*.
+1. For **Rule collection action**, select **Allow**.
+1. Under **Rules**, for **Name**, enter *Allow-Google*.
+1. For **Source type**, select **IP address**.
+1. For **Source**, enter *10.1.2.0/24*.
+1. For **Protocol:port**, enter *http, https*.
+1. For **Destination Type**, select **FQDN**.
+1. For **Destination**, enter *`www.google.com`*
+1. Select **Add**.
+
+Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. For more information, see [Infrastructure FQDNs](infrastructure-fqdns.md).
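+
+The rule configuration above can also be scripted. This sketch creates the application rule, wraps it in a collection, and deploys it in a new rule collection group on the *fw-test-pol* policy. The group name *App-RCG* and its priority are illustrative choices, not values from this tutorial:
+
+```azurepowershell
+# Sketch: recreate the Allow-Google application rule from the portal steps.
+$rule = New-AzFirewallPolicyApplicationRule -Name "Allow-Google" -SourceAddress "10.1.2.0/24" `
+  -Protocol "http:80","https:443" -TargetFqdn "www.google.com"
+$collection = New-AzFirewallPolicyFilterRuleCollection -Name "App-Coll01" -Priority 200 `
+  -Rule $rule -ActionType Allow
+$policy = Get-AzFirewallPolicy -Name "fw-test-pol" -ResourceGroupName "Test-FW-RG"
+New-AzFirewallPolicyRuleCollectionGroup -Name "App-RCG" -Priority 300 `
+  -RuleCollection $collection -FirewallPolicyObject $policy
+```
+
+Network and DNAT rules follow the same pattern with `New-AzFirewallPolicyNetworkRule` and `New-AzFirewallPolicyNatRule`.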
+
+## Configure a network rule
+
+This is the network rule that allows outbound access to two IP addresses at port 53 (DNS).
+
+1. Select **Network rules**.
+2. Select **Add a rule collection**.
+3. For **Name**, enter *Net-Coll01*.
+4. For **Priority**, enter *200*.
+5. For **Rule collection action**, select **Allow**.
+1. For **Rule collection group**, select **DefaultNetworkRuleCollectionGroup**.
+1. Under **Rules**, for **Name**, enter *Allow-DNS*.
+1. For **Source type**, select **IP Address**.
+1. For **Source**, enter *10.1.2.0/24*.
+1. For **Protocol**, select **UDP**.
+1. For **Destination Ports**, enter *53*.
+1. For **Destination type** select **IP address**.
+1. For **Destination**, enter *209.244.0.3,209.244.0.4*.<br>These are public DNS servers operated by CenturyLink.
+2. Select **Add**.
+
+## Configure a DNAT rule
+
+This rule allows you to connect a remote desktop to the **Srv-Work** virtual machine through the firewall.
+
+1. Select the **DNAT rules**.
+2. Select **Add a rule collection**.
+3. For **Name**, enter *rdp*.
+1. For **Priority**, enter *200*.
+1. For **Rule collection group**, select **DefaultDnatRuleCollectionGroup**.
+1. Under **Rules**, for **Name**, enter *rdp-nat*.
+1. For **Source type**, select **IP address**.
+1. For **Source**, enter *\**.
+1. For **Protocol**, select **TCP**.
+1. For **Destination Ports**, enter *3389*.
+1. For **Destination Type**, select **IP Address**.
+1. For **Destination**, enter the firewall public IP address.
+1. For **Translated address**, enter the **Srv-work** private IP address.
+1. For **Translated port**, enter *3389*.
+1. Select **Add**.
++
+### Change the primary and secondary DNS address for the **Srv-Work** network interface
+
+For testing purposes in this tutorial, configure the server's primary and secondary DNS addresses. This isn't a general Azure Firewall requirement.
+
+1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Select the **Test-FW-RG** resource group.
+2. Select the network interface for the **Srv-Work** virtual machine.
+3. Under **Settings**, select **DNS servers**.
+4. Under **DNS servers**, select **Custom**.
+5. Enter *209.244.0.3* in the **Add DNS server** text box, and *209.244.0.4* in the next text box.
+6. Select **Save**.
+7. Restart the **Srv-Work** virtual machine.
+
+## Test the firewall
+
+Now, test the firewall to confirm that it works as expected.
+
+1. Connect a remote desktop to the firewall public IP address and sign in to the **Srv-Work** virtual machine.
+3. Open Internet Explorer and browse to `https://www.google.com`.
+4. Select **OK** > **Close** on the Internet Explorer security alerts.
+
+ You should see the Google home page.
+
+5. Browse to `https://www.microsoft.com`.
+
+ You should be blocked by the firewall.
+
+So now you've verified that the firewall rules are working:
+
+* You can browse to the one allowed FQDN, but not to any others.
+* You can resolve DNS names using the configured external DNS server.
+
+## Clean up resources
+
+You can keep your firewall resources for the next tutorial, or if no longer needed, delete the **Test-FW-RG** resource group to delete all firewall-related resources.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy and configure Azure Firewall Premium](premium-deploy.md)
frontdoor Scenario Storage Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scenario-storage-blobs.md
As a content delivery network (CDN), Front Door caches the content at its global
#### Authentication
-Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*ignore query strings* query string behavior](front-door-caching.md#query-string-behavior) to avoid Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately.
+Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*Cache every unique URL* query string behavior](front-door-caching.md#query-string-behavior) to prevent Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately.
#### Origin security
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
see [Tag support for Azure resources](../../../azure-resource-manager/management
The following Resource Provider modes are fully supported: -- `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure, and for Azure Policy components that target [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md) components such as pods, containers, and ingresses. Definitions
+- `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md). Definitions
using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. - `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy
The following Resource Provider modes are fully supported:
The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**: - `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.
+- `Microsoft.ManagedHSM.Data` for managing [Managed HSM](../../../key-vault/managed-hsm/overview.md) keys using Azure Policy.
> [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component level.
A condition evaluates whether a value meets certain criteria. The supported cond
`"greaterOrEquals": intValue` - `"exists": "bool"`
-When using **equals** or **notEquals** conditions, non-string values are converted into strings for evaluation. For example, `123` would be resolved into `"123"`, and `null` would be resolved into an empty string `""`. It is recommended that all values are entered as type string to begin with.
- For **less**, **lessOrEquals**, **greater**, and **greaterOrEquals**, if the property type doesn't match the condition type, an error is thrown. String comparisons are made using `InvariantCultureIgnoreCase`.
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Policies with mode `Microsoft.KeyVault.Data` are applicable if the `type` condit
- Microsoft.KeyVault.Data/vaults/keys - Microsoft.KeyVault.Data/vaults/secrets
+### Microsoft.ManagedHSM.Data
+
+Policies with mode `Microsoft.ManagedHSM.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type:
+- Microsoft.ManagedHSM.Data/managedHsms/keys
+ ### Microsoft.Network.Data Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type:
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
Previously updated : 12/27/2022 Last updated : 1/24/2023 # How to enable diagnostic settings for the MedTech service
-In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs and metrics to different destinations (for example: to an [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) or an [Azure storage account](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, backup, or troubleshooting of your MedTech service.
+In this article, you'll learn how to enable diagnostic settings for the MedTech service to:
+
+> [!div class="checklist"]
+> - Create a diagnostic setting to export logs and metrics for audit, analysis, or troubleshooting of the MedTech service.
+> - Use the Azure Log Analytics workspace to view the MedTech service logs.
+> - Access the MedTech service pre-defined Azure Log Analytics queries.
## Create a diagnostic setting for the MedTech service
In this article, you'll learn how to enable the diagnostic settings for the MedT
|Destination|Description| |--|--|
- |Log Analytics workspace|Metrics are converted to log form. Sending the metrics to the Azure Monitor Logs store (which is searchable via Log Analytics) enables you to integrate them into queries, alerts, and visualizations with existing log data.|
- |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive, and logs can be kept there indefinitely.|
- |Azure event hub|Sending logs and metrics to an event hub allows you to stream data to external systems such as third-party Security Information and Event Managements (SIEMs) and other Log Analytics solutions.|
+ |[Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md)|Metrics are converted to log form. Sending the metrics to the Azure Monitor Logs store (which is searchable via Log Analytics) enables you to integrate them into queries, alerts, and visualizations with existing log data.|
+ |[Azure storage account](../../storage/index.yml)|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive, and logs can be kept there indefinitely.|
+ |[Azure event hub](../../event-hubs/index.yml)|Sending logs and metrics to an event hub allows you to stream data to external systems such as third-party Security Information and Event Managements (SIEMs) and other Log Analytics solutions.|
|Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.| 5. Select the **Save** option to save your diagnostic setting selections.
In this article, you'll learn how to enable the diagnostic settings for the MedT
> > To learn about how to work with diagnostic logs, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
-## Use the Log Analytics workspace to view the MedTech service logs - Optional
+## Use the Azure Log Analytics workspace to view the MedTech service logs
-If you choose to include your Log Analytics workspace as a destination option for your diagnostic setting, you can view the error logs within **Logs** in your MedTech service. If there are any error logs, they'll be a result of exceptions for your MedTech service (for example: *HealthCheck* exceptions).
+If you choose to include your Log Analytics workspace as a destination option for your diagnostic setting, you can view the logs within **Logs** in your MedTech service. Any logs that appear are the result of exceptions for your MedTech service (for example: *HealthCheck* exceptions).
1. To access your Log Analytics workspace, select the **Logs** button within your MedTech service. :::image type="content" source="media/how-to-enable-diagnostic-settings/select-logs-button.png" alt-text="Screenshot of logs option." lightbox="media/how-to-enable-diagnostic-settings/select-logs-button.png":::
-2. Copy the below table query string into your Log Analytics workspace query window and select **Run**.
+2. Copy the following query into your Log Analytics workspace query area and select **Run**. Querying the *AHDSMedTechDiagnosticLogs* table returns every log in the table for the selected **Time range** setting (the default is **Last 24 hours**). The MedTech service also provides five pre-defined queries, covered later in [Accessing the MedTech service pre-defined Azure Log Analytics queries](how-to-enable-diagnostic-settings.md#accessing-the-medtech-service-pre-defined-azure-log-analytics-queries).
- ```Table
+ ```Kusto
AHDSMedTechDiagnosticLogs ``` :::image type="content" source="media/how-to-enable-diagnostic-settings/select-run-query.png" alt-text="Screenshot of query run option." lightbox="media/how-to-enable-diagnostic-settings/select-run-query.png":::
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png" alt-text="Screenshot of query after fixing error." lightbox="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png":::
+> [!WARNING]
+> The above custom query isn't saved automatically. If you leave your Log Analytics workspace without saving it, you'll have to recreate it.
+>
+> To learn how to save a custom query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](/azure/azure-monitor/logs/save-query).
+
+> [!TIP]
+> To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
+>
+> To learn how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot MedTech service errors](troubleshoot-errors.md).
+
+## Accessing the MedTech service pre-defined Azure Log Analytics queries
+
+The MedTech service comes with pre-defined queries that you can use anytime in your Log Analytics workspace to filter and summarize your logs for more precise investigation. The queries can also be customized, saved, and shared.
+
+1. To access the pre-defined queries, select **Queries**, type *MedTech* in the **Search** area, double-click a pre-defined query, and select **Run** to execute it. In this example, we've selected *MedTech healthcheck exceptions*; choose any pre-defined query you like.
+
+ > [!TIP]
+ > You can click on each of the MedTech service pre-defined queries to see their description and access different options for running the query or placing it into the Log Analytics workspace query area.
+
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png":::
+
+2. You can select multiple pre-defined queries. In this example, we've additionally selected *Log count per MedTech log or exception type*; choose any second pre-defined query you like.
+
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service and additional pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png":::
+
+3. Only the highlighted pre-defined query will be executed.
+
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of results of running a MedTech service and additional pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png":::
+
+> [!WARNING]
+> Any changes that you make to the pre-defined queries aren't saved. If you leave your Log Analytics workspace without saving those changes, you'll have to recreate them.
+>
+> To learn how to save a query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](/azure/azure-monitor/logs/save-query).
+ > [!TIP]
-> To learn about how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
+> To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
>
-> To learn about how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot the MedTech service error messages and conditions](troubleshoot-error-messages-and-conditions.md).
+> To learn how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot MedTech service errors](troubleshoot-errors.md).
## Next steps
-In this article, you learned how to enable the diagnostics settings for the MedTech service.
+In this article, you learned how to enable diagnostic settings for the MedTech service and use the Log Analytics workspace to query and view the MedTech service logs.
To learn about the MedTech service frequently asked questions (FAQs), see > [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
iot-dps Tutorial Automation Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-automation-github-actions.md
Only repository owners and admins can manage repository secrets.
Rather than providing your personal access credentials, we'll create a service principal and then add those credentials as repository secrets. Use the Azure CLI to create a new service principal. For more information, see [Create an Azure service principal](/cli/azure/create-an-azure-service-principal-azure-cli).
-The following command creates a service principal with *contributor* access to a specific resource group. Replace **<SUBSCRIPTION_ID>** and **<RESOURCE_GROUP_NAME>** with your own information.
+1. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a service principal with *contributor* access to a specific resource group. Replace `<SUBSCRIPTION_ID>` and `<RESOURCE_GROUP_NAME>` with your own information.
-This command requires owner or user access administrator roles in the subscription.
+ This command requires owner or user access administrator roles in the subscription.
-```azurecli
-az ad sp create-for-rbac --name github-actions-sp --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
-```
+ ```azurecli
+ az ad sp create-for-rbac --name github-actions-sp --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>
+ ```
+
+1. Copy the following items from the output of the service principal creation command to use in the next section:
-The output for this command includes a generated password for the service principal. Copy this password to use in the next section. You won't be able to access the password again.
+ * The *clientId*.
+ * The *clientSecret*. This is a generated password for the service principal that you won't be able to access again.
+ * The *tenantId*.
+
+1. Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to assign two more access roles to the service principal: *Device Provisioning Service Data Contributor* and *IoT Hub Data Contributor*. Replace `<SP_CLIENT_ID>` with the *clientId* value that you copied from the previous command's output.
+
+ ```azurecli
+ az role assignment create --assignee "<SP_CLIENT_ID>" --role "Device Provisioning Service Data Contributor" --resource-group "<RESOURCE_GROUP_NAME>"
+ ```
+
+ ```azurecli
+ az role assignment create --assignee "<SP_CLIENT_ID>" --role "IoT Hub Data Contributor" --resource-group "<RESOURCE_GROUP_NAME>"
+ ```
### Save service principal credentials as secrets
The output for this command includes a generated password for the service princi
1. Create a secret for your service principal ID. * **Name**: `APP_ID`
- * **Secret**: `github-actions-sp`, or the value you used for the service principal name if you used a different value.
+ * **Secret**: Paste the *clientId* that you copied from the output of the service principal creation command.
1. Select **Add secret**, then select **New repository secret** to add a second secret. 1. Create a secret for your service principal password. * **Name**: `SECRET`
- * **Secret**: Paste the password that you copied from the output of the service principal creation command.
+ * **Secret**: Paste the *clientSecret* that you copied from the output of the service principal creation command.
1. Select **Add secret**, then select **New repository secret** to add the final secret. 1. Create a secret for your Azure tenant. * **Name**: `TENANT`
- * **Secret**: Provide your Azure tenant. The value of this argument can either be an .onmicrosoft.com domain or the Azure object ID for the tenant.
+ * **Secret**: Paste the *tenantId* that you copied from the output of the service principal creation command.
1. Select **Add secret**.
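+
+If you prefer the command line, the same three secrets can be created with the GitHub CLI. A sketch, assuming `gh` is installed and authenticated against your repository; substitute the values you copied earlier:
+
+```bash
+gh secret set APP_ID --body "<SP_CLIENT_ID>"
+gh secret set SECRET --body "<SP_CLIENT_SECRET>"
+gh secret set TENANT --body "<TENANT_ID>"
+```
+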
For this tutorial, we'll create one workflow that contains jobs for each of the
* Provision an IoT Hub instance and a DPS instance. * Link the IoT Hub and DPS instances to each other.
-* Create an individual enrollment on the DPS instance, and register a device to the IoT hub via the DPS enrollment.
+* Create an individual enrollment on the DPS instance, and register a device to the IoT hub using symmetric key authentication via the DPS enrollment.
* Simulate the device for five minutes and monitor the IoT hub events. Workflows are YAML files that are located in the `.github/workflows/` directory of a repository.
Workflows are YAML files that are located in the `.github/workflows/` directory
jobs: ```
-1. Define the first job for our workflow, which we'll call the `provision` job. This job provisions the IoT Hub and DPS instances.
+1. Define the first job for our workflow, which we'll call the `provision` job. This job provisions the IoT Hub and DPS instances:
```yml provision:
Workflows are YAML files that are located in the `.github/workflows/` directory
az iot dps create -n "$DPS_NAME" -g "$RESOURCE_GROUP" ```
+ For more information about the commands run in this job, see:
+
+ * [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create)
+ * [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create)
+ 1. Define a job to `configure` the DPS and IoT Hub instances. Notice that this job uses the [needs](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds) parameter, which means that the `configure` job won't run until the listed job completes its own run successfully. ```yml
Workflows are YAML files that are located in the `.github/workflows/` directory
az iot dps linked-hub create --dps-name "$DPS_NAME" --hub-name "$HUB_NAME" ```
+ For more information about the commands run in this job, see:
+
+ * [az iot dps linked-hub create](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-create)
+ 1. Define a job called `register` that will create an individual enrollment and then use that enrollment to register a device to IoT Hub. ```yml
Workflows are YAML files that are located in the `.github/workflows/` directory
az iot device registration create -n "$DPS_NAME" --rid "$DEVICE_NAME" --auth-type login ```
+ > [!NOTE]
+ > This job and others use the parameter `--auth-type login` in some commands to indicate that the operation should use the service principal from the current Azure AD session. The alternative, `--auth-type key`, doesn't require the service principal configuration but is less secure.
+
+ For more information about the commands run in this job, see:
+
+ * [az iot dps enrollment create](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-create)
+ * [az iot device registration create](/cli/azure/iot/device/registration#az-iot-device-registration-create)
+ 1. Define a job to `simulate` an IoT device that will connect to the IoT hub and send sample telemetry messages. ```yml
Workflows are YAML files that are located in the `.github/workflows/` directory
az iot device simulate -n "$HUB_NAME" -d "$DEVICE_NAME" ```
+ For more information about the commands run in this job, see:
+
+ * [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate)
+ 1. Define a job to `monitor` the IoT hub endpoint for events, and watch messages coming in from the simulated device. Notice that the **simulate** and **monitor** jobs both define the **register** job in their `needs` parameter. This configuration means that once the **register** job completes successfully, both these jobs will run in parallel. ```yml
Workflows are YAML files that are located in the `.github/workflows/` directory
az iot hub monitor-events -n "$HUB_NAME" -y ```
+ For more information about the commands run in this job, see:
+
+ * [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events)
+ 1. The complete workflow file should look like this example, with your information replacing the placeholder values in the environment variables: ```yml
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
[!INCLUDE [iot-edge-version-1.1-or-1.4](includes/iot-edge-version-1.1-or-1.4.md)]
-This article provides end-to-end instructions for registering and provisioning a Linux IoT Edge device, including installing IoT Edge.
+This article provides end-to-end instructions for registering and provisioning a Linux IoT Edge device, which includes installing IoT Edge.
-Every device that connects to an IoT hub has a device ID that's used to track cloud-to-device or device-to-cloud communications. You configure a device with its connection information, which includes the IoT hub hostname, the device ID, and the information the device uses to authenticate to IoT Hub.
+Each device that connects to an [IoT hub](../iot-hub/index.yml) has a device ID that's used to track [cloud-to-device](../iot-hub/iot-hub-devguide-c2d-guidance.md) or [device-to-cloud](../iot-hub/iot-hub-devguide-d2c-guidance.md) communications. You configure a device with its connection information, which includes:
-The steps in this article walk through a process called manual provisioning, where you connect a single device to its IoT hub. For manual provisioning, you have two options for authenticating IoT Edge devices:
+* IoT hub hostname
+* Device ID
+* Authentication details to connect to IoT Hub
+
+The steps in this article walk through a process called *manual provisioning*, where you connect a single device to its IoT hub. For manual provisioning, you have two options for authenticating IoT Edge devices:
* **Symmetric keys**: When you create a new device identity in IoT Hub, the service creates two keys. You place one of the keys on the device, and it presents the key to IoT Hub when authenticating.
This article covers using symmetric keys as your authentication method. If you w
## Prerequisites
-This article covers registering your IoT Edge device and installing IoT Edge on it. These tasks have different prerequisites and utilities used to accomplish them. Make sure you have all the prerequisites covered before proceeding.
+This article shows how to register your IoT Edge device and install IoT Edge (also called the IoT Edge runtime) on your device. Before you register your device and install IoT Edge, make sure you have a device management tool of your choice, for example the Azure CLI, and that your device meets the requirements.
<!-- Device registration prerequisites H3 and content --> [!INCLUDE [iot-edge-prerequisites-register-device.md](includes/iot-edge-prerequisites-register-device.md)]
This article covers registering your IoT Edge device and installing IoT Edge on
<!-- Device requirements H3 and content --> [!INCLUDE [iot-edge-prerequisites-device-requirements-linux.md](includes/iot-edge-prerequisites-device-requirements-linux.md)]
+<!-- Azure IoT extensions for Visual Studio Code-->
+### Visual Studio Code extensions
+
+If you're using Visual Studio Code, helpful Azure IoT extensions can make the device creation and management process easier.
+
+Install both the Azure IoT Edge and Azure IoT Hub extensions:
+
+* [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge)
+
+* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
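+
+You can also install both extensions from the command line, assuming the `code` command is on your path:
+
+```bash
+code --install-extension vsciot-vscode.azure-iot-edge
+code --install-extension vsciot-vscode.azure-iot-toolkit
+```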
+
+<!-- Prerequisites end -->
+ <!-- Register your device and View provisioning information H2s and content --> [!INCLUDE [iot-edge-register-device-symmetric.md](includes/iot-edge-register-device-symmetric.md)]
This article covers registering your IoT Edge device and installing IoT Edge on
## Provision the device with its cloud identity
-Now that the container engine and the IoT Edge runtime are installed on your device, you're ready for the next step, which is to set up the device with its cloud identity and authentication information.
+Now that the container engine and the IoT Edge runtime are installed on your device, you're ready to set up the device with its cloud identity and authentication information.
<!-- 1.1 --> ::: moniker range="iotedge-2018-06"
After entering the provisioning information in the configuration file, restart t
<!-- iotedge-2020-11 --> ::: moniker range=">=iotedge-2020-11"
-You can quickly configure your IoT Edge device with symmetric key authentication using the following command:
+1. You can quickly configure your IoT Edge device with symmetric key authentication using the following command. Replace `PASTE_DEVICE_CONNECTION_STRING_HERE` with your own connection string.
```bash
- sudo iotedge config mp --connection-string 'PASTE_DEVICE_CONNECTION_STRING_HERE'
+ sudo iotedge config mp --connection-string 'PASTE_DEVICE_CONNECTION_STRING_HERE'
```
- The `iotedge config mp` command creates a configuration file on the device and enters your connection string in the file.
+ This `iotedge config mp` command creates a configuration file on the device and adds your connection string to it.
-Apply the configuration changes.
+1. Apply the configuration changes.
```bash sudo iotedge config apply ```
-If you want to see the configuration file, you can open it:
+1. To view the configuration file, you can open it:
```bash sudo nano /etc/aziot/config.toml ```-
+Verify successful configuration
<!-- end iotedge-2020-11 --> ::: moniker-end
+## Deploy modules
+
+To deploy your IoT Edge modules, go to your IoT hub in the Azure portal, then:
+
+1. Select **Devices** from the IoT Hub menu.
+
+1. Select your device to open its page.
+
+1. Select the **Set Modules** tab.
+
+1. Since we want to deploy the IoT Edge default modules (edgeAgent and edgeHub), we don't need to add any modules to this pane, so select **Review + create** at the bottom.
+
+1. You see the JSON confirmation of your modules. Select **Create** to deploy the modules.
+
+For more information, see [Deploy a module](quickstart-linux.md#deploy-a-module).
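+
+If you'd rather deploy from the command line, a minimal Azure CLI sketch follows. It assumes the `azure-iot` CLI extension and an existing deployment manifest file; the hub name, device ID, and file path are placeholders.
+
+```azurecli
+az iot edge set-modules \
+  --hub-name <your-iot-hub-name> \
+  --device-id <your-device-id> \
+  --content ./deployment.json
+```
+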
+ ## Verify successful configuration Verify that the runtime was successfully installed and configured on your IoT Edge device.
->[!TIP]
->You need elevated privileges to run `iotedge` commands. Once you sign out of your machine and sign back in the first time after installing the IoT Edge runtime, your permissions are automatically updated. Until then, use `sudo` in front of the commands.
+> [!TIP]
+> You need elevated privileges to run `iotedge` commands. Once you sign out of your machine and sign back in the first time after installing the IoT Edge runtime, your permissions are automatically updated. Until then, use `sudo` in front of the commands.
-Check to see that the IoT Edge system service is running.
+1. Check to see that the IoT Edge system service is running.
-<!-- 1.1 -->
+ <!-- 1.1 -->
+ ::: moniker range="iotedge-2018-06"
```bash sudo systemctl status iotedge ```
+ ::: moniker-end
-<!-- iotedge-2020-11 -->
+ <!-- iotedge-2020-11 -->
+ ::: moniker range=">=iotedge-2020-11"
```bash sudo iotedge system status ```
-A successful status response is `Ok`.
+ A successful status response shows the `aziot` services as running or ready.
+ ::: moniker-end
-If you need to troubleshoot the service, retrieve the service logs.
+1. To troubleshoot the service, retrieve the service logs.
-<!-- 1.1 -->
+ <!-- 1.1 -->
+ ::: moniker range="iotedge-2018-06"
```bash journalctl -u iotedge ```
+ ::: moniker-end
-<!-- iotedge-2020-11 -->
+ <!-- iotedge-2020-11 -->
+ ::: moniker range=">=iotedge-2020-11"
```bash sudo iotedge system logs ```
+ ::: moniker-end
-Use the `check` tool to verify configuration and connection status of the device.
+1. Use the `check` tool to verify configuration and connection status of the device.
```bash sudo iotedge check ```
->[!TIP]
->Always use `sudo` to run the check tool, even after your permissions are updated. The tool needs elevated privileges to access the config file to verify configuration status.
+ You can expect a range of responses that may include **OK** (green), **Warning** (yellow), or **Error** (red).
->[!NOTE]
->On a newly provisioned device, you may see an error related to IoT Edge Hub:
->
->**× production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error**
->
->**Could not check current state of edgeHub container**
->
->This error is expected on a newly provisioned device because the IoT Edge Hub module isn't running. To resolve the error, in IoT Hub, set the modules for the device and create a deployment. Creating a deployment for the device starts the modules on the device including the IoT Edge Hub module.
+ :::image type="content" source="media/how-to-provision-single-device-linux-symmetric/config-checks.png" alt-text="Screenshot of sample responses from the check command." lightbox="media/how-to-provision-single-device-linux-symmetric/config-checks.png":::
+
+ >[!TIP]
+ >Always use `sudo` to run the check tool, even after your permissions are updated. The tool needs elevated privileges to access the config file to verify configuration status.
+
+ >[!NOTE]
+ >On a newly provisioned device, you may see an error related to IoT Edge Hub:
+ >
+ >**× production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error**
+ >**Could not check current state of edgeHub container**
+ >
+ >This error is expected on a newly provisioned device because the IoT Edge Hub module is not yet running. Be sure your IoT Edge modules were deployed in the previous steps. Deployment resolves this error.
+ >
+ >Alternatively, you may see a status code as `417 -- The device's deployment configuration is not set`. Once your modules are deployed, this status will change.
+ >
-View all the modules running on your IoT Edge device. When the service starts for the first time, you should only see the **edgeAgent** module running. The edgeAgent module runs by default and helps to install and start any additional modules that you deploy to your device.
+1. When the service starts for the first time, you should only see the **edgeAgent** module running. The edgeAgent module runs by default and helps to install and start any additional modules that you deploy to your device.
+
+ Check that your device and modules are deployed and running, by viewing your device page in the Azure portal.
+
+ :::image type="content" source="media/how-to-provision-single-device-linux-symmetric/modules-deployed.png" alt-text="Screenshot of IoT Edge modules deployed and running confirmation in the Azure portal.":::
+
+ Once your modules are deployed and running, list them in your device or virtual machine with the following command:
```bash sudo iotedge list ```
-When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
- ## Offline or specific version installation (optional) The steps in this section are for scenarios not covered by the standard installation steps. This may include:
-* Install IoT Edge while offline
-* Install a release candidate version
+* Installing IoT Edge while offline
+* Installing a release candidate version
-Use the steps in this section if you want to install a specific version of the Azure IoT Edge runtime that isn't available through your package manager. The Microsoft package list only contains a limited set of recent versions and their sub-versions, so these steps are for anyone who wants to install an older version or a release candidate version.
+Use the steps in this section if you want to install a [specific version of the Azure IoT Edge runtime](version-history.md) that isn't available through your package manager. The Microsoft package list only contains a limited set of recent versions and their sub-versions, so these steps are for anyone who wants to install an older version or a release candidate version.
Using curl commands, you can target the component files directly from the IoT Edge GitHub repository.
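+
+As a sketch of that pattern, the version tag and package file name below are placeholders you take from the release you want on the [azure-iotedge releases page](https://github.com/Azure/azure-iotedge/releases):
+
+```bash
+# Download a specific package asset directly from the GitHub release
+curl -L "https://github.com/Azure/azure-iotedge/releases/download/<VERSION>/<PACKAGE_FILE>" -o <PACKAGE_FILE>
+
+# Install the downloaded package locally (Debian/Ubuntu example)
+sudo apt-get install ./<PACKAGE_FILE>
+```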
iot-edge Iot Edge Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-runtime.md
The IoT Edge runtime is responsible for the following functions on IoT Edge devi
* Report module health to the cloud for remote monitoring.
-* Manage communication between downstream devices and IoT Edge devices.
-
-* Manage communication between modules on an IoT Edge device.
-
-* Manage communication between an IoT Edge device and the cloud.
-<!-- iotedge-2020-11 -->
-* Manage communication between IoT Edge devices.
+* Manage communication between:
+ - Downstream devices and IoT Edge devices
+ - Modules on an IoT Edge device
+ - An IoT Edge device and the cloud
+ - IoT Edge devices
![Runtime communicates insights and module health to IoT Hub](./media/iot-edge-runtime/Pipeline.png)
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
For more information, see [Collect and transport metrics](how-to-collect-and-tra
## Check your IoT Edge version
-If you're running an older version of IoT Edge, then upgrading may resolve your issue. The `iotedge check` tool checks that the IoT Edge security daemon is the latest version, but does not check the versions of the IoT Edge hub and agent modules. To check the version of the runtime modules on your device, use the commands `iotedge logs edgeAgent` and `iotedge logs edgeHub`. The version number is declared in the logs when the module starts up.
+If you're running an older version of IoT Edge, then upgrading may resolve your issue. The `iotedge check` tool checks that the IoT Edge security daemon is the latest version, but doesn't check the versions of the IoT Edge hub and agent modules. To check the version of the runtime modules on your device, use the commands `iotedge logs edgeAgent` and `iotedge logs edgeHub`. The version number is declared in the logs when the module starts up.
For instructions on how to update your device, see [Update the IoT Edge security daemon and runtime](how-to-update-iot-edge.md).
You can retrieve the container logs from several places:
## Clean up container logs
-By default the Moby container engine does not set container log size limits. Over time this can lead to the device filling up with logs and running out of disk space. If large container logs are affecting your IoT Edge device performance, use the following command to force remove the container along with its related logs.
+By default the Moby container engine doesn't set container log size limits. Over time, extensive logs can fill up the device and exhaust its disk space. If large container logs are affecting your IoT Edge device performance, use the following command to force remove the container along with its related logs.
If you're still troubleshooting, wait until after you've inspected the container logs to take this step.
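+
+As a sketch of that cleanup (the container name `edgeHub` here is only an example; substitute the failing container's name):
+
+```bash
+# Force-remove the container together with its logs
+sudo docker rm --force edgeHub
+```
+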
You can also restart modules remotely from the Azure portal. For more informatio
Azure IoT Edge allows communication from an on-premises server to Azure cloud using supported IoT Hub protocols, see [choosing a communication protocol](../iot-hub/iot-hub-devguide-protocols.md). For enhanced security, communication channels between Azure IoT Edge and Azure IoT Hub are always configured to be Outbound. This configuration is based on the [Services Assisted Communication pattern](/archive/blogs/clemensv/service-assisted-communication-for-connected-devices), which minimizes the attack surface for a malicious entity to explore. Inbound communication is only required for [specific scenarios](#anchortext) where Azure IoT Hub needs to push messages to the Azure IoT Edge device. Cloud-to-device messages are protected using secure TLS channels and can be further secured using X.509 certificates and TPM device modules. The Azure IoT Edge Security Manager governs how this communication can be established, see [IoT Edge Security Manager](../iot-edge/iot-edge-security-manager.md).
-While IoT Edge provides enhanced configuration for securing Azure IoT Edge runtime and deployed modules, it is still dependent on the underlying machine and network configuration. Hence, it is imperative to ensure proper network and firewall rules are set up for secure edge to cloud communication. The following table can be used as a guideline when configuration firewall rules for the underlying servers where Azure IoT Edge runtime is hosted:
+While IoT Edge provides enhanced configuration for securing Azure IoT Edge runtime and deployed modules, it's still dependent on the underlying machine and network configuration. Hence, it's imperative to ensure proper network and firewall rules are set up for secure edge to cloud communication. The following table can be used as a guideline when configuring firewall rules for the underlying servers where Azure IoT Edge runtime is hosted:
|Protocol|Port|Incoming|Outgoing|Guidance| |--|--|--|--|--|
-|MQTT|8883|BLOCKED (Default)|BLOCKED (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open when using MQTT as the communication protocol.<li>1883 for MQTT is not supported by IoT Edge. <li>Incoming (Inbound) connections should be blocked.</ul>|
-|AMQP|5671|BLOCKED (Default)|OPEN (Default)|<ul> <li>Default communication protocol for IoT Edge. <li> Must be configured to be Open if Azure IoT Edge is not configured for other supported protocols or AMQP is the desired communication protocol.<li>5672 for AMQP is not supported by IoT Edge.<li>Block this port when Azure IoT Edge uses a different IoT Hub supported protocol.<li>Incoming (Inbound) connections should be blocked.</ul></ul>|
-|HTTPS|443|BLOCKED (Default)|OPEN (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open on 443 for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS). <li><a id="anchortext">Incoming (Inbound) connection</a> should be Open only for specific scenarios: <ul> <li> If you have a transparent gateway with downstream devices that may send method requests. In this case, Port 443 does not need to be open to external networks to connect to IoTHub or provide IoTHub services through Azure IoT Edge. Thus the incoming rule could be restricted to only open Incoming (Inbound) from the internal network. <li> For Client to Device (C2D) scenarios.</ul><li>80 for HTTP is not supported by IoT Edge.<li>If non-HTTP protocols (for example, AMQP or MQTT) cannot be configured in the enterprise; the messages can be sent over WebSockets. Port 443 will be used for WebSocket communication in that case.</ul>|
+|MQTT|8883|BLOCKED (Default)|BLOCKED (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open when using MQTT as the communication protocol.<li>1883 for MQTT isn't supported by IoT Edge. <li>Incoming (Inbound) connections should be blocked.</ul>|
+|AMQP|5671|BLOCKED (Default)|OPEN (Default)|<ul> <li>Default communication protocol for IoT Edge. <li> Must be configured to be Open if Azure IoT Edge isn't configured for other supported protocols or AMQP is the desired communication protocol.<li>5672 for AMQP isn't supported by IoT Edge.<li>Block this port when Azure IoT Edge uses a different IoT Hub supported protocol.<li>Incoming (Inbound) connections should be blocked.</ul></ul>|
+|HTTPS|443|BLOCKED (Default)|OPEN (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open on 443 for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS). <li><a id="anchortext">Incoming (Inbound) connection</a> should be Open only for specific scenarios: <ul> <li> If you have a transparent gateway with downstream devices that may send method requests. In this case, Port 443 doesn't need to be open to external networks to connect to IoTHub or provide IoTHub services through Azure IoT Edge. Thus the incoming rule could be restricted to only open Incoming (Inbound) from the internal network. <li> For Client to Device (C2D) scenarios.</ul><li>80 for HTTP isn't supported by IoT Edge.<li>If non-HTTP protocols (for example, AMQP or MQTT) can't be configured in the enterprise; the messages can be sent over WebSockets. Port 443 will be used for WebSocket communication in that case.</ul>|
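+
+As an illustration of the outbound-only posture above, host firewall rules on a Linux server running IoT Edge might look like the following. This is a sketch that assumes `ufw` is your host firewall; adapt it to your own firewall tooling and enterprise policy.
+
+```bash
+# Block unsolicited inbound traffic; allow only the outbound ports IoT Edge needs
+sudo ufw default deny incoming
+sudo ufw allow out 8883/tcp   # MQTT
+sudo ufw allow out 5671/tcp   # AMQP (IoT Edge default)
+sudo ufw allow out 443/tcp    # HTTPS, provisioning, WebSockets
+```
+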
## Last resort: stop and recreate all containers
-Sometimes, a system might require significant special modification to work with existing networking or operating system constraints. For example, a system could require a different data disk mount and proxy settings. If you tried all steps above and still get container failures, it's possible that somewhere the docker system caches or persisted network settings are not up to date with the latest reconfiguration. In this case, the last resort option is to use [`docker prune`](https://docs.docker.com/engine/reference/commandline/system_prune/) get a clean start from scratch.
+Sometimes, a system might require significant special modification to work with existing networking or operating system constraints. For example, a system could require a different data disk mount and proxy settings. If you tried all previous steps and still get container failures, the docker system caches or persisted network settings might not be up to date with the latest reconfiguration. In this case, the last resort option is to use [`docker prune`](https://docs.docker.com/engine/reference/commandline/system_prune/) to get a clean start from scratch.
-The followling command stops the IoT Edge system (and thus all containers), uses the "all" and "volume" option for `docker prune` to remove all containers and volumes. Review the warning that the command issues and confirm with `y` when ready.
+The following command stops the IoT Edge system (and thus all containers) and uses the "all" and "volume" options for `docker prune` to remove all containers and volumes. Review the warning that the command issues and confirm with `y` when ready.
```bash sudo iotedge system stop
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md
For handler properties, you may need to escape certain characters in your JSON.
The `init` command supports advanced scenarios, including the [related files feature](related-files.md) that allows you to define the relationship between different update files. For more examples and a complete list of optional parameters, see [az iot du init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5).
-Once you've created your import manifest and saved it as a JSON file, if you're ready to [import your update](import-update.md).
+Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md).
## Create an advanced Device Update import manifest for a proxy update
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
sudo ./DiffGenTool
## Import the generated delta update
-### Generate import manifest
- The basic process of importing an update to the Device Update service is unchanged for delta updates, so if you haven't already, be sure to review this page: [How to prepare an update to be imported into Azure Device Update for IoT Hub](create-update.md)
+### Generate import manifest
+ The first step to import an update into the Device Update service is always to create an import manifest if you don't already have one. For more information about import manifests, see [Importing updates into Device Update](import-concepts.md#import-manifest). For delta updates, your import manifest will need to reference two files:+ - The _recompressed_ target SWU image created when you ran the DiffGen tool. - The delta file created when you ran the DiffGen tool.
-The delta update feature uses a new capability called [Related Files](related-files.md), which requires an import manifest that is version 5 or later.
+The delta update feature uses a capability called [related files](related-files.md), which requires an import manifest that is version 5 or later.
-To create an import manifest for your delta update using the Related Files feature, you'll need to add [relatedFiles](import-schema.md#relatedfiles-object) and [downloadHandler](import-schema.md#downloadhandler-object) elements to your import manifest.
+To create an import manifest for your delta update using the related files feature, you'll need to add [relatedFiles](import-schema.md#relatedfiles-object) and [downloadHandler](import-schema.md#downloadhandler-object) objects to your import manifest.
-The `relatedFiles` element is used to specify information about the delta update file, including the file name, file size and sha256 hash (examples available at the link above). Importantly, you also need to specify two properties which are unique to the delta update feature:
+Use the `relatedFiles` object to specify information about the delta update file, including the file name, file size, and sha256 hash. Importantly, you also need to specify two properties that are unique to the delta update feature:
```json "properties": {
The `relatedFiles` element is used to specify information about the delta update
"microsoft.sourceFileHash": "[insert the source SWU image file hash]" } ```+ Both of the properties above are specific to your _source SWU image file_ that you used as an input to the DiffGen tool when creating your delta update. The information about the source SWU image is needed in your import manifest even though you will not actually be importing the source image. The delta components on the device use this metadata about the source image to locate the image on the device once the delta has been downloaded.
-The `downloadHandler` element is used to specify how the Device Update agent will orchestrate the delta update, using the Related Files feature. Unless you are customizing your own version of the Device Update agent for delta functionality, you should only use this downloadHandler:
+Use the `downloadHandler` object to specify how the Device Update agent will orchestrate the delta update, using the related files feature. Unless you are customizing your own version of the Device Update agent for delta functionality, you should only use this downloadHandler:
```json "downloadHandler": { "id": "microsoft/delta:1" } ```
-You can use the Azure Command Line Interface (CLI) to generate an import manifest for your delta update. If you haven't used the Azure CLI to create an import manifest before, refer to [these instructions](create-update.md#create-a-basic-device-update-import-manifest).
+
+You can use the Azure Command Line Interface (CLI) to generate an import manifest for your delta update. If you haven't used the Azure CLI to create an import manifest before, see [Create a basic import manifest](create-update.md#create-a-basic-device-update-import-manifest).
```azurecli
- az iot du update init v5
update-provider <replace with your Provider> --update-name <replace with your update Name> --update-version <replace with your update Version>compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report>step handler=microsoft/swupdate:2 properties=<replace with any desired handler properties (JSON-formatted), such as '{"installedCriteria": "1.0"}'>file path=<replace with path(s) to your update file(s), including the full file name> downloadHandler=microsoft/delta:1related-file path=<replace with path(s) to your delta file(s), including the full file name> properties='{"microsoft.sourceFileHashAlgorithm": "sha256", "microsoft.sourceFileHash": "<replace with the source SWU image file hash>"}'
+az iot du update init v5
+--update-provider <replace with your Provider> --update-name <replace with your update Name> --update-version <replace with your update Version> --compat manufacturer=<replace with the value your device will report> model=<replace with the value your device will report> --step handler=microsoft/swupdate:2 properties=<replace with any desired handler properties (JSON-formatted), such as '{"installedCriteria": "1.0"}'> --file path=<replace with path(s) to your update file(s), including the full file name> downloadHandler=microsoft/delta:1 --related-file path=<replace with path(s) to your delta file(s), including the full file name> properties='{"microsoft.sourceFileHashAlgorithm": "sha256", "microsoft.sourceFileHash": "<replace with the source SWU image file hash>"}'
``` Save your generated import manifest JSON to a file with the extension `.importmanifest.json`.
iot-hub-device-update Device Update Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-resources.md
The following Message Routes are automatically configured in your linked IoT hub
:::image type="content" source="media/device-update-resources/consumer-group.png" alt-text="Screenshot of consumer groups." lightbox="media/device-update-resources/consumer-group.png":::
-### Configuring access for Azure Device Update service principal in the IoT Hub
-
-Device Update for IoT Hub communicates with the IoT Hub for deployments and manage updates at scale. In order to enable Device Update to do this, users need to set IoT Hub Data Contributor access for Azure Device Update Service Principal in the IoT Hub permissions.
-
-Deployment, device and update management and diagnostic actions will not be allowed if these permissions are not set. Operations that will be blocked will include:
-* Create Deployment
-* Cancel Deployment
-* Retry Deployment
-* Get Device
-
-The permission can be set from IoT Hub Access Control (IAM). Refer to [Configure Access for Azure Device update service principal in linked IoT hub](configure-access-control-device-update.md#configure-access-for-azure-device-update-service-principal-in-linked-iot-hub)
- ## Next steps [Create device update resources](./create-device-update-account.md)
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
For example:
} } ```+ ## relatedFiles object Collection of related files to one or more of your primary payload files.
For example:
} ], ```+
+For more information, see [Use the related files feature to reference multiple update files](related-files.md).
+ ## downloadHandler object Specifies how to process any related files.
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-update.md
An Azure CLI environment:
This section shows how to import an update using either the Azure portal or the Azure CLI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-using-apis-instead) to import an update instead.
-To import an update, you first upload the update files and import manifest into an Azure Storage container. Then, you import the update from Azure Storage into Device Update for IoT Hub.
+To import an update, you first upload the update files and import manifest into an Azure Storage container. Then, you import the update from Azure Storage into Device Update for IoT Hub, where it will be stored for you to deploy to devices.
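+
+As a small sketch of the upload step with the Azure CLI (the account, container, and file names are placeholders; the tabs below walk through the full flow):
+
+```azurecli
+az storage blob upload \
+  --account-name <storage-account> \
+  --container-name <container> \
+  --name <update-file-name> \
+  --file <local-path-to-update-file>
+```
+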
# [Azure portal](#tab/portal)
iot-hub-device-update Related Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/related-files.md
Title: Related files for Device Update for Azure IoT Hub | Microsoft Docs
-description: Understand the Device Update for IoT Hub related files feature.
+ Title: Related files for Device Update for Azure IoT Hub
+description: Create import manifests that reference multiple update files using the Device Update for IoT Hub related files feature.
Previously updated : 08/23/2022 Last updated : 01/24/2023
-# Use the related files feature in Device Update for IoT Hub
+# Use the related files feature to reference multiple update files
Use the related files feature when you need to express relationships between different update files in a single update.
-## What is the related files feature?
-
-When importing an update to Device Update for IoT Hub, an import manifest containing metadata about the update payload is required. The file-level metadata in the import manifest can be a flat list of update payload files in the simplest case. However, for more advanced scenarios, you can instead use the related files feature, which provides a way for files to have a relationship specified between them.
+When you import an update to Device Update for IoT Hub, an import manifest containing metadata about the update payload is required. The file-level metadata in the import manifest can be a flat list of update payload files in the simplest case. However, for more advanced scenarios, the related files feature provides a way for you to specify relationships between multiple update files.
When creating an import manifest using the related files feature, you can add a collection of _related_ files to one or more of your _primary_ payload files. An example of this concept is the Device Update [delta update](delta-updates.md) feature, which uses related files to specify a delta update that is associated with a full image file. In the delta scenario, the related files feature allows the full image and delta update to both be imported as a single update action, and then either one can be deployed to a device. However, the related files feature isn't limited to delta updates, since it's designed to be extensible by our customers depending on their own unique scenarios.
-### Example import manifest using related files
+## How to define related files
+
+The related files feature is available for import manifests that are version 5 or later.
+
+When you add related files to an import manifest, include the following information:
+
+* File details
+
+ Define the related files by providing the filename, size, and hash.
+
+* A download handler
+
+ Specify how to process these related files to produce the target file. You specify the processing approach by including a `downloadHandler` property in your import manifest. Including `downloadHandler` is required if you specify a non-empty collection of `relatedFiles` in a `file` element. You can specify a `downloadHandler` using a simple `id` property. The download handler `id` has a limit of 64 ASCII characters.
+
+* Related files properties
+
+ You can provide extra metadata for the update handler on your device to know how to interpret and properly use the files that you've specified as related files. This metadata is added as part of a `properties` property bag to the `file` and `relatedFile` objects.
-Below is an example of an import manifest that uses the related files feature to import a delta update. In this example, you can see that in the `files` section, there's a full image specified (`full-image-file-name`) with a `properties` item. The `properties` item in turn has an associated `relatedFiles` item below it. Within the `relatedFiles` section, you can see another `properties` section for the delta update file (`delta-from-v1-file-name`), and also a `downloadHandler` item with the appropriate `id` listed (`microsoft/delta:1`).
+For more information about the import schema for related files, see [relatedFiles object](import-schema.md#relatedfiles-object).
+
+## Example import manifest using related files
+
+The following sample import manifest demonstrates how the related files feature is used to import a delta update. In this example, you can see that in the `files` section, there's a full image specified (`full-image-file-name`) with a `properties` item. The `properties` item in turn has an associated `relatedFiles` item below it. Within the `relatedFiles` section, you can see another `properties` section for the delta update file (`delta-from-v1-file-name`), and also a `downloadHandler` item with the appropriate `id` listed (`microsoft/delta:1`).
+
+>[!NOTE]
+>This example uses delta updates to demonstrate how to reference related files. If you want to use delta updates as a feature, learn more in the [delta update documentation](delta-updates.md).
```json
{
Below is an example of an import manifest that uses the related files feature to
}
```
-## How to use related files
+## Example init command using related files
->[!NOTE]
->The documentation on this page uses delta updates as an example of how to use related files. If you want to use delta updates as a _feature_, follow the [delta update documentation](delta-updates.md).
-
-### Related files properties
+The [az iot du update init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5) command for creating an import manifest supports an optional `--related-file` parameter.
-In certain scenarios, you may want to provide extra metadata for the update handler on your device to know how to interpret and properly use the files that you've specified as related files. This metadata is added as part of a `properties` property bag to the `file` and `relatedFile` objects.
+The `--related-file` parameter takes `path` and `properties` keys:
-### Specify a download handler
+```azurecli
+--related-file path=<replace with path(s) to your delta file(s), including the full file name> properties='{"microsoft.sourceFileHashAlgorithm": "sha256", "microsoft.sourceFileHash": "<replace with the source SWU image file hash>"}'
+```
-When you use the related files feature, you need to specify how to process these related files to produce the target file. You specify the processing approach by including a `downloadHandler` property in your import manifest. Including `downloadHandler` is required if you specify a non-empty collection of `relatedFiles` in a `file` element. You can specify a `downloadHandler` using a simple `id` property. The Download handler `id` has a limit of 64 ASCII characters.
+For example:
+
+```azurecli
+az iot du update init v5 \
+--update-provider Microsoft --update-name myBundled --update-version 2.0 \
+--compat manufacturer=Contoso model=SpaceStation \
+--step handler=microsoft/script:1 properties='{"arguments": "--pre"}' description="Pre-install script" \
+--file path=/my/update/scripts/preinstall.sh downloadHandler=microsoft/delta:1 \
+--related-file path=/my/update/scripts/related_preinstall.json properties='{"microsoft.sourceFileHashAlgorithm": "sha256"}' \
+--step updateId.provider=Microsoft updateId.name=SwUpdate updateId.version=1.1 \
+--step handler=microsoft/script:1 properties='{"arguments": "--post"}' description="Post-install script" \
+--file path=/my/update/scripts/postinstall.sh
+```
## Next steps
+* Learn about [import manifest schema](import-schema.md)
+* Learn about [delta updates](delta-updates.md)
key-vault About Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys.md
tags: azure-resource-manager
Previously updated : 02/17/2021 Last updated : 01/24/2023 # About keys
-Azure Key Vault provides two types of resources to store and manage cryptographic keys. Vaults support software-protected and HSM-protected (Hardware Security Module) keys. Managed HSMs only support HSM-protected keys.
+Azure Key Vault provides two types of resources to store and manage cryptographic keys. Vaults support software-protected and HSM-protected (Hardware Security Module) keys. Managed HSMs only support HSM-protected keys.
|Resource type|Key protection methods|Data-plane endpoint base URL|
|--|--|--|
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/byok-specification.md
Title: Bring your own key specification - Azure Key Vault | Microsoft Docs
description: This document describes the bring your own key specification. - tags: azure-resource-manager Previously updated : 02/04/2021 Last updated : 01/24/2023
This document describes specifications for importing HSM-protected keys from cus
## Scenario
-A Key Vault customer would like to securely transfer a key from their on-premises HSM outside Azure, into the HSM backing Azure Key Vault. The process of importing a key generated outside Key Vault is generally referred to as Bring Your Own Key (BYOK).
+A Key Vault customer would like to securely transfer a key from their on-premises HSM outside Azure, into the HSM backing Azure Key Vault. The process of importing a key generated outside Key Vault is referred to as Bring Your Own Key (BYOK).
The following are the requirements:

* The key to be transferred never exists outside an HSM in plain text form.
|Key Name|Key Type|Origin|Description|
|--|--|--|--|
|Key Exchange Key (KEK)|RSA|Azure Key Vault HSM|An HSM-backed RSA key pair generated in Azure Key Vault|
-Wrapping Key|AES|Vendor HSM|An [ephemeral] AES key generated by HSM on-prem
+Wrapping Key|AES|Vendor HSM|An [ephemeral] AES key generated by HSM on-premises
|Target Key|RSA, EC, AES (Managed HSM only)|Vendor HSM|The key to be transferred to the Azure Key Vault HSM|

**Key Exchange Key**: An HSM-backed key that the customer generates in the key vault where the BYOK key will be imported. This KEK must have the following properties:
To perform a key transfer, a user performs the following steps:
Customers use the BYOK tool and documentation provided by the HSM vendor to complete Step 3. It produces a Key Transfer Blob (a ".byok" file).

## HSM constraints

Existing HSMs may apply constraints on the keys that they manage, including:
* The HSM may need to be configured to allow key wrap-based export
* The target key may need to be marked CKA_EXTRACTABLE for the HSM to allow controlled export
-* In some cases, the KEK and wrapping key may need to be marked as CKA_TRUSTED. This allows it to be used to wrap keys in the HSM.
+* In some cases, the KEK and wrapping key may need to be marked as CKA_TRUSTED, which allows it to be used to wrap keys in the HSM.
The configuration of the source HSM is generally outside the scope of this specification. Microsoft expects the HSM vendor to produce documentation accompanying their BYOK tool to include any such configuration steps.

> [!NOTE]
-> Steps 1, 2, and 4 described below can be performed using other interfaces such as Azure PowerShell and Azure Portal. They can also be performed programmatically using equivalent functions in Key Vault SDK.
+> Several of these steps can be performed using other interfaces, such as Azure PowerShell and the Azure portal. They can also be performed programmatically by using equivalent functions in the Key Vault SDK.
-### Step 1: Generate KEK
+### Generate KEK
Use the **az keyvault key create** command to create a KEK with key operations set to import. Note down the key identifier (`kid`) returned by the following command.
az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import
> [!NOTE]
> Services support different KEK lengths; Azure SQL, for instance, only supports key lengths of [2048 or 3072 bytes](/azure/azure-sql/database/transparent-data-encryption-byok-overview#requirements-for-configuring-customer-managed-tde). Consult the documentation for your service for specifics.
-### Step 2: Retrieve the public key of the KEK
+### Retrieve the public key of the KEK
Download the public key portion of the KEK and store it into a PEM file.
```azurecli
az keyvault key download --name KEKforBYOK --vault-name ContosoKeyVaultHSM --file KEKforBYOK.publickey.pem
```
-### Steps 3: Generate key transfer blob using HSM vendor provided BYOK tool
+### Generate key transfer blob using HSM vendor provided BYOK tool
The customer uses the HSM vendor-provided BYOK tool to create a key transfer blob (stored as a ".byok" file). The KEK public key (as a .pem file) is one of the inputs to this tool.
If CKM_RSA_AES_KEY_WRAP_PAD is used, the JSON serialization of the transfer blob
* kid = the key identifier of the KEK. For Key Vault keys, it looks like this: https://ContosoKeyVaultHSM.vault.azure.net/keys/mykek/eba63d27e4e34e028839b53fac905621
* alg = algorithm.
-* dir = Direct mode, i.e. the referenced kid is used to directly protect the ciphertext which is an accurate representation of CKM_RSA_AES_KEY_WRAP
+* dir = Direct mode; that is, the referenced kid is used to directly protect the ciphertext, which is an accurate representation of CKM_RSA_AES_KEY_WRAP
* generator = an informational field that denotes the name and version of the BYOK tool and the source HSM manufacturer and model. This information is intended for use in troubleshooting and support.

The JSON blob is stored in a file with a ".byok" extension so that the Azure PowerShell/CLI clients treat it correctly when the 'Add-AzKeyVaultKey' (PowerShell) or 'az keyvault key import' (CLI) commands are used.
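For illustration, the serialized fields listed above give the transfer blob roughly the following shape. This is a hedged sketch only: the vendor BYOK tool generates the actual blob, and every value in this PowerShell here-string is a placeholder rather than a working transfer package.

```powershell
# Illustrative only; the vendor BYOK tool produces the real ".byok" blob.
# All values below are placeholders.
$transferBlobSketch = @'
{
  "schema_version": "1.0.0",
  "header": {
    "kid": "https://ContosoKeyVaultHSM.vault.azure.net/keys/KEKforBYOK/<version>",
    "alg": "dir",
    "enc": "CKM_RSA_AES_KEY_WRAP_PAD"
  },
  "ciphertext": "<base64url-encoded wrapped target key>",
  "generator": "<BYOK tool name and version; source HSM manufacturer and model>"
}
'@
$transferBlobSketch
```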
-### Step 4: Upload key transfer blob to import HSM-key
+### Upload key transfer blob to import HSM-key
The customer transfers the Key Transfer Blob (a ".byok" file) to an online workstation and then runs an **az keyvault key import** command to import this blob as a new HSM-backed key into Key Vault.
-To import an RSA key use this command:
+To import an RSA key, use this command:
```azurecli
az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok --ops encrypt decrypt
```
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
For added assurance when you use Azure Key Vault, you can import or generate a k
Use the information in this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Azure Key Vault.

> [!NOTE]
-> This functionality is not available for Azure China 21Vianet.
->
-> This import method is available only for [supported HSMs](#supported-hsms).
+> This functionality is not available for Azure China 21Vianet.
+>
+> This import method is available only for [supported HSMs](#supported-hsms).
For more information, and for a tutorial to get started using Key Vault (including how to create a key vault for HSM-protected keys), see [What is Azure Key Vault?](../general/overview.md).
Here's an overview of the process. Specific steps to complete are described late
* In Key Vault, generate a key (referred to as a *Key Exchange Key* (KEK)). The KEK must be an RSA-HSM key that has only the `import` key operation. Only Key Vault Premium and Managed HSM support RSA-HSM keys.
* Download the KEK public key as a .pem file.
* Transfer the KEK public key to an offline computer that is connected to an on-premises HSM.
-* In the offline computer, use the BYOK tool provided by your HSM vendor to create a BYOK file.
+* In the offline computer, use the BYOK tool provided by your HSM vendor to create a BYOK file.
* The target key is encrypted with a KEK, which stays encrypted until it is transferred to the Key Vault HSM. Only the encrypted version of your key leaves the on-premises HSM.
* A KEK that's generated inside a Key Vault HSM is not exportable. HSMs enforce the rule that no clear version of a KEK exists outside a Key Vault HSM.
* The KEK must be in the same key vault where the target key will be imported.
The following table lists prerequisites for using BYOK in Azure Key Vault:
To generate and transfer your key to a Key Vault Premium or Managed HSM:
-* [Step 1: Generate a KEK](#step-1-generate-a-kek)
-* [Step 2: Download the KEK public key](#step-2-download-the-kek-public-key)
-* [Step 3: Generate and prepare your key for transfer](#step-3-generate-and-prepare-your-key-for-transfer)
-* [Step 4: Transfer your key to Azure Key Vault](#step-4-transfer-your-key-to-azure-key-vault)
+* [Step 1: Generate a KEK](#generate-a-kek)
+* [Step 2: Download the KEK public key](#download-the-kek-public-key)
+* [Step 3: Generate and prepare your key for transfer](#generate-and-prepare-your-key-for-transfer)
+* [Step 4: Transfer your key to Azure Key Vault](#transfer-your-key-to-azure-key-vault)
-### Step 1: Generate a KEK
+### Generate a KEK
A KEK is an RSA key that's generated in a Key Vault Premium or Managed HSM. The KEK is used to encrypt the key you want to import (the *target* key).
The KEK must be:
> [!NOTE]
> The KEK must have 'import' as the only allowed key operation. 'import' is mutually exclusive with all other key operations.
-Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command to create a KEK that has key operations set to `import`. Record the key identifier (`kid`) that's returned from the following command. (You will use the `kid` value in [Step 3](#step-3-generate-and-prepare-your-key-for-transfer).)
+Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command to create a KEK that has key operations set to `import`. Record the key identifier (`kid`) that's returned from the following command. (You will use the `kid` value in [Step 3](#generate-and-prepare-your-key-for-transfer).)
```azurecli
az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --vault-name ContosoKeyVaultHSM
```

or for Managed HSM:

```azurecli
az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --hsm-name ContosoKeyVaultHSM
```
-### Step 2: Download the KEK public key
+### Download the KEK public key
Use [az keyvault key download](/cli/azure/keyvault/key#az-keyvault-key-download) to download the KEK public key to a .pem file. The target key you import is encrypted by using the KEK public key.
az keyvault key download --name KEKforBYOK --hsm-name ContosoKeyVaultHSM --file
Transfer the KEKforBYOK.publickey.pem file to your offline computer. You will need this file in the next step.
-### Step 3: Generate and prepare your key for transfer
+### Generate and prepare your key for transfer
-Refer to your HSM vendor's documentation to download and install the BYOK tool. Follow instructions from your HSM vendor to generate a target key, and then create a key transfer package (a BYOK file). The BYOK tool will use the `kid` from [Step 1](#step-1-generate-a-kek) and the KEKforBYOK.publickey.pem file you downloaded in [Step 2](#step-2-download-the-kek-public-key) to generate an encrypted target key in a BYOK file.
+Refer to your HSM vendor's documentation to download and install the BYOK tool. Follow instructions from your HSM vendor to generate a target key, and then create a key transfer package (a BYOK file). The BYOK tool will use the `kid` from [Step 1](#generate-a-kek) and the KEKforBYOK.publickey.pem file you downloaded in [Step 2](#download-the-kek-public-key) to generate an encrypted target key in a BYOK file.
Transfer the BYOK file to your connected computer.
>
> **Known issue**: Importing an RSA 4K target key from Luna HSMs is only supported with firmware 7.4.0 or newer.
-### Step 4: Transfer your key to Azure Key Vault
+### Transfer your key to Azure Key Vault
To complete the key import, transfer the key transfer package (a BYOK file) from your disconnected computer to the internet-connected computer. Use the [az keyvault key import](/cli/azure/keyvault/key#az-keyvault-key-import) command to upload the BYOK file to the Key Vault HSM.

To import an RSA key, use the following command. The --kty parameter is optional and defaults to 'RSA-HSM'.

```azurecli
az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
```

For Managed HSM, use the same command with --hsm-name instead of --vault-name.
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
Title: How to generate and transfer HSM-protected keys for Azure Key Vault - Azu
description: Use this article to help you plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault. Also known as BYOK or bring your own key. - tags: azure-resource-manager Previously updated : 02/24/2021 Last updated : 01/24/2023
For added assurance, when you use Azure Key Vault, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary. This scenario is often referred to as *bring your own key*, or BYOK. Azure Key Vault uses nCipher nShield family of HSMs (FIPS 140-2 Level 2 validated) to protect your keys.
-Use the information in this topic to help you plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault.
+Use this article to help you plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault.
-This functionality is not available for Azure China 21Vianet.
+This functionality isn't available for Azure China 21Vianet.
> [!NOTE]
> For more information about Azure Key Vault, see [What is Azure Key Vault?](../general/overview.md)
This functionality is not available for Azure China 21Vianet.
More information about generating and transferring an HSM-protected key over the Internet:

* You generate the key from an offline workstation, which reduces the attack surface.
-* The key is encrypted with a Key Exchange Key (KEK), which stays encrypted until it is transferred to the Azure Key Vault HSMs. Only the encrypted version of your key leaves the original workstation.
+* The key is encrypted with a Key Exchange Key (KEK), which stays encrypted until it's transferred to the Azure Key Vault HSMs. Only the encrypted version of your key leaves the original workstation.
* The toolset sets properties on your tenant key that bind your key to the Azure Key Vault security world. So after the Azure Key Vault HSMs receive and decrypt your key, only these HSMs can use it. Your key cannot be exported. This binding is enforced by the nCipher HSMs.
* The Key Exchange Key (KEK) that is used to encrypt your key is generated inside the Azure Key Vault HSMs and is not exportable. The HSMs enforce that there can be no clear version of the KEK outside the HSMs. In addition, the toolset includes attestation from nCipher that the KEK is not exportable and was generated inside a genuine HSM that was manufactured by nCipher.
-* The toolset includes attestation from nCipher that the Azure Key Vault security world was also generated on a genuine HSM manufactured by nCipher. This attestation proves to you that Microsoft is using genuine hardware.
+* The toolset includes attestation from nCipher that the Azure Key Vault security world was also generated on a genuine HSM manufactured by nCipher. This attestation demonstrates that Microsoft is using genuine hardware.
* Microsoft uses separate KEKs and separate Security Worlds in each geographical region. This separation ensures that your key can be used only in data centers in the region in which you encrypted it. For example, a key from a European customer cannot be used in data centers in North America or Asia.

## More information about nCipher HSMs and Microsoft services
-nCipher Security, an Entrust Datacard company, is a leader in the general purpose HSM market, empowering world-leading organizations by delivering trust, integrity and control to their business critical information and applications. nCipher's cryptographic solutions secure emerging technologies ΓÇô cloud, IoT, blockchain, digital payments ΓÇô and help meet new compliance mandates, using the same proven technology that global organizations depend on today to protect against threats to their sensitive data, network communications and enterprise infrastructure. nCipher delivers trust for business critical applications, ensuring the integrity of data and putting customers in complete control ΓÇô today, tomorrow, at all times.
+nCipher Security, an Entrust Datacard company, is a leader in the general purpose HSM market, empowering world-leading organizations by delivering trust, integrity and control to their business critical information and applications. nCipher's cryptographic solutions secure emerging technologies (cloud, IoT, blockchain, digital payments) and help meet new compliance mandates, using the same proven technology that global organizations depend on today to protect against threats to their sensitive data, network communications and enterprise infrastructure. nCipher delivers trust for business critical applications, ensuring the integrity of data and putting customers in complete control: today, tomorrow, always.
Microsoft has collaborated with nCipher Security to enhance the state of the art for HSMs. These enhancements enable you to get the typical benefits of hosted services without relinquishing control over your keys. Specifically, these enhancements let Microsoft manage the HSMs so that you do not have to. As a cloud service, Azure Key Vault scales up at short notice to meet your organization's usage spikes. At the same time, your key is protected inside Microsoft's HSMs: You retain control over the key lifecycle because you generate the key and transfer it to Microsoft's HSMs.

## Implementing bring your own key (BYOK) for Azure Key Vault
-Use the following information and procedures if you will generate your own HSM-protected key and then transfer it to Azure Key VaultΓÇöthe bring your own key (BYOK) scenario.
+Use the following information and procedures if you will generate your own HSM-protected key and then transfer it to Azure Key Vault. This is known as the Bring Your Own Key (BYOK) scenario.
## Prerequisites for BYOK
See the following table for a list of prerequisites for bring your own key (BYOK
| A subscription to Azure |To create an Azure Key Vault, you need an Azure subscription: [Sign up for free trial](https://azure.microsoft.com/pricing/free-trial/) |
| The Azure Key Vault Premium service tier to support HSM-protected keys |For more information about the service tiers and capabilities for Azure Key Vault, see the [Azure Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/) website. |
| nCipher nShield HSMs, smartcards, and support software |You must have access to an nCipher Hardware Security Module and basic operational knowledge of nCipher nShield HSMs. See [nCipher nShield Hardware Security Module](https://www.arrow.com/ecs-media/8441/33982ncipher_nshield_family_brochure.pdf) for the list of compatible models, or to purchase an HSM if you do not have one. |
-| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operation system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16 MB free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.</p></blockquote><br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation.</p></blockquote><br/> |
+| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operating system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16 MB of free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.<br/><br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation. |
## Generate and transfer your key to Azure Key Vault HSM
-You will use the following five steps to generate and transfer your key to an Azure Key Vault HSM:
+You'll use the following five steps to generate and transfer your key to an Azure Key Vault HSM:
-* [Step 1: Prepare your Internet-connected workstation](#step-1-prepare-your-internet-connected-workstation)
-* [Step 2: Prepare your disconnected workstation](#step-2-prepare-your-disconnected-workstation)
-* [Step 3: Generate your key](#step-3-generate-your-key)
-* [Step 4: Prepare your key for transfer](#step-4-prepare-your-key-for-transfer)
-* [Step 5: Transfer your key to Azure Key Vault](#step-5-transfer-your-key-to-azure-key-vault)
+* [Step 1: Prepare your Internet-connected workstation](#prepare-your-internet-connected-workstation)
+* [Step 2: Prepare your disconnected workstation](#prepare-your-disconnected-workstation)
+* [Step 3: Generate your key](#generate-your-key)
+* [Step 4: Prepare your key for transfer](#prepare-your-key-for-transfer)
+* [Step 5: Transfer your key to Azure Key Vault](#transfer-your-key-to-azure-key-vault)
-## Step 1: Prepare your Internet-connected workstation
+## Prepare your Internet-connected workstation
For this first step, do the following procedures on your workstation that is connected to the Internet.
-### Step 1.1: Install Azure PowerShell
+### Install Azure PowerShell
From the Internet-connected workstation, download and install the Azure PowerShell module that includes the cmdlets to manage Azure Key Vault. For installation instructions, see [How to install and configure Azure PowerShell](/powershell/azure/).
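For example, you can install the Az module from the PowerShell Gallery. The `CurrentUser` scope shown here avoids needing an elevated session; adjust to your environment as needed.

```powershell
# Install the Az PowerShell module for the current user from the PowerShell Gallery.
Install-Module -Name Az -Repository PSGallery -Scope CurrentUser -Force
```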
-### Step 1.2: Get your Azure subscription ID
+### Get your Azure subscription ID
Start an Azure PowerShell session and sign in to your Azure account by using the following command:
In the pop-up browser window, enter your Azure account user name and password. T
```powershell
Get-AzSubscription
```
-From the output, locate the ID for the subscription you will use for Azure Key Vault. You will need this subscription ID later.
+From the output, locate the ID for the subscription you will use for Azure Key Vault. You'll need this subscription ID later.
Do not close the Azure PowerShell window.
-### Step 1.3: Download the BYOK toolset for Azure Key Vault
+### Download the BYOK toolset for Azure Key Vault
Go to the Microsoft Download Center and download the Azure Key Vault BYOK toolset for your geographic region or instance of Azure. Use the following information to identify the package name to download and its corresponding SHA-256 package hash:
The toolset includes:
Copy the package to a USB drive or other portable storage.
-## Step 2: Prepare your disconnected workstation
+## Prepare your disconnected workstation
For this second step, do the following procedures on the workstation that is not connected to a network (either the Internet or your internal network).
-### Step 2.1: Prepare the disconnected workstation with nCipher nShield HSM
+### Prepare the disconnected workstation with nCipher nShield HSM
Install the nCipher support software on a Windows computer, and then attach an nCipher nShield HSM to that computer.
-Ensure that the nCipher tools are in your path (**%nfast_home%\bin**). For example, type the following:
+Ensure that the nCipher tools are in your path (**%nfast_home%\bin**). For example, type the following:
```cmd
set PATH=%PATH%;"%nfast_home%\bin"
```
Ensure that the nCipher tools are in your path (**%nfast_home%\bin**). For examp
For more information, see the user guide included with the nShield HSM.
-### Step 2.2: Install the BYOK toolset on the disconnected workstation
+### Install the BYOK toolset on the disconnected workstation
-Copy the BYOK toolset package from the USB drive or other portable storage, and then do the following:
+Copy the BYOK toolset package from the USB drive or other portable storage, and then:
1. Extract the files from the downloaded package into any folder.
2. From that folder, run vcredist_x64.exe.
3. Follow the instructions to install the Visual C++ runtime components for Visual Studio 2013.
-## Step 3: Generate your key
+## Generate your key
For this third step, do the following procedures on the disconnected workstation. To complete this step, your HSM must be in initialization mode.
-### Step 3.1: Change the HSM mode to 'I'
+### Change the HSM mode to 'I'
If you are using nCipher nShield Edge, to change the mode:

1. Use the Mode button to highlight the required mode.
2. Within a few seconds, press and hold the Clear button for a couple of seconds.

If the mode changes, the new mode's LED stops flashing and remains lit. The Status LED might flash irregularly for a few seconds and then flashes regularly when the device is ready. Otherwise, the device remains in the current mode, with the appropriate mode LED lit.
-### Step 3.2: Create a security world
+### Create a security world
Start a command prompt and run the nCipher new-world program.
This program creates a **Security World** file at %NFAST_KMDATA%\local\world, which corresponds to the C:\ProgramData\nCipher\Key Management Data\local folder. You can use different values for the quorum, but in our example, you're prompted to enter three blank cards and pins for each one. Then, any two cards give full access to the security world. These cards become the **Administrator Card Set** for the new security world.

> [!NOTE]
-> If your HSM does not support the newer cypher suite DLf3072s256mRijndael, you can replace --cipher-suite= DLf3072s256mRijndael with --cipher-suite=DLf1024s160mRijndael
+> If your HSM does not support the newer cipher suite DLf3072s256mRijndael, you can replace `--cipher-suite=DLf3072s256mRijndael` with `--cipher-suite=DLf1024s160mRijndael`.
>
> A security world created with the new-world.exe that ships with nCipher software version 12.50 is not compatible with this BYOK procedure. There are two options available:
> 1) Downgrade the nCipher software version to 12.40.2 to create a new security world.
> 2) Contact nCipher support and request a hotfix for the 12.50 software version, which allows you to use the 12.40.2 version of new-world.exe that is compatible with this BYOK procedure.
-Then do the following:
+Then:
* Back up the world file. Secure and protect the world file, the Administrator Cards, and their pins, and make sure that no single person has access to more than one card.
-### Step 3.3: Change the HSM mode to 'O'
+### Change the HSM mode to 'O'
If you are using nCipher nShield Edge, to change the mode:

1. Use the Mode button to highlight the required mode.
2. Within a few seconds, press and hold the Clear button for a couple of seconds.

If the mode changes, the new mode's LED stops flashing and remains lit. The Status LED might flash irregularly for a few seconds and then flashes regularly when the device is ready. Otherwise, the device remains in the current mode, with the appropriate mode LED lit.
-### Step 3.4: Validate the downloaded package
+### Validate the downloaded package
This step is optional but recommended so that you can validate the following:
This script validates the signer chain up to the nShield root key. The hash of t
You're now ready to create a new key.
-### Step 3.5: Create a new key
+### Create a new key
Generate a key by using the nCipher nShield **generatekey** program.
Back up this Tokenized Key File in a safe location.
You are now ready to transfer your key to Azure Key Vault.
-## Step 4: Prepare your key for transfer
+## Prepare your key for transfer
For this fourth step, do the following procedures on the disconnected workstation.
-### Step 4.1: Create a copy of your key with reduced permissions
+### Create a copy of your key with reduced permissions
Open a new command prompt and change the current directory to the location where you unzipped the BYOK zip file. To reduce the permissions on your key, from a command prompt, run one of the following, depending on your geographic region or instance of Azure:
To reduce the permissions on your key, from a command prompt, run one of the fol
```cmd
KeyTransferRemote.exe -ModifyAcls -KeyAppName simple -KeyIdentifier contosokey -ExchangeKeyPackage BYOK-KEK-pkg-SUI-1 -NewSecurityWorldPackage BYOK-SecurityWorld-pkg-SUI-1
```
-When you run this command, replace *contosokey* with the same value you specified in **Step 3.5: Create a new key** from the [Generate your key](#step-3-generate-your-key) step.
+When you run this command, replace *contosokey* with the same value you specified in **Step 3.5: Create a new key** from the [Generate your key](#generate-your-key) step.
You are asked to plug in your security world admin cards.
You may inspect the ACLs using the following commands with the nCipher nShield uti
"%nfast_home%\bin\kmfile-dump.exe" "%NFAST_KMDATA%\local\key_xferacld_contosokey" ```
- When you run these commands, replace contosokey with the same value you specified in **Step 3.5: Create a new key** from the [Generate your key](#step-3-generate-your-key) step.
+ When you run these commands, replace contosokey with the same value you specified in **Step 3.5: Create a new key** from the [Generate your key](#generate-your-key) step.
-### Step 4.2: Encrypt your key by using Microsoft's Key Exchange Key
+### Encrypt your key by using Microsoft's Key Exchange Key
Run one of the following commands, depending on your geographic region or instance of Azure:
Run one of the following commands, depending on your geographic region or instan
When you run this command, use these instructions:
-* Replace *contosokey* with the identifier that you used to generate the key in **Step 3.5: Create a new key** from the [Generate your key](#step-3-generate-your-key) step.
-* Replace *SubscriptionID* with the ID of the Azure subscription that contains your key vault. You retrieved this value previously, in **Step 1.2: Get your Azure subscription ID** from the [Prepare your Internet-connected workstation](#step-1-prepare-your-internet-connected-workstation) step.
+* Replace *contosokey* with the identifier that you used to generate the key in **Step 3.5: Create a new key** from the [Generate your key](#generate-your-key) step.
+* Replace *SubscriptionID* with the ID of the Azure subscription that contains your key vault. You retrieved this value previously, in **Step 1.2: Get your Azure subscription ID** from the [Prepare your Internet-connected workstation](#prepare-your-internet-connected-workstation) step.
* Replace *ContosoFirstHSMKey* with a label that is used for your output file name.

When this completes successfully, it displays **Result: SUCCESS** and there is a new file in the current folder that has the following name: KeyTransferPackage-*ContosoFirstHSMkey*.byok
-### Step 4.3: Copy your key transfer package to the Internet-connected workstation
+### Copy your key transfer package to the Internet-connected workstation
Use a USB drive or other portable storage to copy the output file from the previous step (KeyTransferPackage-ContosoFirstHSMkey.byok) to your Internet-connected workstation.
-## Step 5: Transfer your key to Azure Key Vault
+## Transfer your key to Azure Key Vault
For this final step, on the Internet-connected workstation, use the [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey) cmdlet to upload the key transfer package that you copied from the disconnected workstation to the Azure Key Vault HSM:
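For example, the upload might look like the following sketch. The vault name, key name, and package path are the placeholders used earlier in this article.

```powershell
# Upload the key transfer package created on the disconnected workstation.
# -Destination 'HSM' keeps the imported key HSM-protected.
Add-AzKeyVaultKey -VaultName 'ContosoKeyVaultHSM' `
    -Name 'ContosoFirstHSMkey' `
    -KeyFilePath '.\KeyTransferPackage-ContosoFirstHSMkey.byok' `
    -Destination 'HSM'
```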
key-vault Hsm Protected Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys.md
tags: azure-resource-manager
Previously updated : 02/24/2021 Last updated : 01/24/2023
This functionality is not available for Azure China 21Vianet.
## Supported HSMs
-Transferring HSM-protected keys to Key Vault is supported via two different methods depending on the HSMs you use. Use the table below to determine which method should be used for your HSMs to generate, and then transfer your own HSM-protected keys to use with Azure Key Vault.
+Transferring HSM-protected keys to Key Vault is supported via two different methods depending on the HSMs you use. Use this table to determine which method should be used for your HSMs to generate, and then transfer your own HSM-protected keys to use with Azure Key Vault.
|Vendor Name|Vendor Type|Supported HSM models|Supported HSM-key transfer method|
|---|---|---|---|
Transferring HSM-protected keys to Key Vault is supported via two different meth
|Fortanix|Manufacturer,<br/>HSM as a Service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Use new BYOK method](hsm-protected-keys-byok.md)|
|IBM|Manufacturer|IBM 476x, CryptoExpress|[Use new BYOK method](hsm-protected-keys-byok.md)|
|Marvell|Manufacturer|All LiquidSecurity HSMs with<ul><li>Firmware version 2.0.4 or later</li><li>Firmware version 3.2 or newer</li></ul>|[Use new BYOK method](hsm-protected-keys-byok.md)|
-|[nCipher](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|Manufacturer,<br/>HSM as a Service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|**Method 1:** [nCipher BYOK](hsm-protected-keys-ncipher.md) (deprecated). This method will not be supported after <strong>June 30, 2021</strong><br/>**Method 2:** [Use new BYOK method](hsm-protected-keys-byok.md) (recommended)<br/>See Entrust row above|
+|[nCipher](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|Manufacturer,<br/>HSM as a Service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|**Method 1:** [nCipher BYOK](hsm-protected-keys-ncipher.md) (deprecated). This method will not be supported after <strong>June 30, 2021</strong><br/>**Method 2:** [Use new BYOK method](hsm-protected-keys-byok.md) (recommended)<br/>See the Entrust row. |
|Securosys SA|Manufacturer,<br/>HSM as a service|Primus HSM family, Securosys Clouds HSM|[Use new BYOK method](hsm-protected-keys-byok.md)|
|StorMagic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>Utimaco</li><li>Thales</li><li>nCipher</li></ul>See [StorMagic site for details](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|[Use new BYOK method](hsm-protected-keys-byok.md)|
|Thales|Manufacturer|<ul><li>Luna HSM 7 family with firmware version 7.3 or newer</li></ul>|[Use new BYOK method](hsm-protected-keys-byok.md)|
lab-services Class Type Ethical Hacking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-ethical-hacking.md
Title: Set up an Ethical Hacking lab with Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab using Azure Lab Services to teach ethical hacking.
+ Title: Set up an ethical hacking lab
+
+description: Learn how to set up a lab to teach ethical hacking using Azure Lab Services.
++++ Previously updated : 01/04/2022 Last updated : 01/24/2023
-# Set up a lab to teach ethical hacking class
+# Set up a lab to teach ethical hacking class by using Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-This article shows you how to set up a class that focuses on forensics side of ethical hacking. Penetration testing, a practice used by the ethical hacking community, occurs when someone attempts to gain access to the system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
+This article shows you how to set up a class that focuses on the forensics side of ethical hacking with Azure Lab Services. In an ethical hacking class, students can learn modern techniques for defending against vulnerabilities. Penetration testing, a practice that the ethical hacking community uses, occurs when someone attempts to gain access to the system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
-In an ethical hacking class, students can learn modern techniques for defending against vulnerabilities. Each student gets a Windows Server host virtual machine that has two nested virtual machines ΓÇô one virtual machine with [Metasploitable3](https://github.com/rapid7/metasploitable3) image and another machine with [Kali Linux](https://www.kali.org/) image. The Metasploitable virtual machine is used for exploiting purposes and Kali virtual machine provides access to the tools needed to execute forensic tasks.
+Each student gets a Windows Server host virtual machine (VM) that has two nested virtual machines: one VM with [Metasploitable3](https://github.com/rapid7/metasploitable3) image and another VM with the [Kali Linux](https://www.kali.org/) image. You use the Metasploitable VM for exploiting purposes. The Kali VM provides access to the tools you need to execute forensic tasks.
This article has two main sections. The first section covers how to create the lab. The second section covers how to create the template machine with nested virtualization enabled and with the tools and images needed. In this case, a Metasploitable image and a Kali Linux image on a machine that has Hyper-V enabled to host the images.
-## Lab configuration
+## Prerequisites
[!INCLUDE [must have subscription](./includes/lab-services-class-type-subscription.md)] [!INCLUDE [must have lab plan](./includes/lab-services-class-type-lab-plan.md)]
-### Lab settings
+## Lab configuration
[!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)]

Use the following settings when creating the lab.
This article has two main sections. The first section covers how to create the l
[!INCLUDE [configure template vm](./includes/lab-services-class-type-template-vm.md)]
-To configure the template VM, we'll complete the following three major tasks.
+To configure the template VM, complete the following three tasks:
+
+1. Set up the machine for nested virtualization. You enable all the appropriate Windows features, like Hyper-V, and set up the networking for the Hyper-V images to be able to communicate with each other and the internet.
-1. Set up the machine for nested virtualization. It enables all the appropriate windows features, like Hyper-V, and sets up the networking for the Hyper-V images to be able to communicate with each other and the internet.
2. Set up the [Kali](https://www.kali.org/) Linux image. Kali is a Linux distribution that includes tools for penetration testing and security auditing.
-3. Set up the Metasploitable image. For this example, the [Metasploitable3](https://github.com/rapid7/metasploitable3) image will be used. This image is created to purposely have security vulnerabilities.
-You can complete the tasks above by executing the [Lab Services Hyper-V Script](https://aka.ms/azlabs/scripts/hyperV) and [Lab Services Ethical Hacking Script](https://aka.ms/azlabs/scripts/EthicalHacking) PowerShell scripts on the template machine. Once scripts have been executed, continue to [Next steps](#next-steps).
+3. Set up the Metasploitable image. For this example, you use the [Metasploitable3](https://github.com/rapid7/metasploitable3) image. This image is created to purposely have security vulnerabilities.
+
+You can complete these tasks in either of two ways:
-If you choose to set up the template machine manually, continue reading. The rest of this article will cover the manual completion of template configuration tasks.
+- Run the following PowerShell scripts on the template machine: [Lab Services Hyper-V Script](https://aka.ms/azlabs/scripts/hyperV) and [Lab Services Ethical Hacking Script](https://aka.ms/azlabs/scripts/EthicalHacking). Once the scripts have completed, continue to the [Next steps](#next-steps).
+
+- Set up the template machine manually by completing the steps outlined below.
### Prepare template machine for nested virtualization
-Follow instructions to [enable nested virtualization](how-to-enable-nested-virtualization-template-vm.md) to prepare your template virtual machine for nested virtualization.
+Follow the instructions to [enable nested virtualization](how-to-enable-nested-virtualization-template-vm.md) to prepare your template VM for nested virtualization.
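At a high level, those instructions amount to enabling Hyper-V and creating an internal switch with NAT so the nested VMs can reach each other and the internet. The following PowerShell sketch is an outline only, not a replacement for the linked steps; the switch name matches the LabServicesSwitch used later in this article, and the 192.168.0.0/24 address range is an assumption.

```powershell
# Outline only; follow the linked article for the full, supported steps.
# Enable Hyper-V and its management tools (requires a restart).
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Create the internal switch that the nested VMs attach to.
New-VMSwitch -Name 'LabServicesSwitch' -SwitchType Internal

# Give the host an IP address on that switch, then NAT the subnet so
# nested VMs can reach the internet. The address range is an assumption.
$ifIndex = (Get-NetAdapter -Name 'vEthernet (LabServicesSwitch)').ifIndex
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex $ifIndex
New-NetNat -Name 'LabServicesNat' -InternalIPInterfaceAddressPrefix '192.168.0.0/24'
```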
+
+### Set up a nested virtual machine with Kali Linux image
-### Set up a nested virtual machine with Kali Linux Image
+Kali is a Linux distribution that includes tools for penetration testing and security auditing. To install the Kali nested VM on the template VM:
-Kali is a Linux distribution that includes tools for penetration testing and security auditing.
+1. Connect to the template VM by using remote desktop.
-1. Download image from [Offensive Security Kali Linux VM images](https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/). Remember the default username and password noted on the download page.
+1. Download the image from [Offensive Security Kali Linux VM images](https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/). Remember the default username and password noted on the download page.
1. Download the **Kali Linux VMware 64-Bit (7z)** image for VMware.
1. Extract the .7z file. If you don't already have 7-Zip, download it from [https://www.7-zip.org/download.html](https://www.7-zip.org/download.html). Remember the location of the extracted folder as you'll need it later.
-1. Convert the extracted vmdk file to a vhdx file so that you can use the vhdx file with Hyper-V. There are several tools available to convert VMware images to Hyper-V images. We'll be using the [StarWind V2V Converter](https://www.starwindsoftware.com/starwind-v2v-converter). To download, see [StarWind V2V Converter download page](https://www.starwindsoftware.com/starwind-v2v-converter#download).
+
+1. Convert the extracted vmdk file to a Hyper-V vhdx file with StarWind V2V Converter.
+ 1. Download and install [StarWind V2V Converter](https://www.starwindsoftware.com/starwind-v2v-converter#download).
   1. Start **StarWind V2V Converter**.
   1. On the **Select location of image to convert** page, choose **Local file**. Select **Next**.
   1. On the **Source image** page, navigate to and select the Kali Linux vmdk file extracted in the previous step for the **File name** setting. The file will be in the format Kali-Linux-{version}-vmware-amd64.vmdk. Select **Next**.
Kali is a Linux distribution that includes tools for penetration testing and sec
   1. On the **Select option for VHD/VHDX image format** page, choose **VHDX growable image**. Select **Next**.
   1. On the **Select destination file name** page, accept the default file name. Select **Convert**.
   1. On the **Converting** page, wait for the image to be converted. Conversion may take several minutes. Select **Finish** when the conversion is completed.

1. Create a new Hyper-V virtual machine.
   1. Open **Hyper-V Manager**.
   1. Choose **Action** -> **New** -> **Virtual Machine**.
Kali is a Linux distribution that includes tools for penetration testing and sec
   1. On the **Legacy Network Adapter** page, select **LabServicesSwitch** for the **Virtual Switch** setting, and select **OK**. LabServicesSwitch was created when preparing the template machine for Hyper-V in the **Prepare Template for Nested Virtualization** section.
1. The Kali-Linux image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is `kali` and the password is `kali`.
-### Set up a nested VM with Metasploitable Image
+### Set up a nested VM with Metasploitable image
+
+The Rapid7 Metasploitable image is an image purposely configured with security vulnerabilities. You use this image to test and find issues. The following instructions show you how to use a pre-created Metasploitable image. However, if a newer version of the Metasploitable image is needed, see [https://github.com/rapid7/metasploitable3](https://github.com/rapid7/metasploitable3).
+
+To install the Metasploitable nested VM on the template VM:
-The Rapid7 Metasploitable image is an image purposely configured with security vulnerabilities. You'll use this image to test and find issues. The following instructions show you how to use a pre-created Metasploitable image. However, if a newer version of the Metasploitable image is needed, see [https://github.com/rapid7/metasploitable3](https://github.com/rapid7/metasploitable3).
+1. Connect to the template VM by using remote desktop.
1. Download the Metasploitable image.
   1. Navigate to [https://information.rapid7.com/download-metasploitable-2017.html](https://information.rapid7.com/download-metasploitable-2017.html). Fill out the form to download the image and select the **Submit** button.
+
+ > [!NOTE]
+ > You can check for newer versions of the Metasploitable image on [https://github.com/rapid7/metasploitable3](https://github.com/rapid7/metasploitable3).
+ 2. Select the **Download Metasploitable Now** button.
- 3. When the zip file is downloaded, extract the zip file, and remember the location of the Metasploitable.vmdk file.
-1. Convert the extracted vmdk file to a vhdx file so that you can use the vhdx file with Hyper-V. There are several tools available to convert VMware images to Hyper-V images. We'll be using the [StarWind V2V Converter](https://www.starwindsoftware.com/starwind-v2v-converter) again. To download, see [StarWind V2V Converter download page](https://www.starwindsoftware.com/starwind-v2v-converter#download).
+ 3. When the download finishes, extract the zip file, and remember the location of the *Metasploitable.vmdk* file.
+
+1. Convert the extracted vmdk file to a Hyper-V vhdx file with StarWind V2V Converter.
+ 1. Download and install [StarWind V2V Converter](https://www.starwindsoftware.com/starwind-v2v-converter#download).
1. Start **StarWind V2V Converter**. 1. On the **Select location of image to convert** page, choose **Local file**. Select **Next**. 1. On the **Source image** page, navigate to and select the Metasploitable.vmdk extracted in the previous step for the **File name** setting. Select **Next**.
The Rapid7 Metasploitable image is an image purposely configured with security v
   1. On the **Select option for VHD/VHDX image format** page, choose **VHDX growable image**. Select **Next**.
   1. On the **Select destination file name** page, accept the default file name. Select **Convert**.
   1. On the **Converting** page, wait for the image to be converted. Conversion may take several minutes. Select **Finish** when the conversion is completed.

1. Create a new Hyper-V virtual machine.
   1. Open **Hyper-V Manager**.
   1. Choose **Action** -> **New** -> **Virtual Machine**.
The Rapid7 Metasploitable image is an image purposely configured with security v
   :::image type="content" source="./media/class-type-ethical-hacking/legacy-network-adapter-page.png" alt-text="Screenshot of Legacy Network adapter settings page for Hyper V VM.":::
1. The Metasploitable image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is `msfadmin` and the password is `msfadmin`.
-The template is now updated and has images needed for an ethical hacking penetration testing class, an image with tools to do the penetration testing and another image with security vulnerabilities to discover. The template image can now be [published](how-to-create-manage-template.md#publish-the-template-vm) to the class.
+The template is now updated and has the nested VM images needed for an ethical hacking penetration testing class: an image with tools to do the penetration testing, and another image with security vulnerabilities to discover. You can now [publish the template VM](how-to-create-manage-template.md#publish-the-template-vm) to the class.
## Cost
For a class of 25 students with 20 hours of scheduled class time and 10 hours of
25 students \* (20 + 10) hours \* 55 Lab Units \* 0.01 USD per hour = 412.50 USD

>[!IMPORTANT]
->Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+>This cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Conclusion
-This article walked you through the steps to create a lab for ethical hacking class. It includes steps to set up nested virtualization for creating two virtual machines inside the host virtual machine for penetrating testing.
+In this article, you went through the steps to create a lab for an ethical hacking class. The lab VM contains two nested virtual machines to practice penetration testing.
## Next steps
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
The following example shows the sample inputs and outputs:
* If your template uses [Liquid filters](https://shopify.github.io/liquid/basics/introduction/#filters), make sure that you follow the [DotLiquid and C# naming conventions](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers#filter-and-output-casing), which use *sentence casing*. For all Liquid transforms, make sure that filter names in your template also use sentence casing. Otherwise, the filters won't work.
- For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](http://dotliquidmarkup.org/TryOnline). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline).
+ For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](https://github.com/dotliquid/dotliquid/tree/master/src/DotLiquid.Website/Views/TryOnline). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](https://github.com/dotliquid/dotliquid/tree/master/src/DotLiquid.Website/Views/TryOnline).
* The `json` filter from the Shopify extension filters is currently [not implemented in DotLiquid](https://github.com/dotliquid/dotliquid/issues/384). Typically, you can use this filter to prepare text output for JSON string parsing, but in DotLiquid, you need to use the `Replace` filter instead.
The following example shows the sample inputs and outputs:
## Next steps

* [Shopify Liquid language and examples](https://shopify.github.io/liquid/basics/introduction/)
-* [DotLiquid](http://dotliquidmarkup.org/)
-* [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline)
+* [DotLiquid](https://github.com/dotliquid/dotliquid/)
+* [DotLiquid - Try online](https://github.com/dotliquid/dotliquid/tree/master/src/DotLiquid.Website/Views/TryOnline)
* [DotLiquid GitHub](https://github.com/dotliquid/dotliquid)
* [DotLiquid GitHub issues](https://github.com/dotliquid/dotliquid/issues/)
* Learn more about [maps](../logic-apps/logic-apps-enterprise-integration-maps.md)
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
+ Last updated 10/21/2021
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Make sure your code follows these tips:
* For the full notebook to run the above example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb)
-### DeepSpeed
-
-Don't use DeepSpeed's custom launcher to run distributed training with the [DeepSpeed](https://www.deepspeed.ai/) library on Azure ML. Instead, configure an MPI job to launch the training job [with MPI](https://www.deepspeed.ai/getting-started/#mpi-and-azureml-compatibility).
-
-Make sure your code follows these tips:
-
-* Your Azure ML environment contains DeepSpeed and its dependencies, Open MPI, and mpi4py.
-* Create an `MpiConfiguration` with your distribution.
- ### Environment variables from Open MPI When running MPI jobs with Open MPI images, the following environment variables are set for each process launched:
Azure ML will set the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, and `NODE_RANK
- For the full notebook to run the above example, see [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb)
+## DeepSpeed
+
+[DeepSpeed](https://www.deepspeed.ai/tutorials/azure/) is supported as a first-class citizen within Azure Machine Learning to run distributed jobs with near-linear scalability in terms of:
+
+* Increase in model size
+* Increase in number of GPUs
+
+You can enable `DeepSpeed` by using either the PyTorch distribution or MPI to run distributed training. Azure Machine Learning supports the `DeepSpeed` launcher to launch distributed training, as well as autotuning to get an optimal `ds` configuration.
+
+You can use a [curated environment](resource-curated-environments.md#azure-container-for-pytorch-acpt-preview) for an out-of-the-box environment with the latest state-of-the-art technologies, including `DeepSpeed`, `ORT`, `MSSCCL`, and `PyTorch`, for your DeepSpeed training jobs.
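+
+As a hedged illustration, the following SDK v2 sketch launches a DeepSpeed training script through the PyTorch distribution; the compute cluster, environment, and script names are placeholder assumptions, not values from this article.
+
+```python
+# Minimal sketch (Azure ML SDK v2). Cluster, environment, and script names are placeholders.
+from azure.ai.ml import MLClient, command
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+
+job = command(
+    code="./src",  # assumed to contain train.py and ds_config.json
+    command="python train.py --deepspeed ds_config.json",
+    environment="AzureML-ACPT-pytorch-1.13-py38-cuda11.7-gpu@latest",  # assumed curated environment
+    compute="gpu-cluster",  # assumed AmlCompute cluster name
+    instance_count=2,
+    distribution={"type": "PyTorch", "process_count_per_instance": 4},
+)
+
+returned_job = ml_client.jobs.create_or_update(job)
+print(returned_job.studio_url)
+```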
+ ## TensorFlow If you're using [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, such as TensorFlow 2.x's `tf.distribute.Strategy` API, you can launch the distributed job via Azure ML using `distribution` parameters or the `TensorFlowDistribution` object.
If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-e
## Next steps * [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
-* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
+* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
+ Last updated 08/11/2021
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
+ Last updated 03/01/2022
This issue can happen when the name of your custom environment uses terms reserv
*Applies to: Azure CLI & Python SDK v1* To create a new environment, you must use one of the following approaches (see [DockerSection](https://aka.ms/azureml/environment/environment-docker-section)):-- Base image
- - Provide base image name, repository from which to pull it, and credentials if needed
- - Provide a conda specification
-- Base Dockerfile
- - Provide a Dockerfile
- - Provide a conda specification
-- Docker build context
- - Provide the location of the build context (URL)
- - The build context must contain at least a Dockerfile, but may contain other files as well
+* Base image
+ * Provide base image name, repository from which to pull it, and credentials if needed
+ * Provide a conda specification
+* Base Dockerfile
+ * Provide a Dockerfile
+ * Provide a conda specification
+* Docker build context
+ * Provide the location of the build context (URL)
+ * The build context must contain at least a Dockerfile, but may contain other files as well
*Applies to: Azure CLI & Python SDK v2* To create a new environment, you must use one of the following approaches:-- Docker image
- - Provide the image URI of the image hosted in a registry such as Docker Hub or Azure Container Registry
- - [Sample here](https://aka.ms/azureml/environment/create-env-docker-image-v2)
-- Docker build context
- - Specify the directory that will serve as the build context
- - The directory should contain a Dockerfile and any other files needed to build the image
- - [Sample here](https://aka.ms/azureml/environment/create-env-build-context-v2)
-- Conda specification
- - You must specify a base Docker image for the environment; the conda environment will be built on top of the Docker image provided
- - Provide the relative path to the conda file
- - [Sample here](https://aka.ms/azureml/environment/create-env-conda-spec-v2)
+* Docker image
+ * Provide the image URI of the image hosted in a registry such as Docker Hub or Azure Container Registry
+ * [Sample here](https://aka.ms/azureml/environment/create-env-docker-image-v2)
+* Docker build context
+ * Specify the directory that will serve as the build context
+ * The directory should contain a Dockerfile and any other files needed to build the image
+ * [Sample here](https://aka.ms/azureml/environment/create-env-build-context-v2)
+* Conda specification
+ * You must specify a base Docker image for the environment; the conda environment will be built on top of the Docker image provided
+ * Provide the relative path to the conda file
+ * [Sample here](https://aka.ms/azureml/environment/create-env-conda-spec-v2)
### Missing Docker definition *Applies to: Python SDK v1*
This issue can happen when your environment definition is missing a `DockerSecti
Add a `DockerSection` to your environment definition, specifying either a base image, base dockerfile, or docker build context.
-```
+```python
from azureml.core import Environment myenv = Environment(name="myenv") # Specify docker steps as a string.
myenv.docker.base_dockerfile = dockerfile
*Applies to: Python SDK v1* You have more than one of these Docker options specified in your environment definition-- `base_image`-- `base_dockerfile`-- `build_context`-- See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
+* `base_image`
+* `base_dockerfile`
+* `build_context`
+* See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
*Applies to: Azure CLI & Python SDK v2* You have more than one of these Docker options specified in your environment definition-- `image`-- `build`-- See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
+* `image`
+* `build`
+* See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
**Affected areas (symptoms):** * Failure in registering your environment
Choose which Docker option you'd like to use to build your environment. Then set
*Applies to: Python SDK v1*
-```
+```python
from azureml.core import Environment myenv = Environment(name="myEnv") dockerfile = r'''
myenv.docker.base_image = None
*Applies to: Python SDK v1* You didn't specify one of the following options in your environment definition-- `base_image`-- `base_dockerfile`-- `build_context`-- See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
+* `base_image`
+* `base_dockerfile`
+* `build_context`
+* See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
*Applies to: Azure CLI & Python SDK v2* You didn't specify one of the following options in your environment definition-- `image`-- `build`-- See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
+* `image`
+* `build`
+* See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
**Affected areas (symptoms):** * Failure in registering your environment
Choose which Docker option you'd like to use to build your environment, then pop
*Applies to: Python SDK v1*
-```
+```python
from azureml.core import Environment myenv = Environment(name="myEnv") myenv.docker.base_image = "pytorch/pytorch:latest"
myenv.docker.base_image = "pytorch/pytorch:latest"
*Applies to: Python SDK v2*
-```
+```python
env_docker_image = Environment( image="pytorch/pytorch:latest", name="docker-image-example",
ml_client.environments.create_or_update(env_docker_image)
Add the missing username or password to your environment definition to fix the issue
-```
+```python
myEnv.docker.base_image_registry.username = "username" ``` Alternatively, provide authentication via [workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
-```
+```python
from azureml.core import Workspace ws = Workspace.from_config() ws.set_connection("connection1", "ACR", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
az ml connection create --file connection.yml --resource-group my-resource-group
If you're using workspace connections, view the connections you have set, and delete whichever one(s) you don't want to use
-```
+```python
from azureml.core import Workspace ws = Workspace.from_config() ws.list_connections()
ws.delete_connection("myConnection2")
If you've specified credentials in your environment definition, choose one set of credentials to use, and set all others to null
-```
+```python
myEnv.docker.base_image_registry.registry_identity = None ```
Specifying credentials in your environment definition is no longer supported. De
Set a workspace connection on your workspace
-```
+```python
from azureml.core import Workspace ws = Workspace.from_config() ws.set_connection("connection1", "ACR", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
If any of the above-listed properties are specified in your environment definiti
* Python SDK v1 [Environment Class](https://aka.ms/azureml/environment/environment-class-v1) ### Location type not supported/Unknown location type-- The following are accepted location types:
- - Git
- - Git URLs can be provided to AzureML, but images can't yet be built using them. Use a storage
- account until builds have Git support
- - [How to use git repository as build context](https://aka.ms/azureml/environment/git-repo-as-build-context)
- - Storage account
+<!--issueDescription-->
+**Potential causes:**
+* You specified a location type for your Docker build context that isn't supported or is unknown
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK v1*
+
+The following are accepted location types:
+* Git
+ * Git URLs can be provided to AzureML, but images can't yet be built using them. Use a storage account until builds have Git support
+* Storage account
+ * See this [storage account overview](../storage/common/storage-account-overview.md)
+ * See how to [create a storage account](../storage/common/storage-account-create.md)
+
+**Resources**
+* See [DockerBuildContext Class](/python/api/azureml-core/azureml.core.environment.dockerbuildcontext)
+* [Understand build context](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context)
### Invalid location-- The specified location of the Docker build context is invalid-- If the build context is stored in a git repository, the path of the build context must be specified as a git URL-- If the build context is stored in a storage account, the path of the build context must be specified as
- - `https://storage-account.blob.core.windows.net/container/path/`
+<!--issueDescription-->
+**Potential causes:**
+* The specified location of your Docker build context is invalid
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK v1*
+
+For scenarios in which you're storing your Docker build context in a storage account
+* The path of the build context must be specified as `https://<storage-account>.blob.core.windows.net/<container>/<path>`
+* Ensure that the location you provided is a valid URL
+* Ensure that you've specified a container and a path
+
+**Resources**
+* See [DockerBuildContext Class](/python/api/azureml-core/azureml.core.environment.dockerbuildcontext)
+* [Python SDK/Azure CLI v2 sample](https://aka.ms/azureml/environment/create-env-build-context-v2)
+* [Understand build context](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context)
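+
+For comparison, the v2 sample linked above registers an environment from a local build context directory. A minimal sketch of that approach follows; the path and names are placeholders, and the directory is assumed to contain a Dockerfile.
+
+```python
+# Minimal sketch (Azure ML SDK v2); path and names are placeholders.
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import BuildContext, Environment
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+
+env = Environment(
+    name="env-from-build-context",
+    build=BuildContext(path="./docker-context"),  # directory must contain a Dockerfile
+)
+ml_client.environments.create_or_update(env)
+```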
### *Base image issues* ### Base image is deprecated-- The following base images are deprecated:
- - `azureml/base`
- - `azureml/base-gpu`
- - `azureml/base-lite`
- - `azureml/intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04`
- - `azureml/intelmpi2018.3-cuda9.0-cudnn7-ubuntu16.04`
- - `azureml/intelmpi2018.3-ubuntu16.04`
- - `azureml/o16n-base/python-slim`
- - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu16.04`
- - `azureml/openmpi3.1.2-ubuntu16.04`
- - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04`
- - `azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04`
- - `azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04`
- - `azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04`
-- AzureML can't provide troubleshooting support for failed builds with deprecated images. -- Deprecated images are also at risk for vulnerabilities since they're no longer updated or maintained.
-It's best to use newer, non-deprecated versions.
+<!--issueDescription-->
+**Potential causes:**
+* You used a deprecated base image
+ * AzureML can't provide troubleshooting support for failed builds with deprecated images
+ * These images aren't updated or maintained, so they're at risk of vulnerabilities
+
+The following base images are deprecated:
+* `azureml/base`
+* `azureml/base-gpu`
+* `azureml/base-lite`
+* `azureml/intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04`
+* `azureml/intelmpi2018.3-cuda9.0-cudnn7-ubuntu16.04`
+* `azureml/intelmpi2018.3-ubuntu16.04`
+* `azureml/o16n-base/python-slim`
+* `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu16.04`
+* `azureml/openmpi3.1.2-ubuntu16.04`
+* `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04`
+* `azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04`
+* `azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04`
+* `azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04`
+* `azureml/openmpi3.1.2-ubuntu18.04`
+* `azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04`
+* `azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu18.04`
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Upgrade your base image to the latest version of a supported image
+* See available [base images](https://github.com/Azure/AzureML-Containers/tree/master/base)
### No tag or digest-- For the environment to be reproducible, one of the following must be included on a provided base image:
- - Version tag
- - Digest
-- See [image with immutable identifier](https://aka.ms/azureml/environment/pull-image-by-digest)
+<!--issueDescription-->
+**Potential causes:**
+* You didn't include a version tag or a digest on your specified base image
+* Without one of these, the environment isn't reproducible
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Include at least one of the following on your specified base image
+* Version tag
+* Digest
+* See [image with immutable identifier](https://aka.ms/azureml/environment/pull-image-by-digest)
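+
+For example, in SDK v1 you can pin the base image by tag or by digest; the registry, repository, and digest below are hypothetical placeholders.
+
+```python
+from azureml.core import Environment
+
+myenv = Environment(name="myenv")
+# Pin by version tag...
+myenv.docker.base_image = "myregistry.azurecr.io/training:1.2.0"
+# ...or pin by digest for an immutable reference.
+myenv.docker.base_image = "myregistry.azurecr.io/training@sha256:<digest>"
+```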
### *Environment variable issues* ### Misplaced runtime variables-- An environment definition shouldn't contain runtime variables-- Use the `environment_variables` attribute on the [RunConfiguration object](https://aka.ms/azureml/environment/environment-variables-on-run-config) instead
+<!--issueDescription-->
+**Potential causes:**
+* You specified runtime variables in your environment definition
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK v1*
+
+Use the `environment_variables` attribute on the [RunConfiguration object](https://aka.ms/azureml/environment/environment-variables-on-run-config) instead
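+
+As a brief sketch, assuming a hypothetical variable name and training script, runtime variables belong on the run configuration rather than in the environment definition:
+
+```python
+from azureml.core import Environment, ScriptRunConfig
+
+myenv = Environment(name="myenv")
+src = ScriptRunConfig(source_directory=".", script="train.py", environment=myenv)
+# Set runtime variables on the run configuration, not in the environment definition.
+src.run_config.environment_variables["MY_RUNTIME_SETTING"] = "value"
+```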
### *Python issues* ### Python section missing
-*V1*
--- An environment definition must have a Python section-- Conda dependencies are specified in this section, and Python (along with its version) should be one of them
-```json
-"python": {
- "baseCondaEnvironment": null,
- "condaDependencies": {
- "channels": [
- "anaconda",
- "conda-forge"
- ],
- "dependencies": [
- "python=3.8"
- ],
- },
- "condaDependenciesFile": null,
- "interpreterPath": "python",
- "userManagedDependencies": false
-}
-```
-- See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section)
+<!--issueDescription-->
+**Potential causes:**
+* Your environment definition doesn't have a Python section
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK v1*
+
+Populate the Python section of your environment definition
+* See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section)
### Python version missing
-*V1*
+<!--issueDescription-->
+**Potential causes:**
+* You haven't specified a Python version in your environment definition
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
-- A Python version must be specified in the environment definition -- A Python version can be added by adding Python as a conda package and specifying the version:
+*Applies to: Python SDK v1*
+
+Add Python as a conda package and specify the version
```python from azureml.core.environment import CondaDependencies
myenv = Environment(name="myenv")
conda_dep = CondaDependencies() conda_dep.add_conda_package("python==3.8") ```-- See [Add conda package](https://aka.ms/azureml/environment/add-conda-package-v1)+
+*Applies to: all scenarios*
+
+If you're using a yaml for your conda specification, include Python as a dependency
+
+```yaml
+name: project_environment
+dependencies:
+ - python=3.8
+ - pip:
+ - azureml-defaults
+channels:
+ - anaconda
+```
+
+**Resources**
+* [Add conda package v1](https://aka.ms/azureml/environment/add-conda-package-v1)
### Multiple Python versions - Only one Python version can be specified in the environment definition
This issue can happen by failing to access a workspace's associated Azure Contai
Update the workspace image build compute property using SDK:
-```
+```python
from azureml.core import Workspace ws = Workspace.from_config() ws.update(image_build_compute = 'mycomputecluster')
This issue can happen when one or more conda packages listed in your specificati
Specify channels in your conda specification:
-```
+```yaml
channels: - conda-forge - anaconda
This issue can happen when a Python module listed in your conda specification do
* If you haven't listed a specific Python version in your conda specification, make sure to list a specific version that's compatible with your module otherwise a default may be used that isn't compatible Pin a Python version that's compatible with the pip module you're using:
-```
+```yaml
channels: - conda-forge - anaconda
This issue can happen when there's no package found that matches the version you
How to list channels in a conda yaml specification:
-```
+```yaml
channels:
- - conda-forge
- - anaconda
+ - conda-forge
+ - anaconda
dependencies:
- - python = 3.8
- - tensorflow = 2.8
+ - python = 3.8
+ - tensorflow = 2.8
Name: my_environment ```
Provide authentication via workspace connections
*Applies to: Python SDK v1*
-```
+```python
from azureml.core import Workspace ws = Workspace.from_config() ws.set_connection("connection1", "PythonFeed", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
If your container registry is behind a virtual network or is using a private end
* After you put the container registry behind a virtual network, run the [Azure Resource Manager template](https://aka.ms/azureml/environment/secure-resources-using-vnet) so the workspace can communicate with the container registry instance If you aren't using a virtual network, or if you've configured it correctly, test that your credentials are correct for your ACR by attempting a simple local build
-* Get credentials for your workspace ACR from the Azure Portal
+* Get credentials for your workspace ACR from the Azure portal
* Log in to your ACR using `docker login <myregistry.azurecr.io> -u "username" -p "password"` * For an image "helloworld", test pushing to your ACR by running `docker push helloworld` * See [Quickstart: Build and run a container image using Azure Container Registry Tasks](../container-registry/container-registry-quickstart-task-cli.md)
machine-learning How To Troubleshoot Protobuf Descriptor Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md
+ Last updated 11/04/2022
machine-learning How To Troubleshoot Serialization Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-serialization-error.md
+ Last updated 11/04/2022
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
The pipeline requires the following parameters to be configured:
| Parameter | Description | Sample value | | | -|- | | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
-| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2022-10-01` | `2022-10-01` |
| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` | | `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you're using for executing the job has access to the underlying location. Alternatively, if you're using Data Stores, ensure that the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
-| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. Not other type of URIs is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. You can use the default Azure Machine Learning data store, named `workspaceblobstore`. | `azureml://datastores/workspaceblobstore/paths/batch/predictions.csv` |
# [Using a Service Principal](#tab/sp)
The pipeline requires the following parameters to be configured:
| `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` | | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` | | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
-| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2022-10-01` | `2022-10-01` |
| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` | | `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you're using for executing the job has access to the underlying location. Alternatively, if you're using Data Stores, ensure that the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
-| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. Not other type of URIs is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. You can use the default Azure Machine Learning data store, named `workspaceblobstore`. | `azureml://datastores/workspaceblobstore/paths/batch/predictions.csv` |
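+
+As a rough, hedged sketch of what the pipeline's Web Activity does with these parameters, the following Python code acquires a service principal token and starts a batch job. The placeholder values mirror the table above; the request body schema is an assumption that depends on your endpoint and API version.
+
+```python
+# Hedged sketch only; verify the body schema for your endpoint's API version.
+import requests
+from azure.identity import ClientSecretCredential
+
+credential = ClientSecretCredential(
+    tenant_id="<tenant_id>", client_id="<client_id>", client_secret="<client_secret>"
+)
+token = credential.get_token("https://ml.azure.com/.default").token
+
+endpoint_uri = "https://<endpoint_name>.<region>.inference.ml.azure.com/jobs"
+response = requests.post(
+    f"{endpoint_uri}?api-version=2022-10-01",
+    headers={"Authorization": f"Bearer {token}"},
+    json={},  # input/output URIs go here; the exact schema is an assumption
+)
+response.raise_for_status()
+print(response.json())
+```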
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
--+++ Last updated 09/09/2022
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-register-dataset.md
-+ Last updated 09/28/2022
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
Last updated 03/31/2022-+ # CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
Last updated 03/31/2022-+ # CLI (v2) model YAML schema
machine-learning Samples Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-designer.md
-+ Last updated 10/21/2021
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
The [`PipelineEndpoint`](/python/api/azureml-pipeline-core/azureml.pipeline.core
+ **New features** + Azure Machine Learning now provides first class support for popular DNN framework Chainer. Using [`Chainer`](/python/api/azureml-train-core/azureml.train.dnn.chainer) class users can easily train and deploy Chainer models.
- + Learn how to [run distributed training with ChainerMN](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/chainer/distributed-chainer/distributed-chainer.ipynb)
+ Learn how to [run hyperparameter tuning with Chainer using HyperDrive](https://github.com/Azure/MachineLearningNotebooks/blob/b881f78e4658b4e102a72b78dbd2129c24506980/how-to-use-azureml/ml-frameworks/chainer/deployment/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb) + Azure Machine Learning Pipelines added ability to trigger a Pipeline run based on datastore modifications. The pipeline [schedule notebook](https://aka.ms/pl-schedule) is updated to showcase this feature.
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-monitoring-best-practices.md
Previously updated : 06/20/2022 Last updated : 01/27/2023
-# Best practices for monitoring Azure Database for MySQL - Single server
+# Best practices for monitoring Azure Database for MySQL
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
mysql Concept Perf Benchmark Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-perf-benchmark-best-practices.md
+
+ Title: Performance benchmarking considerations and best practices - Azure Database for MySQL
+description: This article describes some considerations and best practices to apply when conducting performance benchmarks on Azure Database for MySQL servers.
+++++ Last updated : 01/27/2023++
+# Best practices for benchmarking the performance of Azure Database for MySQL servers
+++
+Performance is a hallmark of any application, and it's vital to define a clear strategy for analyzing and assessing how a database performs when handling an application's variable workload requirements.
+
+This article provides considerations and best practices for running performance benchmarks against Azure Database for MySQL servers.
+
+## Performance testing
+
+Benchmarking the performance of relational database systems synthetically may at first seem like a trivial task. After all, it's relatively easy to assess the performance of individual queries manually and even to launch simple synthetic tests using one of the available benchmarking tools. These types of tests take little time and quickly produce easy-to-understand results.
+
+However, benchmarking the performance of real-world production systems requires a lot of additional effort. It's not easy to design, implement, and run tests that are truly representative of production workloads. It's even more difficult to make decisions about production data stacks based on the results of a series of benchmarks that are performed in an isolated environment.
+
+## Performance testing methodologies
+
+### Synthetic benchmarks
+
+Synthetic testing is designed to put stress on a database system using artificial workload samples that produce repeatable results in a database environment. This allows customers to perform comparisons between multiple environments to gauge the right database resource for their production deployments.
+
+There are several benefits associated with using synthetic benchmarks. For example, they:
+
+- Are predictable, repeatable, and allow for selective testing (e.g., write-only tests, read-only tests, a mix of write and read tests, and targeted tests against a table).
+- Provide overall results that can be represented using simple metrics (e.g., "queries per second", "transactions per second", etc.).
+- Don't require application or environment-specific knowledge to build and run.
+- Can be performed quickly and with little to no preparation.
+
+However, there are also associated drawbacks, in that:
+
+- Artificial workload samples aren't representative of real-world application traffic.
+- Results can't be used to accurately predict the performance of production workloads.
+- They may not expose product-specific performance characteristics when used to test different database products.
+- It's easy to perform the tests incorrectly and produce results that are even less representative.
+
+Synthetic tests are handy for quick comparisons between products. You can also use them to implement continuous performance monitoring mechanisms. For example, you may run a test suite every weekend to validate the baseline performance of your database system, detect anomalies, and predict long-term performance patterns (e.g., query latency degradation as a result of data growth).
+
+### Real-world benchmarks
+
+With real-world testing, the database is presented with workload samples that closely resemble production traffic. You can achieve this directly, by replaying a log of production queries and measuring database performance. You can also achieve this indirectly, by running the test at the application level and measuring application performance on a given database server.
+
+There are several benefits associated with using real-world benchmarks, in that they:
+
+- Provide an accurate view of system performance in real production conditions.
+- May reveal application or database-specific characteristics that simplified synthetic tests wouldn't.
+- Help with capacity planning related to application growth.
+
+There are also certain disadvantages, in that real-world benchmarks:
+
+- Are difficult to design and run.
+- Must be maintained to ensure relevancy as the application evolves.
+- Provide results that are meaningful only in the context of a given application.
+
+When you're preparing for a major change to your environment, e.g., when deploying a new database product, it's strongly recommended to use real-world tests. In such a situation, a comprehensive benchmark run using actual production workload will be of significant help. It will not only provide accurate results you can trust, but also remove or at least greatly reduce the number of "unknowns" about your system.
+
+## Choosing the "right" test methodology
+
+The "right" test methodology for your purposes depends entirely on the objective of your testing.
+
+If you're looking to quickly compare different database products using artificial data and workload samples, you can safely use an existing benchmark program that will generate data and run the test for you.
+
+To accurately assess the performance of an actual application that you intend to run on a new database product, you should perform real-world benchmark tests. Each application has a unique set of requirements and performance characteristics, and it's strongly suggested that you include real-world benchmark testing in all performance evaluations.
+
+For guidelines on preparing and running synthetic and real-world benchmarks, see the following sections later in this article:
+
+- Preparing and running synthetic tests
+- Preparing and running real-world tests
+
+## Performance testing best practices
+
+### Server-specific recommendations
+
+#### Server sizing
+
+When launching Azure Database for MySQL instances to perform benchmarking, use an Azure Database for MySQL instance tier, SKU, and instance count that matches your current database environment.
+
+For example:
+
+- If your current server has eight CPU cores and 64 GB of memory, it's best to choose an instance based on the Standard_E8ds_v4 SKU.
+- If your current database environment uses Read Replicas, use Azure Database for MySQL read replicas.
+
+Depending on the results of your benchmark testing, you may decide to use different instance sizes and counts in production. However, it's still a good practice to ensure that the initial specifications of test instances are close to your current server specifications to provide a more accurate, "apples-to-apples" comparison.
+
+#### Server configuration
+
+If the application/benchmark requires that certain database features be enabled, then prior to running the benchmark test, adjust the server parameters accordingly. For example, you may need to:
+
+- Set a non-default server time zone.
+- Set a custom "max_connections" parameter if the default value isn't sufficient.
+- Configure the thread pool if your Azure Database for MySQL flexible server is running version 8.0.
+- Enable Slow Query Logs if you expect to use them in production so you can analyze any bottleneck queries.
+
+Other parameters, such as those related to the size of various database buffers and caches, are already pre-tuned in Azure Database for MySQL, and you can initially leave them set at their default values. While you can modify them, it's best to avoid making server parameter changes unless your performance benchmarks show that a given change does in fact improve performance.
+
+When performing tests comparing Azure Database for MySQL to other database products, be sure to enable all features that you expect to use in production on your test databases. For example, if you don't enable zone redundant HA, backups, and Read Replicas in your test environment, then your results may not accurately reflect real-world performance.
+
+### Client-specific recommendations
+
+All performance benchmarks involve the use of a client, so regardless of your chosen benchmarking methodology, be sure to consider the following client-side recommendations.
+
+- Make sure client instances exist in the same Azure Virtual Network (VNet) as the Azure Database for MySQL instance you are testing. For latency-sensitive applications, it's a good practice to place client instances in the same Availability Zone (AZ) as the database server.
+- If a production application is expected to run on multiple instances (e.g., an app server fleet behind a Load Balancer), it's a good practice to use multiple client instances when performing the benchmark.
+- Ensure that all client instances have adequate compute, memory, I/O, and network capacity to handle the benchmark. In other words, the clients must be able to produce requests faster than the database engine can handle them. All operating systems provide diagnostic tools (such as "top", "htop", "dstat", or "iostat" on Linux) that can help you diagnose resource utilization on client instances. It's strongly recommended that you leverage these tools and ensure that all client instances always have spare CPU, memory, network, and IO capacity while the benchmark is running.
+
+Note that even with a very large SKU, a single client instance may not always be able to generate requests quickly enough to saturate the database. Depending on the test configuration, Azure Database for MySQL can be capable of handling hundreds of thousands of read/write requests per second, which may be more than a single client can accommodate. To avoid client-side contention during heavy performance tests, it's therefore a common practice to run a benchmark from multiple client instances in parallel.
+
+> [!IMPORTANT]
+> If you're benchmarking your application using a traffic generator script or third-party tool (such as Apache Benchmark, Apache JMeter, or Siege), you should also evaluate the instance on which the tool is running using the recommendations called out previously.
+
+## Preparing and running synthetic tests
+
+Synthetic benchmarking tools such as sysbench are easy to install and run, but they typically require a certain degree of configuration and tuning before any given benchmark can achieve optimal results.
+
+### Table count and size
+
+The number and size of tables generated prior to benchmarking should be realistically large. For example, tests conducted on a single table with 100,000 rows are unlikely to yield useful results, because such a data set is likely smaller than virtually any real-world database. For comparison, a benchmark using several tables (e.g., 10-25) with 5 million rows each might be a more realistic representation of a real-world workload.
+
+### Test mode
+
+With most benchmark tools (including the popular sysbench), you can define the type of workload that you want to run against the server. For example, the tool can generate:
+
+- Read-only queries with identical syntax but different parameters.
+- Read-only queries of different types (point selects, range selects, selects with sorts, etc.).
+- Write-only statements that modify individual rows or ranges of rows.
+- A mix of read/write statements.
+
+You can use read-only or write-only workloads if you'd like to test database performance and scalability in these specific scenarios. However, a representative benchmark should typically include a good mix of read/write statements, because this is the type of workload most OLTP databases have to handle.
+
+### Concurrency level
+
+Concurrency level is the number of threads simultaneously executing operations against the database. Most benchmark tools use a single thread by default, which isn't representative of real-world database environments, as databases are rarely used by a single client at a time.
+
+To test the theoretical peak performance of a database, use the following process:
+
+1. Run multiple tests using a different thread count for each test. For example, start with 32 threads, and then increase the thread count for each subsequent test (64, 128, 256, and so on).
+2. Continue to increase the thread count until database performance stabilizes - this is your peak theoretical performance.
+3. When you determine that database performance stops increasing at a given concurrency level, you can still attempt to increase the thread count a couple more times, which will show whether performance remains stable or begins to degrade.
+
+For more information, see the blog post [Benchmarking Azure Database for MySQL – Flexible Server using Sysbench](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/benchmarking-azure-database-for-mysql-flexible-server-using/ba-p/3108799).
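+
+As a simplified sketch of the ramp procedure above, the following Python code measures throughput at increasing thread counts. The connection details and the test query are placeholders, and a production-grade benchmark would use a dedicated tool such as sysbench.
+
+```python
+# Simplified sketch; host, credentials, and the query are placeholders.
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+import mysql.connector  # assumes the mysql-connector-python package
+
+def run_queries(seconds: float) -> int:
+    """Run point selects for `seconds` and return how many completed."""
+    conn = mysql.connector.connect(
+        host="<server>.mysql.database.azure.com", user="<user>",
+        password="<password>", database="<db>",
+    )
+    cursor = conn.cursor()
+    count, deadline = 0, time.time() + seconds
+    while time.time() < deadline:
+        cursor.execute("SELECT 1")  # placeholder for a representative query
+        cursor.fetchall()
+        count += 1
+    conn.close()
+    return count
+
+for threads in (32, 64, 128, 256):
+    with ThreadPoolExecutor(max_workers=threads) as pool:
+        totals = pool.map(run_queries, [30.0] * threads)
+    print(f"{threads} threads: {sum(totals) / 30.0:.0f} queries/sec")
+```
+
+Each worker holds its own connection, mirroring how real application clients maintain connection pools rather than sharing a single session.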
+
+## Preparing and running real-world tests
+
+Every application is unique in terms of data characteristics and performance requirements. As a result, it's somewhat difficult to come up with a single, universal list of steps that would be sufficient to prepare and run a representative, real-world benchmark in an arbitrary database environment.
+
+The ideas presented in this section are intended to make performance testing projects a little easier.
+
+### Preparing test data
+
+Before conducting performance benchmarks against your Azure Database for MySQL server, be sure that the server is populated with a representative sample of your production data set.
+
+Whenever possible, use a full copy of the production set. When this isn't possible, use the following suggestions to help you determine which portions of data you should always include and which data you can leave out.
+
+- The test server needs to include all objects (i.e., schemas, tables, functions, and procedures) that are directly used by the benchmark. Each table should be fully populated, i.e., it should contain all the rows it contains in production. If tables aren't fully populated (e.g., they only contain a small sample of the row set), benchmark results won't be representative.
+- Exclude tables that are used by production applications but that aren't part of continuous operational traffic. For example, if a database contains a live, operational data set as well as historical data used for analytics, the historical data may not be required to run benchmarks.
+- Populate all tables that you copy to the test server with real production data rather than artificial, programmatically generated samples.
+
+### Designing application benchmarks
+
+The high-level process for performing application benchmarks is as follows:
+
+1. Create an Azure Database for MySQL server and populate it with a copy of your production data.
+2. Deploy a copy of the application in Azure.
+3. Configure the application to use the Azure Database for MySQL server.
+4. Run load tests against the application and assess the results.
+
+This approach is primarily useful when you can easily deploy a copy of your application in Azure. It allows you to conduct performance assessment in the most thorough and accurate way, but there are still certain recommendations to keep in mind.
+
+- The tool used to generate application traffic must be able to generate a mix of requests representative of your production workload. For example, don't test by repeatedly accessing the same application URL, as this likely isn't representative of how your clients will use the application in the real world.
+- The pool of client and application instances must be powerful enough to generate requests, handle them, and receive responses from the database without introducing any bottlenecks.
+- Concurrency level (the number of parallel requests generated by the benchmark tool) should match or slightly exceed the expected peak concurrency level observed in your application.
+
+### Designing database benchmarks
+
+If you can't easily deploy a copy of your application in Azure, you'll need to perform the benchmark by running SQL statements directly against the database. To accomplish this, use the following high-level procedure:
+
+1. Identify the SQL statements that most commonly appear in your production workload.
+2. Based on the information gathered in the first step, prepare a large sample of SQL statements to test.
+3. Create an Azure Database for MySQL node and populate it with a copy of your production data.
+4. Launch Azure virtual machine (VM) client instances.
+5. From the VMs, run the SQL workload sample against your Azure Database for MySQL server and assess the results.
+
+There are two main approaches to generating the test payload (SQL statement samples):
+
+- Observe/record the SQL traffic occurring in your current database, then generate SQL samples based on those observations - for example, by using a combination of audit logs and slow query logging in Azure Database for MySQL.
+- Use actual query logs as the payload. Third-party tools such as "Percona Playback" can generate multi-threaded workloads based on MySQL Slow Query Logs.
+
+If you decide to generate a SQL sample manually, be sure that the sample contains:
+
+- **A large enough number of unique statements**.
+
+  Example: if you determine that the production workload uses 15 main types of statements, it isn't enough for the sample to contain a total of 15 statements (one per type). For such a small sample, the database would easily cache the required data in memory, making the benchmark non-representative. Instead, provide a large query sample for each statement type and use the following additional recommendations.
+
+- **Statements of different types in the right proportions.**
+
+ Example: if you determine that your production workload uses 12 types of statements, it is likely that some types of statements appear more often than others. Your sample should reflect these proportions: if query A appears 10 times more often than query B in production workload, it should also appear 10 times more often in your sample.
+
+- **Query parameters that are realistically randomized.**
+
+ If you followed earlier recommendations and your query sample contains groups of queries of the same type/syntax, parameters of such queries should be randomized. If the sample contains one million queries of the same type and they're all identical (including parameters in WHERE conditions), the required data will easily be cached in database memory, making the benchmark non-representative.
+
+- **A statement execution order that is realistically randomized.**
+
+  If you follow the previous recommendations and your test payload contains many queries of different types, you should execute these queries in a realistic order. For example, the sample may contain 10 million SELECTs and 1 million UPDATEs. In such a case, executing all SELECTs before all UPDATEs may not be the best choice, as this is likely not how your application executes queries in the real world. More likely, the application interleaves SELECTs and UPDATEs, and your test should try to simulate that.
+
+When the query sample is ready, run it against the server by using a command-line MySQL client or a tool such as mysqlslap.
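+
+Alternatively, the following minimal Python sketch builds and replays such a sample; the statement templates, weights, and ID ranges are hypothetical, and a real payload would come from your recorded production traffic.
+
+```python
+# Minimal sketch; templates, weights, and ranges are hypothetical placeholders.
+import random
+
+import mysql.connector  # assumes the mysql-connector-python package
+
+STATEMENTS = [
+    # (template, relative weight) - weights mirror observed production proportions
+    ("SELECT * FROM orders WHERE id = %s", 10),
+    ("UPDATE orders SET status = 'shipped' WHERE id = %s", 1),
+]
+
+# Realistic proportions, randomized parameters, and a shuffled execution order.
+sample = []
+for template, weight in STATEMENTS:
+    sample += [(template, (random.randint(1, 5_000_000),)) for _ in range(weight * 1000)]
+random.shuffle(sample)
+
+conn = mysql.connector.connect(
+    host="<server>.mysql.database.azure.com", user="<user>",
+    password="<password>", database="<db>",
+)
+cursor = conn.cursor()
+for template, params in sample:
+    cursor.execute(template, params)
+    if cursor.with_rows:
+        cursor.fetchall()
+    else:
+        conn.commit()
+conn.close()
+```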
+
+## Running tests
+
+Regardless of whether you're running a synthetic benchmark or a real-world application performance test, there are several rules of thumb to follow to help ensure that you achieve more representative results.
+
+### Run tests against multiple instance types
+
+Assume that you decide to run benchmarks against a server based on the Standard_E8ds_v4 SKU and find that the application/query performance already meets your requirements. It's recommended also to run tests against both smaller and larger instance types, which provides two benefits:
+
+- Testing with smaller instance types may still yield good performance results and reveal potential cost saving opportunities.
+- Testing with larger instance types may provide ideas or insight about future scalability options.
+
+### Measure both sustained and peak performance
+
+The test strategy you choose should provide you with answers to whether the database will provide adequate:
+
+- Sustained performance - Will it perform as expected under the normal workload, when user traffic is smooth and well within expected levels?
+- Peak performance - Will it ensure application responsiveness during traffic spikes?
+
+Consider the following guidelines:
+
+- Ensure that test runs are long enough to assess database performance in a stable state. For example, a complex test that only lasts for 10 minutes will likely produce inaccurate results, as the database caches and buffers may not be able to warm up in such a short time.
+- Benchmarks can be a lot more meaningful and informative if the workload levels vary throughout the test. For example, if your application typically receives traffic from 64 simultaneous clients, start the benchmark with 64 clients. Then, while the test is still running, add 64 additional clients to determine how the server behaves during a simulated traffic spike.
+
+### Include blackout/brownout tests in the benchmark procedure
+
+Sustained server performance is a particularly important metric, likely to become the main point of focus during your tests. For mission-critical applications however, performance testing shouldn't stop at measuring server behavior in steady state.
+
+Consider including the following scenarios in your tests.
+
+- "Blackout" tests, which are designed to determine how the database behaves during a reboot or a crash. Azure Database for MySQL introduces significant improvements around crash recovery times, and reboot/crash tests are instrumental in understanding how Azure Database for MySQL contributes to reducing your application downtime in such scenarios.
+- "Brownout" tests, which are designed to gauge how quickly a database achieves nominal performance levels after a reboot or crash. Databases often need time to achieve optimal performance, and Azure Database for MySQL introduces improvements in this area as well.
+
+In the event of stability issues affecting your database, any information gathered during the performance benchmarks will help identify bottlenecks or further tune the application to cater to the workload needs.
+
+## Next steps
+
+- [Best practices for optimal performance of Azure Database for MySQL servers](concept-performance-best-practices.md)
+- [Best practices for server operations using Azure Database for MySQL](concept-operation-excellence-best-practices.md)
+- [Best practices for monitoring your Azure Database for MySQL](concept-monitoring-best-practices.md)
+- [Get started with Azure Database for MySQL](quickstart-create-mysql-server-database-using-azure-portal.md)
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-performance-best-practices.md
Last updated 07/22/2022
-# Best practices for optimal performance of your Azure Database for MySQL server
+# Best practices for optimal performance of Azure Database for MySQL servers
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
network-watcher Connection Monitor Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-powershell.md
Select-AzSubscription -SubscriptionId <your-subscription>
#Select region $nw = "NetworkWatcher_centraluseuap" #Declare endpoints like Azure VM below. You can also give VNET, Subnet, Log Analytics workspace
-$sourcevmid1 = New-AzNetworkWatcherConnectionMonitorEndpointObject -Name MyAzureVm -ResourceID /subscriptions/<your-subscription>/resourceGroups/<your resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vm-name>
+$sourcevmid1 = New-AzNetworkWatcherConnectionMonitorEndpointObject -AzureVM -Name MyAzureVm -ResourceID /subscriptions/<your-subscription>/resourceGroups/<your resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vm-name>
#Declare endpoints like URL, IPs
-$bingEndpoint = New-AzNetworkWatcherConnectionMonitorEndpointObject -name Bing -Address www.bing.com # Destination URL
+$bingEndpoint = New-AzNetworkWatcherConnectionMonitorEndpointObject -ExternalAddress -Name Bing -Address www.bing.com # Destination URL
#Create test configuration. Choose protocol and parameters. Sample configs below. $IcmpProtocolConfiguration = New-AzNetworkWatcherConnectionMonitorProtocolConfigurationObject -IcmpProtocol
openshift Howto Use Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-key-vault-secrets.md
Title: Use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift
+ Title: Use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift
description: This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
export AZ_TENANT_ID=$(az account show -o tsv --query tenantId)
``` helm install -n k8s-secrets-store-csi csi-secrets-store \ secrets-store-csi-driver/secrets-store-csi-driver \
- --version v1.0.1 \
+ --version v1.3.1 \
--set "linux.providersDir=/var/run/secrets-store-csi-providers" ``` Optionally, you can enable autorotation of secrets by adding the following parameters to the command above:
export AZ_TENANT_ID=$(az account show -o tsv --query tenantId)
csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \ --set linux.privileged=true --set secrets-store-csi-driver.install=false \ --set "linux.providersDir=/var/run/secrets-store-csi-providers" \
- --version=v1.0.1
+ --version=v1.4.1
``` 1. Set SecurityContextConstraints to allow the CSI driver to run:
Uninstall the Key Vault Provider and the CSI Driver.
``` helm uninstall -n k8s-secrets-store-csi csi-secrets-store
+ oc delete project k8s-secrets-store-csi
``` 1. Delete the SecurityContextConstraints:
Uninstall the Key Vault Provider and the CSI Driver.
``` oc adm policy remove-scc-from-user privileged \ system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver
- ```
+ ```
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
We currently support the following named modem configurations.
|--|--|--| | Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA AQUA's 15-Mbps direct broadcast service | | Aqua Direct Playback | aqua_direct_playback | This is NASA's AQUA's 150-Mbps direct broadcast service |
+| Aura Direct Broadcast | aura_direct_broadcast | This is NASA Aura's 15-Mbps direct broadcast service |
+| Terra Direct Broadcast | terra_direct_broadcast | This is NASA Terra's 8.3-Mbps direct broadcast service |
+| SNPP Direct Broadcast | snpp_direct_broadcast | This is NASA SNPP's 15-Mbps direct broadcast service |
+| JPSS-1 Direct Broadcast | jpss-1_direct_broadcast | This is NASA JPSS-1's 15-Mbps direct broadcast service |
> [!NOTE]
-> We recommend using the Aqua Direct Broadcast modem configuration when testing with Aqua.
+> We recommend using the Aqua Direct Broadcast modem configuration when testing with Aqua.
+>
+> Orbital does not have control over the downlink schedules for these public satellites. NASA conducts its own operations, which may interrupt downlink availability.
#### Specifying a named modem configuration using the API Enter the named modem string into the demodulationConfiguration parameter when using the API.
Leave the modulationConfiguration or demodulationConfiguration parameters blank
- [Register Spacecraft](register-spacecraft.md) - [Prepare the network](prepare-network.md)-- [Schedule a contact](schedule-contact.md)
+- [Schedule a contact](schedule-contact.md)
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal](https://aka.ms/orbital/portal).
> [!NOTE] > TLE stands for Two-Line Element.
+ >
> Spacecraft resources can be created in any Azure region with a Microsoft ground station and can schedule contacts on any ground station. Current eligible regions are West US 2, Sweden Central, and Southeast Asia. :::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
This document contains information about troubleshooting your solutions that use
* The EA subscription doesn't allow Marketplace purchases.
- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
## Unable to create Datadog - An Azure Native ISV Service resource
If the Datadog agent has been configured with an incorrect key, navigate to the
## Next steps -- Learn about [managing your instance](manage.md) of Datadog.
+- Learn about [managing your instance](manage.md) of Datadog.
partner-solutions Qumulo Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md
Title: Get started with Azure Native Qumulo Scalable File Service Preview
description: In this quickstart, learn how to create an instance of Azure Native Qumulo Scalable File Service. -+ Last updated 01/18/2023
partner-solutions Qumulo How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md
Title: Manage Azure Native Qumulo Scalable File Service Preview description: This article describes how to manage Azure Native Qumulo Scalable File Service in the Azure portal. --++ Last updated 01/18/2023
This article describes how to manage your instance of Azure Native Qumulo Scalab
## Configure and use the Qumulo file system
-For help with configuring and using your file system, see the [Qumulo documentation hub](https://docs.qumulo.com/cloud-guide/).
+For help with configuring and using your file system, see the [Qumulo documentation hub](https://docs.qumulo.com/azure-guide/).
## Delete the Qumulo file system
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| PgCron, lo, pglogical | No | Yes | | pgAudit | Preview | Yes | | **Security** | | |
-| Azure Active Directory Support(AAD) | Yes | Preview |
-| Customer managed encryption key(BYOK) | Yes | Preview |
+| Azure Active Directory Support(AAD) | Yes | Yes |
+| Customer managed encryption key(BYOK) | Yes | Yes |
| SCRAM Authentication (SHA-256) | No | Yes | | Secure Sockets Layer support (SSL) | Yes | Yes | | **Other features** | | |
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
When you're using data encryption by using a customer-managed key, here are reco
:::image type="content" source="media/concepts-data-encryption/key-vault-trusted-service.png" alt-text="Screenshot of an image of networking screen with trusted-service-with-AKV setting." lightbox="media/concepts-data-encryption/key-vault-trusted-service.png"::: > [!NOTE]
->Important to note, that after choosing **disable public access** option in Azure Key Vault networking and allowing only *trusted Microsoft* services you may see error similar to following : *You have enabled the network access control. Only allowed networks will have access to this key vault* while attempting to administer Azure Key Vault via portal through public access, since portal is not considered to be trusted service.
+>Note that after choosing the **disable public access** option in Azure Key Vault networking and allowing only *trusted Microsoft* services, you may see an error similar to the following while attempting to administer Azure Key Vault via the portal through public access: *You have enabled the network access control. Only allowed networks will have access to this key vault*. This error appears because the portal isn't considered a trusted service; it doesn't preclude the ability to provide a key during CMK setup or to fetch keys from Azure Key Vault during server operations.
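As a sketch, the corresponding network settings can be applied with Azure CLI; the vault and resource group names are placeholders, and the values assume the trusted-services bypass described above:

```azurecli
az keyvault update --name <key-vault-name> --resource-group <resource-group-name> \
    --default-action Deny --bypass AzureServices
```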
Here are recommendations for configuring a customer-managed key:
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
description: Describes the connectivity architecture of your Azure Database for
--++ Last updated 06/24/2022
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-dropped-server.md
description: This article describes how to restore a dropped server in Azure Dat
--++ Last updated 06/24/2022
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
Previously updated : 10/17/2022 Last updated : 01/25/2023 # Manage data sources in Microsoft Purview
In this article, you learn how to register new data sources, manage collections
## Register a new source
-Use the following steps to register a new source.
+>[!NOTE]
+> You'll need to be a Data Source Admin and one of the other Purview roles (for example, Data Reader or Data Share Contributor) to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details on roles and adding permissions.
++
+Use the following steps to register a new source:
1. Open [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/), navigate to the **Data Map**, **Sources**, and select **Register**.
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Previously updated : 10/13/2022 Last updated : 01/25/2023 # Create an Azure Cognitive Search service in the portal
Last updated 10/13/2022
If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials.
-The easiest way to create search service is using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md), [Azure CLI](/cli/azure/search), the [Management REST API](/rest/api/searchmanagement/), an [Azure Resource Manager service template](https://azure.microsoft.com/resources/templates/azure-search-create/), or a [Bicep file](search-get-started-bicep.md).
+The easiest way to create a search service is using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md#create-or-delete-a-service), [Azure CLI](search-manage-azure-cli.md#create-or-delete-a-service), the [Management REST API](search-manage-rest.md#create-or-update-a-service), an [Azure Resource Manager service template](search-get-started-arm.md), or a [Bicep file](search-get-started-bicep.md).
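For instance, a minimal Bicep sketch of the resource (the name, location, and API version are illustrative; see the linked Bicep quickstart for the authoritative template):

```bicep
resource searchService 'Microsoft.Search/searchServices@2020-08-01' = {
  name: 'my-search-service' // must be globally unique
  location: 'westus'
  sku: {
    name: 'basic' // tier can't be changed after creation
  }
  properties: {
    replicaCount: 1
    partitionCount: 1
  }
}
```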
[![Animated GIF](./media/search-create-service-portal/AnimatedGif-AzureSearch-small.gif)](./media/search-create-service-portal/AnimatedGif-AzureSearch.gif#lightbox)
The easiest way to create search service is using the [Azure portal](https://por
The following service properties are fixed for the lifetime of the service. Because they're fixed, consider the usage implications as you fill in each property: + Service name becomes part of the URL endpoint ([review tips for helpful service names](#name-the-service)).
-+ [Tier](search-sku-tier.md) (Basic, Standard, and so forth) determines the underlying physical hardware and billing. Some features are tier-constrained.
++ [Tier](search-sku-tier.md) (Free, Basic, Standard, and so forth) determines the underlying physical hardware and billing. Some features are tier-constrained. + [Service region](#choose-a-region) can determine the availability of certain scenarios. If you need high availability or [AI enrichment](cognitive-search-concept-intro.md), you'll need to create the resource in a region that provides the feature. ## Subscribe (free or paid)
-To try search for free, you have two options:
+To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service using the **Free** tier. Because the tier is fixed, it will never transition to become a billable tier.
-+ [Open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and use free credits to try out paid Azure services. After credits are used up, keep the account and continue to use free Azure services, such as Websites. Your credit card is never charged unless you explicitly change your settings and ask to be charged.
+Alternatively, you can use free credits to try out paid Azure services, which means you can create your search service at **Basic** or above to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
-+ Alternatively, [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
-
-Paid (or billable) search becomes effective when you choose a billable tier (Basic or above) when creating the resource.
+Paid (or billable) search occurs when you choose a billable tier (Basic or above) when creating the resource on a billable Azure subscription.
## Find the Azure Cognitive Search offering
Paid (or billable) search becomes effective when you choose a billable tier (Bas
1. Use the search bar to find "Azure Cognitive Search" or navigate to the resource through **Web** > **Azure Cognitive Search**. ## Choose a subscription
A resource group is a container that holds related resources for your Azure solu
If you aren't combining resources into a single group, or if existing resource groups are filled with resources used in unrelated solutions, create a new resource group just for your Azure Cognitive Search resource. Over time, you can track current and projected costs all-up or you can view charges for individual resources. The following screenshot shows the kind of cost information you can expect to see when you combine multiple resources into one group. > [!TIP] > Resource groups simplify cleanup because deleting a group deletes all of the services within it. For prototype projects utilizing multiple services, putting all of them in the same resource group makes cleanup easier after the project is over.
Two notable exceptions might lead to provisioning one or more search services in
+ [Outbound connections from Cognitive Search to Azure Storage](search-indexer-securing-resources.md). You might want storage in a different region if you're enabling a firewall.
-+ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Centra US, for each search service.
++ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service. Some features are subject to regional availability. If you require any of following features, choose a region that provides them: + [AI enrichment](cognitive-search-concept-intro.md) requires Cognitive Services to be in the same physical region as Azure Cognitive Search. There are just a few regions that *don't* provide both. The [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page indicates a common regional presence by showing two stacked check marks. An unavailable combination has a missing check mark. The time piece icon indicates future availability.
- :::image type="content" source="media/search-create-service-portal/region-availability.png" alt-text="Regional availability" border="true":::
+ :::image type="content" source="media/search-create-service-portal/region-availability.png" lightbox="media/search-create-service-portal/region-availability.png" alt-text="Screenshot of the regional availability page." border="true":::
+ Semantic search is [currently in preview in selected regions](https://azure.microsoft.com/global-infrastructure/services/?products=search), such as "Australia East" in the above screenshot.
Azure Cognitive Search is currently offered in [multiple pricing tiers](https://
Basic and Standard are the most common choices for production workloads, but initially many customers start with the Free service for evaluation purposes. Among the billable tiers, key differences are partition size and speed, and limits on the number of objects you can create. Remember, a pricing tier can't be changed once the service is created. If you need a higher or lower tier, you'll have to re-create the service.
Remember, a pricing tier can't be changed once the service is created. If you ne
After you've provided the necessary inputs, go ahead and create the service. Your service is deployed within minutes. You can monitor progress through Azure notifications. Consider pinning the service to your dashboard for easy access in the future. +
+## Configure authentication
+
+Unless you're using the portal, programmatic access to your new service requires that you provide the URL endpoint and an authenticated connection. You can use either or both of these options:
-## Get a key and URL endpoint
++ [Connect using key-based authentication](search-security-api-keys.md)++ [Connect using Azure roles](search-security-rbac.md)
-Unless you're using the portal, programmatic access to your new service requires that you provide the URL endpoint and an authenticated connection. [Azure role-based access control with Azure Active Directory](search-security-rbac.md) is in public preview. [Key-based authentication](search-security-api-keys.md) is the default. It's also the only generally available authentication methodology for inbound connections to a search service.
+1. When setting up a programmatic connection, you'll need the search service endpoint. On the **Overview** page, locate and copy the URL endpoint on the right side of the page.
-1. On the **Overview** page, locate and copy the URL endpoint on the right side of the page.
+ :::image type="content" source="media/search-create-service-portal/get-endpoint.png" lightbox="media/search-create-service-portal/get-endpoint.png" alt-text="Screenshot of the service overview page with URL endpoint." border="true":::
-1. On the **Keys** page, copy either one of the admin keys (they're equivalent). Admin API keys are required for creating, updating, and deleting objects on your service. In contrast, query keys provide read-access to index content.
+1. To set authentication options, use the **Keys** page. Most quickstarts and tutorials use API keys for simplicity, but if you're setting up a service for production workloads, consider using Azure roles. You can copy keys from this page.
- :::image type="content" source="media/search-create-service-portal/get-url-key.png" alt-text="Service overview page with URL endpoint" border="false":::
+ :::image type="content" source="media/search-create-service-portal/set-authentication-options.png" lightbox="media/search-create-service-portal/set-authentication-options.png" alt-text="Screenshot of the keys page with authentication options." border="true":::
An endpoint and key aren't needed for portal-based tasks. The portal is already linked to your Azure Cognitive Search resource with admin rights. For a portal walkthrough, start with [Quickstart: Create an Azure Cognitive Search index in the portal](search-get-started-portal.md).
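For a key-based connection, each request carries the endpoint plus an `api-key` header. A sketch that lists the indexes on a service, assuming a generally available REST API version:

```http
GET https://<service-name>.search.windows.net/indexes?api-version=2020-06-30
  Content-Type: application/json
  api-key: <your-api-key>
```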
After a search service is provisioned, you can [scale it to meet your needs](sea
Adding resources increases your monthly bill. The [pricing calculator](https://azure.microsoft.com/pricing/calculator/) can help you understand the billing ramifications of adding resources. Remember that you can adjust resources based on load. For example, you might increase resources to create a full initial index, and then reduce resources later to a level more appropriate for incremental indexing.
-> [!Important]
+> [!IMPORTANT]
> A service must have [2 replicas for read-only SLA and 3 replicas for read/write SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/). 1. Go to your search service page in the Azure portal. 1. In the left-navigation pane, select **Settings** > **Scale**. 1. Use the slidebar to add resources of either type. ## When to add a second service
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
ms.devlang: azurecli Previously updated : 01/05/2023 Last updated : 01/25/2023 # Manage your Azure Cognitive Search service with the Azure CLI
Results should look similar to the following output:
} ```
+[**az search service delete**](/cli/azure/search/service#az-search-service-delete-required-parameters) removes the service and its data.
+
+```azurecli-interactive
+az search service delete --name <service-name> \
+ --resource-group <search-service-resource-group-name>
+```
+ ### Create a service with IP rules Depending on your security requirements, you may want to create a search service with an [IP firewall configured](service-configure-firewall.md). To do so, pass the Public IP (v4) addresses or CIDR ranges to the `ip-rules` argument as shown below. Rules should be separated by a comma (`,`) or semicolon (`;`).
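A sketch of such a call, with placeholder IP values:

```azurecli-interactive
az search service create --name <service-name> \
    --resource-group <search-service-resource-group-name> \
    --sku Basic \
    --ip-rules "55.5.63.73;52.228.215.197;101.37.221.205"
```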
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
ms.devlang: powershell Previously updated : 06/08/2022 Last updated : 01/25/2023
ResourceId : /subscriptions/<alphanumeric-subscription-ID>/resourceGroups
```azurepowershell-interactive New-AzSearchService -ResourceGroupName <resource-group-name> -Name <search-service-name> -Sku "Standard" -Location "West US" -PartitionCount 3 -ReplicaCount 3 -HostingMode Default ``` + Results should look similar to the following output.
-```
+```azurepowershell
ResourceGroupName : demo-westus Name : my-demo-searchapp Id : /subscriptions/<alphanumeric-subscription-ID>/demo-westus/providers/Microsoft.Search/searchServices/my-demo-searchapp
HostingMode : Default
Tags ```
+[**Remove-AzSearchService**](/powershell/module/az.search/remove-azsearchservice) is used to delete a service and its data.
+
+```azurepowershell-interactive
+Remove-AzSearchService -ResourceGroupName <resource-group-name> -Name <search-service-name>
+```
+
+You'll be asked to confirm the action.
+
+```azurepowershell
+Confirm
+Are you sure you want to remove Search Service 'pstestazuresearch01'?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
+```
++ ### Create a service with IP rules Depending on your security requirements, you may want to create a search service with an [IP firewall configured](service-configure-firewall.md). To do so, first define the IP Rules and then pass them to the `IPRuleList` parameter as shown below.
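A sketch of defining the rule list and passing it at creation, with placeholder IP values (the `pscustomobject` shape is an assumption based on the module's IP rule type):

```azurepowershell-interactive
$ipRules = @([pscustomobject]@{Value="55.5.63.73"},
             [pscustomobject]@{Value="52.228.215.197"})

New-AzSearchService -ResourceGroupName <resource-group-name> `
    -Name <search-service-name> `
    -Sku "Basic" `
    -Location "West US" `
    -IPRuleList $ipRules
```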
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
Source code for this tutorial is in the [custom-analyzers](https://github.com/Az
To complete this tutorial, you'll need an Azure Cognitive Search service, which you can [create in the portal](search-create-service-portal.md). You can use the Free tier to complete this walkthrough.
-For the next step, you'll need to know the name of your search service and its API Key. If you're unsure how to find those items, check out this [quickstart](search-create-service-portal.md#get-a-key-and-url-endpoint).
-
+For the next step, you'll need to know the name of your search service and its API Key. If you're unsure how to find those items, check out this [REST quickstart](search-get-started-rest.md).
## 2 - Set up Postman
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
Create or modify a query and save it as your own query or share it with users wh
**To clone and modify an existing query**:
-1. Select the hunting query in the table you want to modify.
-
+1. From the table, select the hunting query you want to modify.
1. Select the ellipsis (...) in the line of the query you want to modify, and select **Clone query**. :::image type="content" source="./media/hunting/clone-query.png" alt-text="Clone query" lightbox="./media/hunting/clone-query.png"::: 1. Modify the query and select **Create**.
+**To modify an existing custom query**:
+
+1. From the table, select the hunting query that you wish to modify. Note that only queries from a custom content source can be edited. Content from other sources must be edited at its source.
+
+1. Select the ellipsis (...) in the line of the query you want to modify, and select **Edit query**.
+
+1. Modify the **Custom query** field with the updated query. You can also modify the entity mapping and techniques as explained in the "**To create a new query**" section of this documentation.
+ ## Sample query A typical query starts with a table or parser name followed by a series of operators separated by a pipe character ("\|").
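As an illustrative example only (the table and column names assume Windows security events are being ingested), a hunting query of that shape might look like:

```kusto
SecurityEvent
| where EventID == 4688                  // process creation events
| summarize processCount = count() by Account, Process
| order by processCount desc
```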
For more information, see:
- [Use bookmarks to save interesting information while hunting](bookmarks.md) Learn from an example of using custom analytics rules when [monitoring Zoom](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) with a [custom connector](create-custom-connector.md).+
sentinel Normalization Schema Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-audit.md
The following list mentions fields that have specific guidelines for Audit Event
| Field | Class | Type | Description | ||-||--|
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation audited by the event using a normalized value. Use [EventSubType](#eventsubtype) to provide further details, which the normalized value does not convey, and [Operation](#operation). to store the operation as reported by the reporting device.<br><br> For Audit Event records, the allowed values are:<br> - `Set`<br>- `Read`<br>- `Create`<br>- `Delete`<br>- `Execute`<br>- `Install`<br>- `Clear`<br>- `Enable`<br>- `Disable`<br>- `Other`. <br><br>Audit events represent a large variety of operations, and the `Other` value enables mapping operations that have no corresponding `EventType`. However, the use of `Other` limits the usability of the event and should be avoided if possible. |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation audited by the event using a normalized value. Use [EventSubType](#eventsubtype) to provide further details, which the normalized value does not convey, and [Operation](#operation). to store the operation as reported by the reporting device.<br><br> For Audit Event records, the allowed values are:<br> - `Set`<br>- `Read`<br>- `Create`<br>- `Delete`<br>- `Execute`<br>- `Install`<br>- `Clear`<br>- `Enable`<br>- `Disable`<br>- `Other` <br><br>Audit events represent a large variety of operations, and the `Other` value enables mapping operations that have no corresponding `EventType`. However, the use of `Other` limits the usability of the event and should be avoided if possible. |
| <a name="eventsubtype"></a> **EventSubType** | Optional | String | Provides further details, which the normalized value in [EventType](#eventtype) does not convey. | | **EventSchema** | Mandatory | String | The name of the schema documented here is `AuditEvent`. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1`. |
Fields that appear in the table are common to all ASIM schemas. Any of guideline
| <a name="oldvalue"></a> **OldValue** | Optional | String | The old value of [Object](#object) prior to the operation, if applicable. | | <a name="newvalue"></a>**NewValue** | Optional | String | The new value of [Object](#object) after the operation was performed, if applicable. | | <a name="value"></a>**Value** | Alias | | Alias to [NewValue](#newvalue) |
-| **ValueType** | Optional | Enumerated | The type of the old and new values. Allowed values are<br>- Other. |
+| **ValueType** | Optional | Enumerated | The type of the old and new values. Allowed values are:<br>- Other |
### Actor fields
sentinel Normalization Schema Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-web.md
The following list mentions fields that have specific guidelines for Web Session
| Field | Class | Type | Description | ||-||--|
-| <a name='eventtype'></a>**EventType** | Mandatory | Enumerated | Describes the operation reported by the record. Allowed values are:<br> - `HTTPsession`: Denotes a network session used for HTTP or HTTPS, typically reported by an intermediary device, such as a proxy or a Web security gateway.<br> - `WebServerSession`: Denotes an HTTP request reported by a web server. Such an event typically has less network related information. The URL reported should not include a schema and a server name, but only the path and parameters part of the URL. <br> - `Api`: Denotes an HTTP request reported associated with an API call, typically reported by an application server. Such an event typically has less network related information. When reported by the application server, the URL reported should not include a schema and a server name, but only the path and parameters part of the URL. |
+| <a name='eventtype'></a>**EventType** | Mandatory | Enumerated | Describes the operation reported by the record. Allowed values are:<br> - `HTTPsession`: Denotes a network session used for HTTP or HTTPS, typically reported by an intermediary device, such as a proxy or a Web security gateway.<br> - `WebServerSession`: Denotes an HTTP request reported by a web server. Such an event typically has less network related information. The URL reported should not include a schema and a server name, but only the path and parameters part of the URL. <br> - `ApiRequest`: Denotes an HTTP request reported associated with an API call, typically reported by an application server. Such an event typically has less network related information. When reported by the application server, the URL reported should not include a schema and a server name, but only the path and parameters part of the URL. |
| **EventResult** | Mandatory | Enumerated | Describes the event result, normalized to one of the following values: <br> - `Success` <br> - `Partial` <br> - `Failure` <br> - `NA` (not applicable) <br><br>For an HTTP session, `Success` is defined as a status code lower than `400`, and `Failure` is defined as a status code higher than `400`. For a list of HTTP status codes, refer to [W3 Org](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html).<br><br>The source may provide only a value for the [EventResultDetails](#eventresultdetails) field, which must be analyzed to get the **EventResult** value. | | <a name="eventresultdetails"></a>**EventResultDetails** | Recommended | String | The HTTP status code.<br><br>**Note**: The value may be provided in the source record using different terms, which should be normalized to these values. The original value should be stored in the **EventOriginalResultDetails** field.| | **EventSchema** | Mandatory | String | The name of the schema documented here is `WebSession`. |
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
Microsoft Sentinel's automation and orchestration solution provides a highly ext
- HTTP requests - Microsoft Teams - Slack-- Windows Defender ATP-- Defender for Cloud Apps
+- Azure Active Directory
+- Microsoft Defender for Endpoint
+- Microsoft Defender for Cloud Apps
For example, if you use the ServiceNow ticketing system, use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular alert or incident is generated.
sentinel Sentinel Soar Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-soar-content.md
You can find SOAR integrations and their components in the following places:
> - Logic Apps HTTP calls
+## AbuseIPDB
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **AbuseIPDB**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Enrich incident by IP info, <br>Report IP to Abuse IP DB, <br>Deny list to Threat intelligence |
+|
+
## Atlassian | Product | Integration components | Supported by | Scenarios | | | | | | | **Jira** | [Managed Logic Apps connector](/connectors/jira/)<br><br>Playbooks | Microsoft<br><br>Community | Sync incidents | |
+
+## AWS IAM
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **AWS IAM**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Add User Tags, <br>Delete Access Keys, <br>Enrich incidents |
+|
+
+## Checkphish by Bolster
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **Checkphish by Bolster**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Get URL scan results |
+|
+
## Check Point | Product | Integration components | Supported by | Scenarios |
You can find SOAR integrations and their components in the following places:
| | | | | | **Falcon endpoint protection**<br>(Available as solution) | Playbooks | Microsoft | Endpoints enrichment,<br>isolate endpoints | |
+
+## Elastic Search
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **Elastic search**<br>(Available as solution) | Playbooks | Microsoft | Enrich incident |
+|
## F5
You can find SOAR integrations and their components in the following places:
| Product | Integration components | Supported by | Scenarios | | | | | | | **FortiGate**<br>(Available as solution) | Custom Logic Apps connector<br><br>Azure Function<br><br>Playbooks | Microsoft | Block IPs and URLs |
-|
+| **Fortiweb Cloud**<br>(Available as solution) | Custom Logic Apps connector<br><br>Azure Function<br><br>Playbooks | Microsoft | Block IPs and URLs, <br>Incident enrichment |
+|
## Freshdesk
You can find SOAR integrations and their components in the following places:
| **Freshdesk** | [Managed Logic Apps connector](/connectors/freshdesk/) | | Sync incidents | |
+## GCP IAM
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **GCP IAM**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Disable service account, <br>Disable service account key, <br>Enrich Service account info |
+|
## Have I Been Pwned
You can find SOAR integrations and their components in the following places:
| **Resilient** | Custom Logic Apps connector<br><br>Playbooks | Community | Sync incidents | |
+## InsightVM Cloud API
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **InsightVM Cloud API** | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Enrich incident with asset info, <br>Enrich vulnerability info, <br>Run VM scan |
+|
+
## Microsoft | Product | Integration components | Supported by | Scenarios |
You can find SOAR integrations and their components in the following places:
| **Microsoft Defender for IoT** | Playbooks | Microsoft | Orchestration and notification | | **Microsoft Teams** | [Managed Logic Apps connector](/connectors/teams/)<br><br>Playbooks | Microsoft<br><br>Community | Notifications, <br>Collaboration, <br>create human-involved responses | |
+
+## Minemeld
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **Minemeld**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Create indicator, <br>Enrich incident |
+|
+
+## Neustar IP GEO Point
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **Neustar IP GEO Point**<br>(Available as solution) | Playbooks | Microsoft | Get IP Geo Info |
+|
## Okta
You can find SOAR integrations and their components in the following places:
| | | | | | **Okta** | Managed Logic Apps connector<br><br>Playbooks | Community | Users enrichment, <br>Users remediation | |
+
+## OpenCTI
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **OpenCTI**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Create Indicator, <br>Enrich incident, <br>Get Indicator stream, <br>Import to Sentinel |
+|
## Palo Alto
You can find SOAR integrations and their components in the following places:
| **Proofpoint TAP**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Accounts enrichment | |
+## Qualys VM
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **Qualys VM**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Get asset details, <br>Get asset by CVEID, <br>Get asset by Open port, <br>Launch VM scan |
+|
+ ## Recorded Future | Product | Integration components | Supported by | Scenarios |
You can find SOAR integrations and their components in the following places:
| | | | | | **Slack** | [Managed Logic Apps connector](/connectors/slack/)<br><br>Playbooks | Microsoft<br><br>Community | Notification, <br>Collaboration | |
+
+## TheHive
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **TheHive**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Create alert, <br>Create Case, <br>Lock User |
+|
+
+## ThreatX WAF
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **ThreatX WAF**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Block IP / URL, <br>Incident enrichment |
+|
+
+## URLhaus
+
+| Product | Integration components | Supported by | Scenarios |
+| | | | |
+| **URLhaus**<br>(Available as solution) | Custom Logic Apps connector<br><br>Playbooks | Microsoft | Check host and enrich incident, <br>Check hash and enrich incident, <br>Check URL and enrich incident |
+|
## Virus Total
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Title: Discover and deploy Microsoft Sentinel out-of-the-box solutions from Content hub
+ Title: Discover and deploy Microsoft Sentinel out-of-the-box content from Content hub
description: Learn how to find and deploy Sentinel packaged solutions containing data connectors, analytics rules, hunting queries, workbooks, and other content. Previously updated : 09/30/2022 Last updated : 01/09/2022
-# Discover and deploy Microsoft Sentinel out-of-the-box solutions from Content hub (Public preview)
+# Discover and manage Microsoft Sentinel out-of-the-box content (Public preview)
-The Microsoft Sentinel Content hub provides access to out-of-the-box (built-in) solutions, which are packed with Sentinel content for end-to-end products by domain or industry.
+The Microsoft Sentinel Content hub is your centralized location to discover and manage out-of-the-box (built-in) content. There you'll find packaged solutions for end-to-end products by domain or industry. You'll also have access to the vast number of standalone contributions hosted in our GitHub repository and feature blades.
-- Discover solutions in the Content hub based on status, the content type, support, provider and category.
+- Discover solutions and standalone content with a consistent set of filtering capabilities based on status, content type, support, provider and category.
-- Install solutions in your workspace all at once or individually when you find ones that fit your organization's needs.
+- Install content in your workspace all at once or individually.
-- View solutions in list view and quickly see which ones have updates. Update them all at once.
+- View content in list view and quickly see which solutions have updates. Update solutions all at once while standalone content updates automatically.
- Manage a solution to install its content types and get the latest changes.
+- Configure standalone content to create new active items based on the most up-to-date template.
+ If you're a partner who wants to create your own solution, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solution authoring and publishing. > [!IMPORTANT] >
-> Microsoft Sentinel solutions and the Microsoft Sentinel Content Hub are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Microsoft Sentinel solutions and standalone content in the Microsoft Sentinel Content Hub are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites
-In order to install, update or delete solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+In order to install, update and delete standalone content or solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
This is in addition to Sentinel specific roles. For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
-## Discover solutions
+## Discover content
-The content hub offers the best way to find new solutions or manage the ones you already have installed.
+The content hub offers the best way to find new content or manage the solutions you already have installed.
1. From the Microsoft Sentinel navigation menu, under **Content management**, select **Content hub (Preview)**.
-1. The **Content hub** page displays a searchable grid or list of solutions.
+1. The **Content hub** page displays a searchable grid or list of solutions and standalone content.
- Filter the list displayed, either by selecting specific values from the filters, or entering any part of a product name or description in the **Search** field.
+ Filter the list displayed, either by selecting specific values from the filters, or entering any part of a content name or description in the **Search** field.
For more information, see [Categories for Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions.md#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions).
The content hub offers the best way to find new solutions or manage the ones you
> If a solution that you've deployed has updates since you deployed it, the list view will have a blue up arrow in the status column, and will be included in the **Updates** blue up arrow count at the top of the page. >
-Each solution shows categories that apply to it, and the types of content included.
+Each content item shows categories that apply to it, and solutions show the types of content included.
-For example, in the following image, the **Cisco Umbrella** solution shows a category of **Security - Cloud Security**, and indicates it includes a data connector, analytics rules, hunting queries, playbooks, and more.
+For example, in the following image, the **Cisco Umbrella** solution lists one of its categories as **Security - Cloud Security**, and indicates it includes a data connector, analytics rules, hunting queries, playbooks, and more.
:::image type="content" source="./media/sentinel-solutions-deploy/solutions-list.png" alt-text="Screenshot of the Microsoft Sentinel content hub.":::
-## Install or update a solution
+## Install or update content
-Solutions can be installed and updated individually or in bulk. Here's the process for an individual solution.
+Standalone content and solutions can be installed individually or all together in bulk. For more information on bulk operations, see [Bulk install and update content](#bulk-install-and-update-content) in the next section. Here's an example showing the install of an individual solution.
-1. In the content hub, select a solution to view more information on the right. Then select **Install**, or **Update**. For example:
+1. In the content hub, select a solution to view more information on the right. Then select **Install**, or **Update**.
1. On the solution details page, select **Create** or **Update** to start the solution wizard. On the **Basics** tab, enter the subscription, resource group, and workspace to deploy the solution. For example:
Solutions can be installed and updated individually or in bulk. Here's the proce
1. Each content type within the solution may require additional steps to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution).
-## Bulk install and update solutions
+## Bulk install and update content
-Content hub supports a list view in addition to the default card view. Multiple solutions can be selected with this view to install and update them all at once.
+Content hub supports a list view in addition to the default card view. Multiple solutions and standalone content can be selected with this view to install and update them all at once. Standalone content is kept up-to-date automatically. Any active or
+custom content created based on solutions or standalone content installed from content hub remains untouched.
1. To install and/or update items in bulk, change to the list view.
- :::image type="content" source="media/sentinel-solutions-deploy/content-hub-list-view.png" alt-text="Screenshot of the list view icon button highlighted." lightbox="media/sentinel-solutions-deploy/content-hub-list-view.png":::
-
-1. The list view is paginated, so choose a filter to ensure the solutions you want to bulk install and modify are in view. Select their checkboxes and click the **Install/Update** button.
-
-1. The content hub interface will indicate *in progress* for installs and updates. Azure notifications will also indicate the action taken.
+1. The list view is paginated, so choose a filter to ensure the content you want to bulk install is in view. Select the checkboxes and click the **Install/Update** button.
:::image type="content" source="media/sentinel-solutions-deploy/bulk-install-update.png" alt-text="Screenshot of solutions list view with multiple solutions selected and in progress for installation." lightbox="media/sentinel-solutions-deploy/bulk-install-update.png":::
+1. The content hub interface will indicate *in progress* for installs and updates. Azure notifications will also indicate the action taken. If a solution or standalone content that was already installed or updated was selected, no action will be taken on that item and it won't interfere with the update and install of the other items.
+ 1. Check each installed solution's **Manage** view. Content types within the solution may require additional steps to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution). ## Enable content items in a solution
Centrally manage content items for installed solutions from the content hub.
1. Select a content item to get started. ### Management options for each content type
-Below are some tips on how to interact with various content types when managing the solution.
+Below are some tips on how to interact with various content types when managing a solution.
#### Data connector 1. Select **Open connector page**.
When a solution is installed, any parsers included are added as workspace functi
:::image type="content" source="media/sentinel-solutions-deploy/manage-solution-playbook.png" alt-text="Screenshot of playbook type content type in a solution." lightbox="media/sentinel-solutions-deploy/manage-solution-playbook.png":::
-## Find the support model for your solution
+## Find the support model for your content
Each solution explains its support model on the solution's details pane, in the **Support** box, where either **Microsoft** or a partner's name is listed. For example:
When contacting support, you may need other details about your solution, such as
## Next steps
-In this document, you learned about Microsoft Sentinel solutions and how to find and deploy built-in content.
+In this document, you learned how to find and deploy built-in solutions and standalone content for Microsoft Sentinel.
- Learn more about [Microsoft Sentinel solutions](sentinel-solutions.md). - See the full Microsoft Sentinel solutions catalog in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=solution-templates&page=1&search=sentinel).
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
The Microsoft Sentinel Content Hub provides in-product discoverability, single-s
- In the **Content hub**, filter by [categories](#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions) and other parameters, or use the powerful text search, to find the content that works best for your organization's needs. The **Content hub** also indicates the [support model](#support-models-for-microsoft-sentinel-out-of-the-box-content-and-solutions) applied to each piece of content, as some content is maintained by Microsoft and others are maintained by partners or the community.
- Manage [updates for out-of-the-box content](sentinel-solutions-deploy.md#install-or-update-a-solution) via the Microsoft Sentinel **Content hub**, and for custom content via the **Repositories** page.
+ Manage [updates for out-of-the-box content](sentinel-solutions-deploy.md#install-or-update-content) via the Microsoft Sentinel **Content hub**, and for custom content via the **Repositories** page.
- Customize out-of-the-box content for your own needs, or create custom content, including analytics rules, hunting queries, notebooks, workbooks, and more. Manage your custom content directly in your Microsoft Sentinel workspace, via the [Microsoft Sentinel API](/rest/api/securityinsights/), or in your own source control repository, via the Microsoft Sentinel [Repositories](ci-cd.md) page.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn about Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908)
-### PickupSTIX
--- [Fill out this web form](https://www.celerium.com/pickupstix) to get the API Root, Collection IDs, Username, and Password for the free TAXII 2.1 Feeds on the PickupSTIX TAXII Server.- ### Pulsedive - [Learn about Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953)
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
Previously updated : 12/08/2022 Last updated : 01/17/2023 # Service Connector internals
Service Connector runs multiple tasks while creating or updating service connect
If a step fails during this process, Service Connector rolls back all previous steps to keep the initial settings in the source and target instances.
+## Resource provider
++ ## Connection configurations Connection configurations are set in the source service.
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
Previously updated : 07/05/2022 Last updated : 01/24/2023 # Tutorial: Manually configure Azure Front Door for Azure Static Web Apps
-Learn to add [Azure Front Door](../frontdoor/front-door-overview.md) as the CDN for your static web app. Azure Front Door is a scalable and secure entry point for fast delivery of your web applications.
+Add [Azure Front Door](../frontdoor/front-door-overview.md) as the CDN for your static web app. Azure Front Door is a scalable and secure entry point for fast delivery of your web applications.
-> [!NOTE]
-> Consider using [enterprise-grade edge](enterprise-edge.md) for faster page loads, enhanced security, and optimized reliability for global applications.
-
-In this tutorial, you learn how to:
+In this tutorial, learn how to create an Azure Front Door Standard/Premium instance and associate Azure Front Door with your Azure Static Web Apps site.
-> [!div class="checklist"]
->
-> - Create an Azure Front Door Standard/Premium instance
-> - Associate Azure Front Door with your Azure Static Web Apps site
-
-> [!NOTE]
-> This tutorial requires the Azure Static Web Apps Standard and Azure Front Door Standard / Premium plans.
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- An Azure Static Web Apps site. [Build your first static web app](get-started-portal.md)
+- Azure Static Web Apps Standard and Azure Front Door Standard / Premium plans. For more information, see [Static Web Apps pricing](https://azure.microsoft.com/pricing/details/app-service/static/)
+- Consider using [enterprise-grade edge](enterprise-edge.md) for faster page loads, enhanced security, and optimized reliability for global applications.
+<!--
## Copy web app URL
-1. Go to the Azure portal.
-
-1. Open the static web app that you want to apply Azure Front Door.
-
-1. Go to the *Overview* section.
-
-1. Copy the *URL* to your clipboard for later use.
-
-## Add Azure Front Door
-
-When creating an Azure Front Door profile, you must select an origin from the same subscription as the selected the Front Door.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the Azure home screen.
+2. Open the static web app that you want to apply Azure Front Door.
-1. Select **Create a resource**.
+3. Go to the *Overview* section.
-1. Search for **Front Door**.
+4. Copy the *URL* to your clipboard for later use.
-1. Select **Front Door and CDN profiles**.
+ :::image type="content" source="media/front-door-manual/copy-url-static-web-app.png" alt-text="Screenshot of Static Web App Overview page.":::
+-->
-1. Select **Create**.
+## Create an Azure Front Door
-1. Select the **Azure Front Door** option.
-
-1. Select the **Quick create** option.
-
-1. Select **Continue to create a front door**.
-
-1. In the *Basics* tab, enter the following values:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door and CDN profiles*, and then select **Create** > **Front Door and CDN profiles**.
+3. On the Compare offerings page, select **Quick create**, and then select **Continue to create a Front Door**.
+4. On the **Create a Front Door profile** page, enter or select the following settings.
| Setting | Value | |||
When creating an Azure Front Door profile, you must select an origin from the sa
| Compression | Select **Enable compression** | | WAF policy | Select **Create new** or select an existing Web Application Firewall policy from the dropdown if you want to enable this feature. |
-1. Select **Review + create**.
-
-1. Select **Create**.
+ > [!NOTE]
+ > When you create an Azure Front Door profile, you must select an origin from the same subscription the Front Door is created in.
- The creation process may take a few minutes to complete.
-
-1. Select **Go to resource**.
+5. Select **Review + create**, and then select **Create**. The creation process may take a few minutes to complete.
+6. When deployment completes, select **Go to resource**.
+7. [Add a condition](#add-a-condition).
## Disable cache for auth workflow
Add the following settings to disable Front Door's caching policies from trying
### Add a condition
-1. Under *Settings*, select **Rule set**.
-
-1. Select **Add**.
+1. From your Front Door, under *Settings*, select **Rule set**.
-1. In the *Rule set name* textbox, enter **Security**.
+2. Select **Add**.
-1. In the *Rule name* textbox, enter **NoCacheAuthRequests**.
+3. In the *Rule set name* textbox, enter **Security**.
-1. Select **Add a condition**.
+4. In the *Rule name* textbox, enter **NoCacheAuthRequests**.
-1. Select **Request path**.
+5. Select **Add a condition**.
-1. Select **Begins With** in the *Operator* drop-down.
+6. Select **Request path**.
-1. Select the **Edit** link above the *Value* textbox.
+7. Select the *Operator* drop-down, and then **Begins With**.
-1. Enter **/.auth** in the textbox.
+8. Select the **Edit** link above the *Value* textbox.
-1. Select **Update**.
+9. Enter `/.auth` in the textbox, and then select **Update**.
-1. Select the **No transform** option from the *Case transform* dropdown.
+10. Leave the *String transform* dropdown with no option selected.
### Add an action 1. Select the **Add an action** dropdown.
-1. Select **Route configuration override**.
+2. Select **Route configuration override**.
-1. Select **Disabled** in the *Caching* dropdown.
+3. Select **Disabled** in the *Caching* dropdown.
-2. Select **Save**.
+4. Select **Save**.
### Associate rule to an endpoint
-Now that the rule is created, you apply the rule to a Front Door endpoint.
+Now that the rule is created, apply the rule to a Front Door endpoint.
-1. Select the **Unassociated** link.
+1. From your Front Door, select **Rule set**, and then the **Unassociated** link.
-1. Select the endpoint name to which you want to apply the caching rule.
+ :::image type="content" source="media/front-door-manual/rule-set-select-unassociated.png" alt-text="Screenshot showing selections for Rule set and Unassociated links.":::
-2. Select **Next**.
+2. Select the endpoint name to which you want to apply the caching rule, and then select **Next**.
3. Select **Associate**.
+ :::image type="content" source="media/front-door-manual/associate-route.png" alt-text="Screenshot showing highlighted button, Associate.":::
+ ## Copy Front Door ID Use the following steps to copy the Front Door instance's unique identifier.
-1. Select the **Overview** link on the left-hand navigation.
+1. From your Front Door, select the **Overview** link on the left-hand navigation.
+
+1. Copy the value labeled **Front Door ID** and paste it into a file for later use.
-1. From the *Overview* window, copy the value labeled **Front Door ID** and paste it into a file for later use.
+ :::image type="content" source="media/front-door-manual/copy-front-door-id.png" alt-text="Screenshot showing highlighted Overview item and highlighted Front Door ID number.":::
## Update static web app configuration
-To complete the integration with Front Door, you need to update the application configuration file to:
+To complete the integration with Front Door, you need to update the application configuration file to do the following:
- Restrict traffic to your site only through Front Door - Restrict traffic to your site only from your Front Door instance
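A sketch of the relevant `staticwebapp.config.json` fragment; the forwarded host below is a placeholder, and the `X-Azure-FDID` value is the Front Door ID copied earlier:

```json
{
  "networking": {
    "allowedIpRanges": ["AzureFrontDoor.Backend"]
  },
  "forwardingGateway": {
    "requiredHeaders": {
      "X-Azure-FDID": "<YOUR-FRONT-DOOR-ID>"
    },
    "allowedForwardedHosts": ["my-app.azurefd.net"]
  }
}
```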
With this configuration, your site is no longer available via the generated `*.a
## Considerations -- **Custom domains**: Now that Front Door is managing your site, you no long use the Azure Static Web Apps custom domain feature. Azure Front Door has a separate process for adding a custom domain. Refer to [Add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). When you add a custom domain to Front Door, you'll need to update your static web app configuration file to include it in the `allowedForwardedHosts` list.
+- **Custom domains**: Now that Front Door is managing your site, you no longer use the Azure Static Web Apps custom domain feature. Azure Front Door has a separate process for adding a custom domain. Refer to [Add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). When you add a custom domain to Front Door, you'll need to update your static web app configuration file to include it in the `allowedForwardedHosts` list.
- **Traffic statistics**: By default, Azure Front Door configures [health probes](../frontdoor/front-door-health-probes.md) that may affect your traffic statistics. You may want to edit the default values for the [health probes](../frontdoor/front-door-health-probes.md).
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
Previously updated : 10/13/2021 Last updated : 1/24/2023
The following quotas exist for Azure Static Web Apps.
| Included bandwidth | 100 GB per month, per subscription | 100 GB per month, per subscription |
| Overage bandwidth | Unavailable | $0.20 per GB |
| Apps per Azure subscription | 10 | Unlimited |
-| App size | 250 MB | 500 MB |
-| Plan size | 500 MB max app size for a single deployment, and 0.50 GB max for all staging and production environments | 500 MB max app size for a single deployment, and 2.00 GB max combined across all staging and production environments |
+| Storage | • 500 MB max for all staging and production environments<br><br>• 250 MB max per app | • 2 GB max for all staging and production environments<br><br>• 500 MB max per app |
| Pre-production environments | 3 | 10 |
| Custom domains | 2 per app | 5 per app |
| Allowed IP ranges | Unavailable | 25 |
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 01/05/2023 Last updated : 01/25/2023
Filters include:
| Filter name | Filter type | Notes | Is Required |
|-|-|-|-|
| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`, set tier isn't supported. | Yes |
-| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
+| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`.<br /><br />To match the blob name exactly, include the trailing forward slash ('/'), *e.g.*, `sample-container/blob1/`. To match the name pattern, omit the trailing forward slash, *e.g.*, `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag conditions. For example, if you want to match all blobs with `Project = Contoso` under `https://myaccount.blob.core.windows.net/` for a rule, the blobIndexMatch is `{"name": "Project","op": "==","value": "Contoso"}`. | If you don't define blobIndexMatch, the rule applies to all blobs within the storage account. | No |

To learn more about the blob index feature together with known issues and limitations, see [Manage and find data on Azure Blob Storage with blob index](storage-manage-find-blobs.md).
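To make the prefix behavior concrete, here's a minimal policy sketch (the rule name and prefix are illustrative) that deletes matching block blobs 90 days after last modification; note the trailing slash scoping the rule to blobs under `sample-container/blob1/`:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-blob1",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "sample-container/blob1/" ]
        }
      }
    }
  ]
}
```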
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 08/09/2022 Last updated : 01/25/2023
Object replication asynchronously copies block blobs in a container according to
Object replication requires that blob versioning is enabled on both the source and destination accounts. When a replicated blob in the source account is modified, a new version of the blob is created in the source account that reflects the previous state of the blob, before modification. The current version in the source account reflects the most recent updates. Both the current version and any previous versions are replicated to the destination account. For more information about how write operations affect blob versions, see [Versioning on write operations](versioning-overview.md#versioning-on-write-operations).
+If your storage account has object replication policies in effect, you cannot disable blob versioning for that account. You must delete any object replication policies on the account before disabling blob versioning.
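As a sketch of that cleanup with the Azure CLI (the account, resource group, and policy ID are placeholders), you could list the policies, delete them, and then disable versioning:

```azurecli
# List object replication policies on the account
az storage account or-policy list \
    --account-name <storage-account> \
    --resource-group <resource-group>

# Delete a policy by its ID, then disable versioning
az storage account or-policy delete \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy-id <policy-id>

az storage account blob-service-properties update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --enable-versioning false
```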
+ ### Deleting a blob in the source account When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how to delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Previously updated : 01/23/2023 Last updated : 01/25/2023 -+ # Point-in-time restore for block blobs
Point-in-time restore for block blobs has the following limitations and known is
- Snapshots are not created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - Point-in-time restore is not supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2. - Point-in-time restore is not supported when a private endpoint is enabled on the storage account.
+- Point-in-time restore is not supported when the storage account's **AllowedCopyScope** property is set to restrict copy scope to the same Azure AD tenant or virtual network. For more information, see [About Permitted scope for copy operations (preview)](../common/security-restrict-copy-operations.md?toc=/azure/storage/blobs/toc.json&tabs=portal#about-permitted-scope-for-copy-operations-preview).
> [!IMPORTANT] > If you restore block blobs to a point that is earlier than September 22, 2020, preview limitations for point-in-time restore will be in effect. Microsoft recommends that you choose a restore point that is equal to or later than September 22, 2020 to take advantage of the generally available point-in-time restore feature.
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
Only Blob storage data operations support the `--auth-mode` parameter. Managemen
To begin, sign in to your Azure account with the [az login](/cli/azure/reference-index#az-login) command. ```azurecli
-az login \
- --name <resource-group> \
- --location <location>
+az login
``` ## Create a resource group
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
Previously updated : 06/07/2021 Last updated : 01/25/2023
This article shows how to enable or disable blob versioning for the storage acco
## Enable blob versioning
+You can enable blob versioning with the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manager template.
+ # [Azure portal](#tab/portal) To enable blob versioning for a storage account in the Azure portal:
To enable blob versioning for a storage account in the Azure portal:
1. Under **Blob service**, choose **Data protection**. 1. In the **Versioning** section, select **Enabled**.
+ :::image type="content" source="media/versioning-enable/portal-enable-versioning.png" alt-text="Screenshot showing how to enable blob versioning in Azure portal":::
# [PowerShell](#tab/powershell)
For more information about deploying resources with templates in the Azure porta
+## List blob versions
+
+To display a blob's versions, use the Azure portal, PowerShell, or Azure CLI. You can also list a blob's versions using one of the Blob Storage SDKs.
+
+# [Azure portal](#tab/portal)
+
+To list a blob's versions in the Azure portal:
+
+1. Navigate to your storage account in the portal, then navigate to the container that contains your blob.
+1. Select the blob for which you want to list versions.
+1. Select the **Versions** tab to display the blob's versions.
+
+ :::image type="content" source="media/versioning-enable/portal-list-blob-versions.png" alt-text="Screenshot showing how to list blob versions in the Azure portal":::
+
+# [PowerShell](#tab/powershell)
+
+To list a blob's versions with PowerShell, call the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) command with the `-IncludeVersion` parameter:
+
+```azurepowershell
+$account = Get-AzStorageAccount -ResourceGroupName <resource-group> -Name <storage-account>
+$ctx = $account.Context
+$container = "<container-name>"
+
+$blobs = Get-AzStorageBlob -Container $container -Prefix "ab" -IncludeVersion -Context $ctx
+
+foreach($blob in $blobs)
+{
+ Write-Host $blob.Name
+ Write-Host $blob.VersionId
+ Write-Host $blob.IsLatestVersion
+}
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To list a blob's versions with Azure CLI, call the [az storage blob list](/cli/azure/storage/blob#az-storage-blob-list) command with the `--include v` parameter:
+
+```azurecli
+storageAccount="<storage-account>"
+containerName="<container-name>"
+
+az storage blob list \
+ --container-name $containerName \
+ --prefix "ab" \
+ --query "[[].name, [].versionId]" \
+ --account-name $storageAccount \
+ --include v \
+ --auth-mode login \
+ --output tsv
+```
+
+# [Template](#tab/template)
+
+N/A
+++ ## Modify a blob to trigger a new version The following code example shows how to trigger the creation of a new version with the Azure Storage client library for .NET, version [12.5.1](https://www.nuget.org/packages/Azure.Storage.Blobs/12.5.1) or later. Before running this example, make sure you have enabled versioning for your storage account.
The example creates a block blob, and then updates the blob's metadata. Updating
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_UpdateVersionedBlobMetadata":::
-## List blob versions
+## List blob versions with .NET
To list blob versions or snapshots with the .NET v12 client library, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** field.
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Previously updated : 01/20/2023 Last updated : 01/25/2023
After versioning is disabled, modifying the current version creates a blob that
You can read or delete versions using the version ID after versioning is disabled. You can also list a blob's versions after versioning is disabled.
+Object replication relies on blob versioning. Before you can disable blob versioning, you must delete any object replication policies on the account. For more information about object replication, see [Object replication for block blobs](object-replication-overview.md).
+ The following diagram shows how modifying a blob after versioning is disabled creates a blob that is not versioned. Any existing versions associated with the blob persist. :::image type="content" source="media/versioning-overview/modify-base-blob-versioning-disabled.png" alt-text="Diagram showing that modification of a current version after versioning is disabled creates a blob that is not a version.":::
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Previously updated : 11/17/2022 Last updated : 01/25/2023
If the deleted storage account used customer-managed keys with Azure Key Vault a
To recover a deleted storage account from the Azure portal, follow these steps:
-1. Navigate to the list of your storage accounts in the Azure portal.
+1. Navigate to the [list of your storage accounts](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts) in the Azure portal.
1. Select the **Restore** button to open the **Restore deleted account** pane.+
+ :::image type="content" source="media/storage-account-recover/restore-button-portal.png" alt-text="Screenshot showing the Restore button in the Azure portal.":::
+ 1. Select the subscription for the account that you want to recover from the **Subscription** drop-down. 1. From the dropdown, select the account to recover, as shown in the following image. If the storage account that you want to recover is not in the dropdown, then it cannot be recovered.
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Previously updated : 01/23/2023 Last updated : 01/24/2023
To learn how to view your account access keys and copy a connection string, see
## Store a connection string
-Your application needs to access the connection string at runtime to authorize requests made to Azure Storage. You have several options for storing your connection string:
+Your application needs to access the connection string at runtime to authorize requests made to Azure Storage. You have several options for storing your account access keys or connection string:
+- You can store your account keys securely in Azure Key Vault. For more information, see [About Azure Key Vault managed storage account keys](../../key-vault/secrets/about-managed-storage-account-keys.md).
- You can store your connection string in an environment variable.-- An application running on the desktop or on a device can store the connection string in an **app.config** or **web.config** file. Add the connection string to the **AppSettings** section in these files.-- An application running in an Azure cloud service can store the connection string in the [Azure service configuration schema (.cscfg) file](/previous-versions/azure/reference/ee758710(v=azure.100)). Add the connection string to the **ConfigurationSettings** section of the service configuration file.
+- An application can store the connection string in an **app.config** or **web.config** file. Add the connection string to the **AppSettings** section in these files.
-Storing your connection string in a configuration file makes it easy to update the connection string to switch between the [Azurite storage emulator](../common/storage-use-azurite.md) and an Azure storage account in the cloud. You only need to edit the connection string to point to your target environment.
-
-You can use the [Microsoft Azure Configuration Manager](https://www.nuget.org/packages/Microsoft.Azure.ConfigurationManager/) to access your connection string at runtime regardless of where your application is running.
+> [!WARNING]
+> Storing your account access keys or connection string in clear text presents a security risk and is not recommended. Store your account keys in an encrypted format, or migrate your applications to use Azure AD authorization for access to your storage account.
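As an illustrative sketch of the environment-variable option above (the value shown is a placeholder), the Azure CLI and several Storage tools read `AZURE_STORAGE_CONNECTION_STRING`:

```azurecli
# Set the connection string for the current shell session only
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net"

# Data-plane commands can now omit --connection-string
az storage container list
```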
## Configure a connection string for Azurite
AccountKey=<account-key>;
EndpointSuffix=core.chinacloudapi.cn; ```
-## Parsing a connection string
+## Authorizing access with Shared Key
+
+To learn how to authorize access to Azure Storage with the account key or with a connection string, see one of the following articles:
+- [Authorize access and connect to Blob Storage with .NET](../blobs/storage-blob-dotnet-get-started.md?tabs=account-key#authorize-access-and-connect-to-blob-storage)
+- [Authorize access and connect to Blob Storage with Java](../blobs/storage-blob-java-get-started.md?tabs=account-key#authorize-access-and-connect-to-blob-storage)
## Next steps -- [Use the Azurite emulator for local Azure Storage development](storage-use-azurite.md)
+- [Use the Azure Identity library to get an access token for authorization](identity-library-acquire-token.md)
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
+- [Use the Azurite emulator for local Azure Storage development](storage-use-azurite.md)
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Previously updated : 10/25/2022 Last updated : 01/25/2023
storage Table Storage Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-quickstart-portal.md
Title: Quickstart - Create an Azure Storage table in the Azure portal
-description: In this quickstart, use the Table service in the Azure portal to create an Azure Storage table. Also see how you can create an Azure storage account.
+ Title: Create a table in the Azure portal
+
+description: Learn how to use the Azure portal to create a new table in Azure Table storage.
+ Previously updated : 12/02/2019 Last updated : 01/25/2023 -+
-# Quickstart: Create an Azure Storage table in the Azure portal
+
+# Quickstart: Create a table in the Azure portal
This quickstart shows how to create tables and entities in the web-based Azure portal. This quickstart also shows you how to create an Azure storage account.
To complete this quickstart, first create an Azure storage account in the [Azure
## Add a table
-You can now use Table service in the Azure portal to create a table.
+To create a table in the Azure portal:
+
+1. Navigate to your storage account in the Azure portal.
+1. Select **Storage Browser** in the left-hand navigation panel.
+1. In the Storage Browser tree, select **Tables**.
+1. Select the **Add table** button to add a new table.
+1. In the **Add table** dialog, provide a name for the new table.
+
+ :::image type="content" source="media/table-storage-quickstart-portal/storage-browser-table-create.png" alt-text="Screenshot showing how to create a table in Storage Browser in the Azure portal.":::
+
+1. Select **Ok** to create the new table.
+
+## Add an entity to the table
-1. Click Overview > Tables.
+To add an entity to your table from the Azure portal:
- ![On vmamcgestorage, a Storage Account, the Overview tab is highlighted. On the Overview pane, under Services, Tables is highlighted.](media/table-storage-quickstart-portal/table-storage-quickstart-01.png)
+1. In the Storage Browser in the Azure portal, select the table you created previously.
+1. Select the **Add entity** button to add a new entity.
-2. Click **+ Table**.
+ :::image type="content" source="media/table-storage-quickstart-portal/storage-browser-table-add-entity.png" alt-text="Screenshot showing how to add a new entity to a table in Storage Browser in the Azure portal.":::
- ![On Table service for vmamcgestorage, the + Table option is highlighted.](media/table-storage-quickstart-portal/table-storage-quickstart-02.png)
+1. In the **Add entity** dialog, provide a partition key and a row key, then add any additional properties for data that you want to write to the entity.
-3. Type a name for your table in the **Table name** box, then click **OK**.
+ :::image type="content" source="media/table-storage-quickstart-portal/storage-browser-table-add-properties.png" alt-text="Screenshot showing how to add properties to an entity in Storage Browser in the Azure portal.":::
- ![On the Add Table tab of Table service, My Table is entered into Table name and is highlighted. The OK button is selected and highlighted.](media/table-storage-quickstart-portal/table-storage-quickstart-03.png)
+For more information on working with entities and properties, see [Understanding the Table service data model](/rest/api/storageservices/understanding-the-table-service-data-model).
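If you prefer the command line, here's a hedged Azure CLI equivalent of these portal steps; the table name, entity keys, and property are illustrative, and the commands assume you've already configured credentials (for example, via the `AZURE_STORAGE_CONNECTION_STRING` environment variable or an account key):

```azurecli
# Create a table in the storage account
az storage table create \
    --name mytable \
    --account-name <storage-account>

# Insert an entity with a partition key, a row key, and one custom property
az storage entity insert \
    --table-name mytable \
    --account-name <storage-account> \
    --entity PartitionKey=pk1 RowKey=rk1 Product=Widget
```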
## Next steps
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
Title: Set up Azure Virtual Desktop for Azure Stack HCI (preview) - Azure
description: How to set up Azure Virtual Desktop for Azure Stack HCI (preview). Previously updated : 12/16/2022 Last updated : 1/24/2023
To set up Azure Virtual Desktop for Azure Stack HCI:
8. Go to [the web client](./user-documentation/connect-web.md) and grant your users access to the new deployment.
+## Windows OS activation
+
+Windows VMs must be licensed and activated before you can use them on Azure Stack HCI.
+
+For activating your multi-session OS VMs (Windows 10, Windows 11, or later), enable Azure Benefits on the VM once it is created. Make sure that Azure Benefits are also enabled on the host computer. For more information, see [Azure Benefits on Azure Stack HCI](/azure-stack/hci/manage/azure-benefits).
+
+> [!NOTE]
+> You must manually enable access for each VM that requires Azure Benefits.
+
+For all other OS images (such as Windows Server or single-session OS), Azure Benefits is not required. Continue to use the existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
+ ## Optional configurations Now that you've set up Azure Virtual Desktop for Azure Stack HCI, here are a few extra things you can do depending on your deployment's needs.
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
description: How to set up screen capture protection for Azure Virtual Desktop. Previously updated : 01/03/2023 Last updated : 01/24/2023
Screen capture protection prevents sensitive information from being captured on the client endpoints. When you enable this feature, remote content will be automatically blocked or hidden in screenshots and screen shares. Also, the Remote Desktop client will hide content from malicious software that may be capturing the screen.
+In Windows 11, version 22H2 or later, you can enable screen capture protection on session host VMs as well as remote clients. Protection on session host VMs works just like protection for remote clients.
+ ## Prerequisites Screen capture protection is configured on the session host level and enforced on the client. Only clients that support this feature can connect to the remote session. You must connect to Azure Virtual Desktop with one of the following clients to use support screen capture protection: -- The Windows Desktop client supports screen capture protection for full desktops only.
+- The Windows Desktop client supports screen capture protection for full desktops.
- The macOS client (version 10.7.0 or later) supports screen capture protection for both RemoteApps and full desktops.
+- The Windows Desktop client supports screen capture protection for RemoteApps in VMs running Windows 11, Version 22H2 or later.
## Configure screen capture protection
To configure screen capture protection:
> You can also install administrative templates to the group policy Central Store in your Active Directory domain. > For more information, see [How to create and manage the Central Store for Group Policy Administrative Templates in Windows](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
-5. Finally, open the **"Enable screen capture protection"** policy and set it to **"Enabled"**.
+5. Open the **"Enable screen capture protection"** policy and set it to **"Enabled"**.
+6. To configure screen capture for client and server, set the **"Enable screen capture protection"** policy to **"Block Screen capture on client and server"**. By default, the policy will be set to **"Block Screen capture on client"**.
+
+ >[!NOTE]
+ >You can only use screen capture protection on session host VMs that use Windows 11, version 22H2 or later.
## Limitations and known issues
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md
Previously updated : 12/14/2022 Last updated : 1/25/2023
Azure Dedicated Host is a service that provides physical servers able to host one or more virtual machines assigned to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided instead as a directly accessible hardware resource. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into your provisioned hosts in whatever configuration best meets your needs.
+## Video Introduction
+<!-- markdownlint-disable MD034 -->
+
+> [!VIDEO https://www.youtube.com/embed/Lk9mA1WzfAQ]
+
+<!-- markdownlint-enable MD034 -->
+ ## Benefits Reserving the entire host provides several benefits beyond those of a standard shared virtual machine host:
virtual-machines Disks Copy Incremental Snapshot Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-copy-incremental-snapshot-across-regions.md
description: Learn how to copy an incremental snapshot of a managed disk to a di
Previously updated : 05/13/2022 Last updated : 01/25/2023
ms.devlang: azurecli
# Copy an incremental snapshot to a new region
-Incremental snapshots can be copied to any region. The process is managed by Azure, removing the maintenance overhead of managing the copy process by staging a storage account in the target region. Azure ensures that only changes since the last snapshot in the target region are copied to the target region to reduce the data footprint, reducing the recovery point objective. You can check the progress of the copy so you know when a target snapshot is ready to restore disks in the target region. You're only charged for the bandwidth cost of the data transfer across the region and the read transactions on the source snapshots. Don't delete your source snapshot while the target snapshot is being copied.
+There are two options for copying an incremental snapshot across regions. The first option is a managed process (recommended) that performs the copy for you. This process is handled by Azure and removes the maintenance overhead of managing the copy process by staging a storage account in the target region. Azure ensures that only changes since the last snapshot in the target region are copied to the target region to reduce the data footprint, reducing the recovery point objective. You can check the progress of a copy so you know when a target snapshot is ready to restore disks. For this managed process, you're only billed for the bandwidth cost of the data transfer across the region, and the read transactions on the source snapshot. Don't delete your source snapshot while the target snapshot is being copied.
+
+The second option is a [manual copy](#manual-copy), where you get the changes between two incremental snapshots, down to the block level, and manually copy it from one region to another. Most users should use the managed process but, if you're interested in improving the copy speed, the second option allows you to use your compute resources to make the copy faster.
This article covers copying an incremental snapshot from one region to another. See [Create an incremental snapshot for managed disks](disks-incremental-snapshots.md) for conceptual details on incremental snapshots.
This article covers copying an incremental snapshot from one region to another.
- You can copy 100 incremental snapshots in parallel at the same time per subscription per region. - If you use the REST API, you must use version 2020-12-01 or newer of the Azure Compute REST API.
-## Get started
+## Managed copy
# [Azure CLI](#tab/azure-cli)
az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [completio
# [Azure PowerShell](#tab/azure-powershell)
-You can use the Azure PowerShell module to copy an incremental snapshot. You will need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
+You can use the Azure PowerShell module to copy an incremental snapshot. You'll need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
```PowerShell Install-Module -Name Az -AllowClobber -Scope CurrentUser ```
-Once that is installed, login to your PowerShell session with `Connect-AzAccount`.
+Once that is installed, sign in to your PowerShell session with `Connect-AzAccount`.
The following script will copy an incremental snapshot from one region to another.
You can also use Azure Resource Manager templates to copy an incremental snapsho
```
+## Manual copy
+
+Incremental snapshots offer a differential capability. They enable you to get the changes between two incremental snapshots of the same managed disk, down to the block level. You can use this to reduce your data footprint when copying snapshots across regions. For example, you can download the first incremental snapshot as a base blob in another region. For the subsequent incremental snapshots, you can copy only the changes since the last snapshot to the base blob. After copying the changes, you can take snapshots on the base blob that represent your point in time backup of the disk in another region. You can restore your disk either from the base blob or from a snapshot on the base blob in another region.
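As a minimal sketch of the manual approach (resource names are placeholders, and the destination needs its own SAS or an `azcopy login`), you could export a snapshot with a temporary SAS URI and copy the data yourself:

```azurecli
# Generate a temporary SAS URI for the source snapshot
sas=$(az snapshot grant-access \
    --resource-group <resource-group> \
    --name <snapshot-name> \
    --duration-in-seconds 3600 \
    --query accessSas --output tsv)

# Copy the exported data to a base blob in the target region
azcopy copy "$sas" "https://<target-account>.blob.core.windows.net/<container>/<base-blob>.vhd"
```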
++ ## Next steps If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 01/20/2023 Last updated : 01/23/2023
Now that you know which zone to deploy to, follow the deployment steps in this a
### VMs with no redundancy options
-Ultra disks deployed in select regions must be deployed without any redundancy options, for now. However, not every disk size that supports ultra disks may be in these region. To determine which disk sizes support ultra disks, you can use either of the following code snippets. Make sure to replace the `vmSize` and `subscription` values first:
+Ultra disks deployed in select regions must be deployed without any redundancy options, for now. However, not every disk size that supports ultra disks may be in these regions. To determine which disk sizes support ultra disks, you can use either of the following code snippets. Make sure to replace the `vmSize` and `subscription` values first:
```azurecli subscription="<yourSubID>"
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 10/12/2022 Last updated : 01/25/2023 -+ ms.devlang: azurecli
ms.devlang: azurecli
# [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to create an incremental snapshot. You will need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
+You can use the Azure CLI to create an incremental snapshot. You'll need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
The following script will create an incremental snapshot of a particular disk:
yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --outp
az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true ```
+> [!IMPORTANT]
+> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+ You can identify incremental snapshots from the same disk with the `SourceResourceId` property of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. You can use `SourceResourceId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)
az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table ``` - # [Azure PowerShell](#tab/azure-powershell)
-You can use the Azure PowerShell module to create an incremental snapshot. You will need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
+You can use the Azure PowerShell module to create an incremental snapshot. You'll need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
```PowerShell Install-Module -Name Az -AllowClobber -Scope CurrentUser ```
-Once that is installed, login to your PowerShell session with `Connect-AzAccount`.
+Once that is installed, sign in to your PowerShell session with `Connect-AzAccount`.
To create an incremental snapshot with Azure PowerShell, set the configuration with [New-AzSnapShotConfig](/powershell/module/az.compute/new-azsnapshotconfig) with the `-Incremental` parameter and then pass that as a variable to [New-AzSnapshot](/powershell/module/az.compute/new-azsnapshot) through the `-Snapshot` parameter.
$snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk
New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig ```
-You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you were to delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes.
+> [!IMPORTANT]
+> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+
+You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes.
You can use `SourceResourceId` and `SourceUniqueId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
$incrementalSnapshots
# [Portal](#tab/azure-portal) [!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
+> [!IMPORTANT]
+> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+ # [Resource Manager Template](#tab/azure-resource-manager)
-You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2019-03-01** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates:
+You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2022-03-22** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates:
```json {
You can also use Azure Resource Manager templates to create an incremental snaps
"type": "Microsoft.Compute/snapshots", "name": "[concat( parameters('diskName'),'_snapshot1')]", "location": "[resourceGroup().location]",
- "apiVersion": "2019-03-01",
+ "apiVersion": "2022-03-22",
"properties": { "creationData": { "createOption": "Copy",
You can also use Azure Resource Manager templates to create an incremental snaps
] } ```
+> [!IMPORTANT]
+> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+
+## Check status of snapshots or disks
+
+Incremental snapshots of Ultra Disks (preview) can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed.
+
+You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot and you can use the [Check disk creation status](#check-disk-creation-status) section to check the status of a background copy from a snapshot to a disk.
+
+### CLI
+
+First, get a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following script to list your existing incremental snapshots of Ultra Disks:
++
+```azurecli
+# Declare variables and create snapshot list
+subscriptionId="yourSubscriptionId"
+resourceGroupName="yourResourceGroupNameHere"
+diskName="yourDiskNameHere"
+
+az account set --subscription $subscriptionId
+
+diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)
+
+az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table
+```
+
+Now that you have a list of snapshots, you can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `$sourceSnapshotName` with the name of your snapshot. The value of the property must be 100 before you can use the snapshot to restore a disk or generate a SAS URI for downloading the underlying data.
+
+```azurecli
+az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [completionPercent] -o tsv
+```
+
+### PowerShell
+
+The following script creates a list of all incremental snapshots associated with a particular disk that haven't completed their background copy. Replace `yourResourceGroupNameHere` and `yourDiskNameHere`, then run the script.
+
+```azurepowershell
+$resourceGroupName = "yourResourceGroupNameHere"
+$snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName
+$diskName = "yourDiskNameHere"
+
+$yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName
+
+$incrementalSnapshots = New-Object System.Collections.ArrayList
+
+foreach ($snapshot in $snapshots)
+{
+ if($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId)
+ {
+        # Get the snapshot again to read its current CompletionPercent value
+        $targetSnapshot = Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshot.Name
+        if($targetSnapshot.CompletionPercent -lt 100)
+        {
+            $incrementalSnapshots.Add($targetSnapshot)
+        }
+ }
+}
+
+$incrementalSnapshots
+```
+
+Now that you have a list of snapshots, you can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `yourResourceGroupNameHere` and `yourSnapshotName`, then run the script. The value of the property must be 100 before you can use the snapshot to restore a disk or generate a SAS URI for downloading the underlying data.
+
+```azurepowershell
+$resourceGroupName = "yourResourceGroupNameHere"
+$snapshotName = "yourSnapshotName"
+
+$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
+
+$targetSnapshot.CompletionPercent
+```
+
+### Check disk creation status
+
+When creating a disk from an Ultra Disk snapshot, you must wait for the background copy process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process.
+
+The following script gives you the status of an individual disk's copy process. The value of `completionPercent` must be 100 before the disk can be attached.
+
+```azurecli
+subscriptionId=yourSubscriptionID
+resourceGroupName=yourResourceGroupName
+diskName=yourDiskName
+
+az account set --subscription $subscriptionId
+
+az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o tsv
+```
+
+## Check sector size
+
+Snapshots with a 4096 logical sector size can only be used to create Ultra Disks. They can't be used to create other disk types. Snapshots of disks with 4096 logical sector size are stored as VHDX, whereas snapshots of disks with 512 logical sector size are stored as VHD. Snapshots inherit the logical sector size from the parent disk.
+
+To determine whether your Ultra Disk snapshot is a VHDX or a VHD, get the `LogicalSectorSize` property of the snapshot.
+
+The following command displays the logical sector size of a snapshot:
+
+```azurecli
+az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logicalSectorSize] -o tsv
+```
+ ## Next steps See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions.
virtual-machines Disks Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-reserved-capacity.md
Title: Optimize costs for Azure Disk Storage with reservations
description: Learn about purchasing Azure Disk Storage reservations to save costs on premium SSD managed disks. Previously updated : 06/29/2021 Last updated : 01/25/2023
We recommend the following practices when considering disk reservation purchase:
Reservation discounts are currently unavailable for the following: -- Unmanaged disks or page blobs.-- Standard SSDs or standard hard-disk drives (HDDs).-- Premium SSD SKUs smaller than P30: P1, P2, P3, P4, P6, P10, P15, and P20 SSD SKUs.-- Disks in Azure Government, Azure Germany, or Azure China regions.
+- Unmanaged disks or page blobs
+- Ultra Disks
+- Standard solid-state drives (SSDs) or standard hard-disk drives (HDDs)
+- Premium SSD SKUs smaller than P30: P1, P2, P3, P4, P6, P10, P15, and P20 SSD SKUs
+- Disks in Azure Government, Azure Germany, or Azure China regions
In rare circumstances, Azure limits the purchase of new reservations to a subset of disk SKUs because of low capacity in a region.
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Azure now offers generation 2 support for the following selected VM series:
|[Dsv3-series](dv3-dsv3-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[Dv4-series](dv4-dsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[Dsv4-series](dv4-dsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-|[Dav4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :x: |
+|[Dav4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Dasv4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[Ddv4-series](ddv4-ddsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: | |[Ddsv4-series](ddv4-ddsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Previously updated : 01/06/2023 Last updated : 01/24/2023
lrwxrwxrwx. 1 root root 13 Sep 9 21:54 lun2-part1 -> ../../../sde1
### Expand without downtime
-You now may be able to expand your managed disks without deallocating your VM.
+You may be able to expand your managed disks without deallocating your VM.
This feature has the following limitations: [!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
-To register for the feature, use the following command:
-
-```azurecli
-az feature register --namespace Microsoft.Compute --name LiveResize
-```
-
-It may take a few minutes for registration to take complete. To confirm that you've registered, use the following command:
-
-```azurecli
-az feature show --namespace Microsoft.Compute --name LiveResize
-```
- ### Expand Azure Managed Disk Make sure that you have the latest [Azure CLI](/cli/azure/install-az-cli2) installed and are signed in to an Azure account by using [az login](/cli/azure/reference-index#az-login).
This article requires an existing VM in Azure with at least one data disk attach
In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values. > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
+> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
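```azurecli
az vm deallocate --resource-group myResourceGroup --name myVM
```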
virtual-machines Tutorial Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-custom-images.md
Title: Tutorial - Create custom VM images with the Azure CLI description: In this tutorial, you learn how to use the Azure CLI to create a custom virtual machine image in Azure-+ Previously updated : 05/04/2020- Last updated : 01/25/2023+ -+ #Customer intent: As an IT administrator, I want to learn about how to create custom VM images to minimize the number of post-deployment configuration tasks.
Custom images are like marketplace images, but you create them yourself. Custom
This tutorial uses the CLI within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version. To open the Cloud Shell, select **Try it** from the top of any code block.
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
## Overview An [Azure Compute Gallery](../shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
-The Azure Compute Gallery lets you share your custom VM images with others. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with.
+The Azure Compute Gallery lets you share your custom VM images with others. Choose which images you want to share, which regions you want them to be available in, and who you want to share them with.
The Azure Compute Gallery feature has multiple resource types:
The Azure Compute Gallery feature has multiple resource types:
## Before you begin
-The steps below detail how to take an existing VM and turn it into a reusable custom image that you can use to create new VM instances.
+The following steps show how to take an existing VM and turn it into a reusable custom image that you can use to create new VM instances.
To complete the example in this tutorial, you must have an existing virtual machine. If needed, you can see the [CLI quickstart](quick-create-cli.md) to create a VM to use for this tutorial. When working through the tutorial, replace the resource names where needed.
To open the Cloud Shell, just select **Try it** from the upper right corner of a
A gallery is the primary resource used for enabling image sharing.
-Allowed characters for gallery name are uppercase or lowercase letters, digits, dots, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
+Allowed characters for gallery name are uppercase or lowercase letters, digits, dots, and periods. The gallery name can't contain dashes. Gallery names must be unique within your subscription.
Create a gallery using [az sig create](/cli/azure/sig#az-sig-create). The following example creates a resource group named *myGalleryRG* in *East US*, and a gallery named *myGallery*.
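```azurecli
az group create --name myGalleryRG --location eastus

az sig create --resource-group myGalleryRG --gallery-name myGallery
```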
Copy the ID of your VM to use later.
## Create an image definition
-Image definitions create a logical grouping for images. They are used to manage information about the image versions that are created within them.
+Image definitions create a logical grouping for images. They're used to manage information about the image versions that are created within them.
Image definition names can be made up of uppercase or lowercase letters, digits, dots, dashes, and periods.
Create an image version from the VM using [az sig image-version create](/cli/azu
Allowed characters for image version are numbers and periods. Numbers must be within the range of a 32-bit integer. Format: *MajorVersion*.*MinorVersion*.*Patch*.
-In this example, the version of our image is *1.0.0* and we are going to create 2 replicas in the *West Central US* region, 1 replica in the *South Central US* region and 1 replica in the *East US 2* region using zone-redundant storage. The replication regions must include the region the source VM is located.
+In this example, the version of our image is *1.0.0* and we're going to create two replicas in the *West Central US* region, one replica in the *South Central US* region and one replica in the *East US 2* region using zone-redundant storage. The replication regions must include the region the source VM is located.
Replace the value of `--managed-image` in this example with the ID of your VM from the previous step.
az sig image-version create \
## Create the VM
-Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --specialized parameter to indicate the image is a specialized image.
+Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the `--specialized` parameter to indicate the image is a specialized image.
Use the image definition ID for `--image` to create the VM from the latest version of the image that is available. You can also create the VM from a specific version by supplying the image version ID for `--image`.
-In this example, we are creating a VM from the latest version of the *myImageDefinition* image.
+In this example, we're creating a VM from the latest version of the *myImageDefinition* image.
```azurecli az group create --name myResourceGroup --location eastus
For more information about how to share resources using Azure RBAC, see [Add or
## Azure Image Builder
-Azure also offers a service, built on Packer, [Azure VM Image Builder](../image-builder-overview.md). Simply describe your customizations in a template, and it will handle the image creation.
+Azure also offers a service, built on Packer, [Azure VM Image Builder](../image-builder-overview.md). Describe your customizations in a template, and it will handle the image creation.
## Next steps
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
description: Overview of Azure managed disks, which handle the storage accounts
Previously updated : 01/10/2023 Last updated : 01/25/2023
This disk has a maximum capacity of 4,095 GiB, however, many operating systems a
Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term storage for applications and processes, and is intended to only store data such as page or swap files. Data on the temporary disk may be lost during a [maintenance event](./understand-vm-reboots.md) or when you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). During a successful standard reboot of the VM, data on the temporary disk will persist. For more information about VMs without temporary disks, see [Azure VM sizes with no local temporary disk](azure-vms-no-temp-disk.yml).
-On Azure Linux VMs, the temporary disk is typically /dev/sdb and on Windows VMs the temporary disk is D: by default. The temporary disk is not encrypted by server side encryption unless you enable encryption at host.
+On Azure Linux VMs, the temporary disk is typically /dev/sdb and on Windows VMs the temporary disk is D: by default. The temporary disk is not encrypted unless you enable encryption at host (for server-side encryption) or use Azure Disk Encryption with the [VolumeType parameter set to All on Windows](./windows/disk-encryption-windows.md#enable-encryption-on-a-newly-added-data-disk) or [EncryptFormatAll on Linux](./linux/disk-encryption-linux.md#use-encryptformatall-feature-for-data-disks-on-linux-vms).
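For example, a hedged sketch of enabling Azure Disk Encryption across all volumes, including the temporary disk (resource names are placeholders):

```azurecli
az vm encryption enable \
    --resource-group <resource-group> \
    --name <vm-name> \
    --disk-encryption-keyvault <key-vault-name> \
    --volume-type ALL
```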
## Managed disk snapshots
If you'd like a video going into more detail on managed disks, check out: [Bette
Learn more about the individual disk types Azure offers, which type is a good fit for your needs, and learn about their performance targets in our article on disk types. > [!div class="nextstepaction"]
-> [Select a disk type for IaaS VMs](disks-types.md)
+> [Select a disk type for IaaS VMs](disks-types.md)
virtual-machines Migration Classic Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-cli.md
Title: Migrate VMs to Resource Manager using Azure CLI description: This article walks through the platform-supported migration of resources from classic to Azure Resource Manager by using Azure CLI.-+ Previously updated : 02/06/2020- Last updated : 01/23/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
These steps show you how to use CLI commands to migrate infrastructure as a service (IaaS) resources from the classic deployment model to the Azure Resource Manager deployment model. The article requires the [Azure classic CLI](/cli/azure/install-classic-cli). Since Azure CLI only applies to Azure Resource Manager resources, it cannot be used for this migration.
virtual-machines Migration Classic Resource Manager Community Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-community-tools.md
Title: Community tools - Move classic resources to Azure Resource Manager description: This article catalogs the tools that have been provided by the community to help migrate IaaS resources from classic to the Azure Resource Manager deployment model.-+ Previously updated : 02/06/2020- Last updated : 01/25/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
This article catalogs the tools that have been provided by the community to assist with migration of IaaS resources from classic to the Azure Resource Manager deployment model.
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
Title: Platform-supported migration tool. description: Technical deep dive on platform-supported migration of resources from the classic deployment model to Azure Resource Manager.-+ Previously updated : 12/17/2020- Last updated : 01/25/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](./classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](./classic-vm-deprecation.md#how-does-this-affect-me).
Let's take a deep dive into migrating from the Azure classic deployment model to the Azure Resource Manager deployment model. We examine resources at a resource and feature level to help you understand how the Azure platform migrates resources between the two deployment models. For more information, please read the service announcement article: [Platform-supported migration of IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-overview.md).
As part of migrating your resources from the classic deployment model to the Res
* [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md)
* [Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-community-tools.md)
* [Review most common migration errors](migration-classic-resource-manager-errors.md)
-* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
+* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
virtual-machines Migration Classic Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-overview.md
Title: Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager description: Walk through the platform-supported migration of resources from classic to Azure Resource Manager.-+ Previously updated : 10/21/2022- Last updated : 01/25/2023+ # Platform-supported migration of IaaS resources from classic to Azure Resource Manager
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
virtual-machines Migration Classic Resource Manager Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-plan.md
Title: Planning for migration from classic to Azure Resource Manager description: In this article, learn how to plan for migration of IaaS resources from classic to Azure Resource Manager. -+ Previously updated : 02/06/2020- Last updated : 01/25/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
-While Azure Resource Manager offers a lot of amazing features, it is critical to plan out your migration journey to make sure things go smoothly. Spending time on planning will ensure that you do not encounter issues while executing migration activities.
+While Azure Resource Manager offers many powerful features, it's critical to plan out your migration journey to make sure things go smoothly. Spending time on planning ensures that you don't encounter issues while executing migration activities.
> [!NOTE] > The following guidance draws heavily on contributions from the Azure Customer Advisory team and Cloud Solution Architects working with customers on migrating large environments. As such, this document will continue to be updated as new patterns of success emerge, so check back from time to time to see if there are any new recommendations.
Depending on your technical requirements, size, geographies and operational pract
4. Which scenarios are supported with the migration API? Review the [unsupported features and configurations](migration-classic-resource-manager-overview.md).
5. Will your operational teams now support applications/VMs in both Classic and Azure Resource Manager?
6. How (if at all) does Azure Resource Manager change your VM deployment, management, monitoring, and reporting processes? Do your deployment scripts need to be updated?
-7. What is the communications plan to alert stakeholders (end users, application owners, and infrastructure owners)?
+7. What is the communications plan to alert stakeholders (end users, application owners, and infrastructure owners)?
8. Depending on the complexity of the environment, should there be a maintenance period where the application is unavailable to end users and to application owners? If so, for how long?
9. What is the training plan to ensure stakeholders are knowledgeable and proficient in Azure Resource Manager?
10. What is the program management or project management plan for the migration?
Successful customers have detailed plans where the preceding questions are discu
### Patterns of success
-The following were issues discovered in many of the larger migrations. This is not an exhaustive list and you should refer to the [unsupported features and configurations](migration-classic-resource-manager-overview.md) for more detail. You may or may not encounter these technical issues but if you do solving these before attempting migration will ensure a smoother experience.
+The following issues were discovered in many of the larger migrations. This isn't an exhaustive list and you should refer to the [unsupported features and configurations](migration-classic-resource-manager-overview.md) for more detail. You may or may not encounter these technical issues, but if you do, solving them before attempting migration will ensure a smoother experience.
-- **Do a Validate/Prepare/Abort Dry Run** - This is perhaps the most important step to ensure Classic to Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare and Commit. Validate will read the state of your classic environment and return a result of all issues. However, because some issues might exist in the Azure Resource Manager stack, Validate will not catch everything. The next step in migration process, Prepare will help expose those issues. Prepare will move the metadata from Classic to Azure Resource Manager, but will not commit the move, and will not remove or change anything on the Classic side. The dry run involves preparing the migration, then aborting (**not committing**) the migration prepare. The goal of validate/prepare/abort dry run is to see all of the metadata in the Azure Resource Manager stack, examine it (*programmatically or in Portal*), and verify that everything migrates correctly, and work through technical issues. It will also give you a sense of migration duration so you can plan for downtime accordingly. A validate/prepare/abort does not cause any user downtime; therefore, it is non-disruptive to application usage.
+- **Do a Validate/Prepare/Abort Dry Run** - This is perhaps the most important step to ensure Classic to Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare, and Commit. Validate will read the state of your classic environment and return a result of all issues. However, because some issues might exist in the Azure Resource Manager stack, Validate won't catch everything. The next step in the migration process, Prepare, will help expose those issues. Prepare will move the metadata from Classic to Azure Resource Manager, but won't commit the move, and won't remove or change anything on the Classic side. The dry run involves preparing the migration, then aborting (**not committing**) the migration prepare. The goal of a validate/prepare/abort dry run is to see all of the metadata in the Azure Resource Manager stack, examine it (*programmatically or in Portal*), verify that everything migrates correctly, and work through technical issues. It will also give you a sense of migration duration so you can plan for downtime accordingly. A validate/prepare/abort doesn't cause any user downtime; therefore, it's non-disruptive to application usage.
- The items below will need to be solved before the dry run, but a dry run test will also safely flush out these preparation steps if they are missed. During enterprise migration, we've found the dry run to be a safe and invaluable way to ensure migration readiness.
- - When prepare is running, the control plane (Azure management operations) will be locked for the whole virtual network, so no changes can be made to VM metadata during validate/prepare/abort. But otherwise any application function (RD, VM usage, etc.) will be unaffected. Users of the VMs will not know that the dry run is being executed.
+ - When prepare is running, the control plane (Azure management operations) will be locked for the whole virtual network, so no changes can be made to VM metadata during validate/prepare/abort. But otherwise any application function (RD, VM usage, etc.) will be unaffected. Users of the VMs won't know that the dry run is being executed.
- **Express Route Circuits and VPN**. Currently Express Route Gateways with authorization links cannot be migrated without downtime. For the workaround, see [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md).
The following were issues discovered in many of the larger migrations. This is n
- If connectivity to a DNS server is lost during migration, all VM Extensions except BGInfo v1.\* need to first be removed from every VM before migration prepare, and subsequently re-added back to the VM after Azure Resource Manager migration. **This is only for VMs that are running.** If the VMs are stopped deallocated, VM Extensions do not need to be removed. **Note:** Many extensions like Azure diagnostics and Defender for Cloud monitoring will reinstall themselves after migration, so removing them is not a problem. - In addition, make sure Network Security Groups are not restricting outbound internet access. This can happen with some Network Security Groups configurations. Outbound internet access (and DNS) is needed for VM Extensions to be migrated to Azure Resource Manager. - There are two versions of the BGInfo extension: v1 and v2. If the VM was created using the Azure portal or PowerShell, the VM will likely have the v1 extension on it. This extension does not need to be removed and will be skipped (not migrated) by the migration API. However, if the Classic VM was created with the new Azure portal, it will likely have the JSON-based v2 version of BGInfo, which can be migrated to Azure Resource Manager provided the agent is working and has outbound internet access (and DNS).
- - **Remediation Option 1**. If you know your VMs will not have outbound internet access, a working DNS service, and working Azure agents on the VMs, then uninstall all VM extensions as part of the migration before Prepare, then reinstall the VM Extensions after migration.
+ - **Remediation Option 1**. If you know your VMs won't have outbound internet access, a working DNS service, and working Azure agents on the VMs, then uninstall all VM extensions as part of the migration before Prepare, then reinstall the VM Extensions after migration.
- **Remediation Option 2**. If VM extensions are too big of a hurdle, another option is to shut down/deallocate all VMs before migration. Migrate the deallocated VMs, then restart them on the Azure Resource Manager side. The benefit here is that VM extensions will migrate. The downside is that all public-facing Virtual IPs will be lost (this may be a non-starter), and obviously the VMs will shut down, causing a much greater impact on working applications.
> [!NOTE]
Now that you are in Azure Resource Manager, maximize the platform. Read the [ov
Things to consider: - Bundling the migration with other activities. Most customers opt for an application maintenance window. If so, you might want to use this downtime to enable other Azure Resource Manager capabilities like encryption and migration to Managed Disks.-- Revisit the technical and business reasons for Azure Resource Manager; enable the additional services available only on Azure Resource Manager that apply to your environment.
+- Revisit the technical and business reasons for Azure Resource Manager; enable the additional services available only on Azure Resource Manager that apply to your environment.
- Modernize your environment with PaaS services.
### Patterns of success
virtual-machines Migration Classic Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-ps.md
Title: Migrate to Resource Manager with PowerShell description: This article walks through the platform-supported migration of IaaS resources such as virtual machines (VMs), virtual networks, and storage accounts from classic to Azure Resource Manager by using Azure PowerShell commands-+ Previously updated : 02/06/2020- Last updated : 01/25/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs > [!IMPORTANT]
-> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on March 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
+> Today, about 90% of IaaS VMs are using [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). As of February 28, 2020, classic VMs have been deprecated and will be fully retired on September 1, 2023. [Learn more]( https://aka.ms/classicvmretirement) about this deprecation and [how it affects you](classic-vm-deprecation.md#how-does-this-affect-me).
These steps show you how to use Azure PowerShell commands to migrate infrastructure as a service (IaaS) resources from the classic deployment model to the Azure Resource Manager deployment model.
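For orientation before you start, the core flow for a virtual network looks roughly like the following sketch. The `Move-AzureVirtualNetwork` cmdlet comes from the Azure Service Management PowerShell module, and `$vnetName` is a placeholder; follow the full steps in the article rather than treating this as the complete procedure.

```powershell
# Hedged sketch of the validate/prepare/commit flow for a virtual network.
$vnetName = "myVirtualNetwork"
Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName   # report migration blockers
Move-AzureVirtualNetwork -Prepare  -VirtualNetworkName $vnetName   # stage the move; nothing is committed yet
Move-AzureVirtualNetwork -Commit   -VirtualNetworkName $vnetName   # or -Abort to roll back
```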
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Previously updated : 09/22/2022 Last updated : 01/25/2023
Azure offers trusted launch as a seamless way to improve the security of [genera
- Ev5-series, Esv5-series - Edv5-series, Edsv5-series - Easv5-series, Eadsv5-series-- Ebsv5-series, Ebdsv5-series - Eav4-series, Easv4-series - Ev4-series, Esv4-series, Esv3-series - Edv4-series, Edsv4-series
Azure offers trusted launch as a seamless way to improve the security of [genera
- NVadsA10 v5-series **OS support**:-- Redhat Enterprise Linux 8.3, 8.4, 8.5, 8.6, 9.0 LVM
+- Red Hat Enterprise Linux 8.3, 8.4, 8.5, 8.6, 9.0, 9.1 LVM
- SUSE Enterprise Linux 15 SP3 - Ubuntu Server 22.04 LTS - Ubuntu Server 20.04 LTS
Azure offers trusted launch as a seamless way to improve the security of [genera
**Regions**: - All public regions
+- All Fairfax regions
**Pricing**: No additional cost to existing VM pricing.
No additional cost to existing VM pricing.
- Ultra disk - Managed image - Nested Virtualization
+- Azure Automanage
## Secure boot
With trusted launch and VBS you can enable Windows Defender Credential Guard. Th
## Microsoft Defender for Cloud integration
-Trusted launch is integrated with Azure Defender for Cloud to ensure your VMs are properly configured. Azure Defender for Cloud will continually assess compatible VMs and issue relevant recommendations.
+Trusted launch is integrated with Microsoft Defender for Cloud to ensure your VMs are properly configured. Microsoft Defender for Cloud will continually assess compatible VMs and issue relevant recommendations.
-- **Recommendation to enable Secure Boot** - This Recommendation only applies for VMs that support trusted launch. Azure Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.-- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Azure Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Azure Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it.-- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Azure Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Azure Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.-- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation. Currently boot integrity monitoring is supported for both Windows and Linux single virtual machines and uniform scale sets.
+- **Recommendation to enable Secure Boot** - This recommendation only applies to VMs that support trusted launch. Microsoft Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.
+- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Microsoft Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Microsoft Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it.
+- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Microsoft Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Microsoft Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
+- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Microsoft Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Microsoft Defender for Cloud issues an assessment, indicating the status of remote attestation.
If your VMs are properly set up with trusted launch, Microsoft Defender for Cloud can detect and alert you of VM health problems.
Trusted launch for Azure virtual machines is monitored for advanced threats. If
Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons:
-Trusted launch for Azure virtual machines is monitored for advanced threats. If such threats are detected, an alert will be triggered. Alerts are only available in the [Standard Tier](../security-center/security-center-pricing.md) of Azure Defender for Cloud.
-Azure Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons:
+Trusted launch for Azure virtual machines is monitored for advanced threats. If such threats are detected, an alert will be triggered. Alerts are only available in the [Standard Tier](../security-center/security-center-pricing.md) of Microsoft Defender for Cloud.
+Microsoft Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons:
- The attested information, which includes a log of the Trusted Computing Base (TCB), deviates from a trusted baseline (like when Secure Boot is enabled). This can indicate that untrusted modules have been loaded and the OS may be compromised. - The attestation quote could not be verified to originate from the vTPM of the attested VM. This can indicate that malware is present and may be intercepting traffic to the TPM. - The attestation extension on the VM is not responding. This can indicate a denial-of-service attack by malware, or an OS admin.
virtual-machines Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/vm-extension-for-sap-new.md
The new VM Extension for SAP uses a managed identity that is assigned to the VM
## <a name="ba74712c-4b1f-44c2-9412-de101dbb1ccc"></a>Manually configure the Azure VM extension for SAP solutions
-If you want to use Azure Resource Manager, Terraform or other tools to deploy the VM Extension for SAP, please use the following publisher and extension type:
+If you want to use Azure Resource Manager templates, Terraform, or other tools to deploy the VM Extension for SAP, you can deploy the extension manually, that is, without using the dedicated PowerShell or Azure CLI commands.
-For Linux:
-* **Publisher**: Microsoft.AzureCAT.AzureEnhancedMonitoring
-* **Extension Type**: MonitorX64Linux
-* **Version**: 1.*
+Before deploying the VM Extension for SAP, please make sure to assign a user or system assigned managed identity to the virtual machine. For more information, read the following guides:
-For Windows:
-* **Publisher**: Microsoft.AzureCAT.AzureEnhancedMonitoring
-* **Extension Type**: MonitorX64Windows
-* **Version**: 1.*
+* [Configure managed identities for Azure resources on a VM using the Azure portal](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm)
+* [Configure managed identities for Azure resources on an Azure VM using Azure CLI](/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm)
+* [Configure managed identities for Azure resources on an Azure VM using PowerShell](/azure/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm)
+* [Configure managed identities for Azure resources on an Azure VM using templates](/azure/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm)
+* [Terraform VM Identity](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine#identity)
-If you want to disable automatic updates for the VM extension or want to deploy a spefici version of the extension, you can retrieve the available versions with Azure CLI or Azure PowerShell.
+After assigning an identity to the virtual machine, give the VM read access to either the resource group or the individual resources associated with the virtual machine (VM, network interfaces, OS disks, and data disks). It's recommended to use the built-in Reader role to grant access to these resources. You can also grant this access by adding the VM identity to an Azure Active Directory group that already has read access to the required resources. With a user assigned identity that already has the required permissions, you no longer need Owner privileges to deploy the VM Extension for SAP.
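As a hedged illustration of the Reader role assignment described above (assuming a system assigned identity and resource-group scope; the angle-bracket values are placeholders):

```bash
# Look up the VM's system assigned identity and grant it Reader on the resource group.
principalId=$(az vm show --resource-group "<rg name>" --name "<vm name>" \
    --query identity.principalId --output tsv)
az role assignment create --assignee "$principalId" --role Reader \
    --resource-group "<rg name>"
```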
+
+There are several ways to deploy the VM Extension for SAP manually. You can find a few examples in the following sections.
+
+The extension currently supports the following configuration keys. The examples below show how the msi_res_id key is used.
+
+* msi_res_id: ID of the user assigned identity the extension should use to get the required information about the VM and its resources
+* proxy: URL of the proxy the extension should use to connect to the internet, for example to retrieve information about the virtual machine and its resources.
+
+### Deploy manually with Azure PowerShell
+
+The following code contains four examples. It shows how to deploy the extension on Windows and Linux, using a system or user assigned identity. Make sure to replace the name of the resource group, the location and VM name in the example.
+
+``` powershell
+# Windows VM - user assigned identity
+Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -ExtensionType "MonitorX64Windows" -ResourceGroupName "<rg name>" -VMName "<vm name>" `
+ -Name "MonitorX64Windows" -TypeHandlerVersion "1.0" -Location "<location>" -SettingString '{"cfg":[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'
+
+# Windows VM - system assigned identity
+Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -ExtensionType "MonitorX64Windows" -ResourceGroupName "<rg name>" -VMName "<vm name>" `
+ -Name "MonitorX64Windows" -TypeHandlerVersion "1.0" -Location "<location>" -SettingString '{"cfg":[]}'
+
+# Linux VM - user assigned identity
+Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -ExtensionType "MonitorX64Linux" -ResourceGroupName "<rg name>" -VMName "<vm name>" `
+ -Name "MonitorX64Linux" -TypeHandlerVersion "1.0" -Location "<location>" -SettingString '{"cfg":[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'
+
+# Linux VM - system assigned identity
+Set-AzVMExtension -Publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" -ExtensionType "MonitorX64Linux" -ResourceGroupName "<rg name>" -VMName "<vm name>" `
+ -Name "MonitorX64Linux" -TypeHandlerVersion "1.0" -Location "<location>" -SettingString '{"cfg":[]}'
+```
+
+### Deploy manually with Azure CLI
+
+The following code contains four examples. It shows how to deploy the extension on Windows and Linux, using a system or user assigned identity. Make sure to replace the name of the resource group, the location and VM name in the example.
+
+``` bash
+# Windows VM - user assigned identity
+az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" --name "MonitorX64Windows" --resource-group "<rg name>" --vm-name "<vm name>" \
+ --extension-instance-name "MonitorX64Windows" --settings '{"cfg":[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'
+
+# Windows VM - system assigned identity
+az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" --name "MonitorX64Windows" --resource-group "<rg name>" --vm-name "<vm name>" \
+ --extension-instance-name "MonitorX64Windows" --settings '{"cfg":[]}'
+
+# Linux VM - user assigned identity
+az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" --name "MonitorX64Linux" --resource-group "<rg name>" --vm-name "<vm name>" \
+ --extension-instance-name "MonitorX64Linux" --settings '{"cfg":[{"key":"msi_res_id","value":"<user assigned resource id>"}]}'
+
+# Linux VM - system assigned identity
+az vm extension set --publisher "Microsoft.AzureCAT.AzureEnhancedMonitoring" --name "MonitorX64Linux" --resource-group "<rg name>" --vm-name "<vm name>" \
+ --extension-instance-name "MonitorX64Linux" --settings '{"cfg":[]}'
+```
+
+### Deploy manually with Terraform
+
+The following manifest contains four examples. It shows how to deploy the extension on Windows and Linux, using a system or user assigned identity. Each example uses a unique resource name, because Terraform doesn't allow duplicate resource names in one manifest; deploy only the variant that matches your VM. Make sure to replace the ID of the VM and the ID of the user assigned identity in the example.
+
+```terraform
+
+# Windows VM - user assigned identity
+
+resource "azurerm_virtual_machine_extension" "example" {
+ name = "MonitorX64Windows"
+ virtual_machine_id = "<vm id>"
+ publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
+ type = "MonitorX64Windows"
+ type_handler_version = "1.0"
+ auto_upgrade_minor_version = true
+
+ settings = <<SETTINGS
+{
+ "cfg":[
+ {
+ "key":"msi_res_id",
+ "value":"<user assigned resource id>"
+ }
+ ]
+}
+SETTINGS
+}
+
+# Windows VM - system assigned identity
+
+resource "azurerm_virtual_machine_extension" "example" {
+ name = "MonitorX64Windows"
+ virtual_machine_id = "<vm id>"
+ publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
+ type = "MonitorX64Windows"
+ type_handler_version = "1.0"
+ auto_upgrade_minor_version = true
+
+ settings = <<SETTINGS
+{
+ "cfg":[
+ ]
+}
+SETTINGS
+}
+
+# Linux VM - user assigned identity
+
+resource "azurerm_virtual_machine_extension" "example" {
+ name = "MonitorX64Linux"
+ virtual_machine_id = "<vm id>"
+ publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
+ type = "MonitorX64Linux"
+ type_handler_version = "1.0"
+ auto_upgrade_minor_version = true
+
+ settings = <<SETTINGS
+{
+ "cfg":[
+ {
+ "key":"msi_res_id",
+ "value":"<user assigned resource id>"
+ }
+ ]
+}
+SETTINGS
+}
+
+# Linux VM - system assigned identity
+
+resource "azurerm_virtual_machine_extension" "example" {
+ name = "MonitorX64Linux"
+ virtual_machine_id = "<vm id>"
+ publisher = "Microsoft.AzureCAT.AzureEnhancedMonitoring"
+ type = "MonitorX64Linux"
+ type_handler_version = "1.0"
+ auto_upgrade_minor_version = true
+
+ settings = <<SETTINGS
+{
+ "cfg":[
+ ]
+}
+SETTINGS
+}
+```
+
+### Versions of the VM Extension for SAP
+
+If you want to disable automatic updates for the VM extension or want to deploy a specific version of the extension, you can retrieve the available versions with Azure CLI or Azure PowerShell.
**Azure PowerShell** ```powershell
This check makes sure that all performance metrics that appear inside your SAP a
curl http://127.0.0.1:11812/azure4sap/metrics ``` **Expected result**: Returns an XML document that contains the monitoring information of the virtual machine, its disks and network interfaces.
- 1. Connect to the Azure Virtual Machine by using SSH.
-
-1. Check the output of the following command
-
- ```console
- curl http://127.0.0.1:11812/azure4sap/metrics
- ```
-
- **Expected result**: Returns an XML document that contains the monitoring information of the virtual machine, its disks and network interfaces.
If the preceding check was not successful, run these additional checks:
virtual-network Tutorial Protect Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-protect-nat-gateway.md
+
+ Title: 'Tutorial: Protect your NAT gateway with Azure DDoS Protection Standard'
+
+description: Learn how to create a NAT gateway in an Azure DDoS Protection Standard protected virtual network.
+++++ Last updated : 01/24/2023++
+# Tutorial: Protect your NAT gateway with Azure DDoS Protection Standard
+
+This article helps you create an Azure Virtual Network NAT gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your NAT gateway from large scale DDoS attacks.
+
+> [!IMPORTANT]
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges only apply if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing]( https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../../ddos-protection/ddos-protection-overview.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a NAT gateway
+> * Create a DDoS protection plan
+> * Create a virtual network and associate the DDoS protection plan
+> * Create a test virtual machine
+> * Test the NAT gateway
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a NAT gateway
+
+Before you deploy the NAT gateway and the other resources, you need a resource group to contain them. In the following steps, you'll create a resource group, a NAT gateway resource, and a public IP address. You can use one or more public IP address resources, public IP prefixes, or both.
+
+For information about public IP prefixes and a NAT gateway, see [Manage NAT gateway](./manage-nat-gateway.md?tabs=manage-nat-portal#add-or-remove-a-public-ip-prefix).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+3. Select **+ Create**.
+
+4. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter **myResourceGroupNAT**. </br> Select **OK**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway** |
+ | Region | Select **West Europe** |
+ | Availability Zone | Select **No Zone**. |
+ | Idle timeout (minutes) | Enter **10**. |
+
+ For information about availability zones and NAT gateway, see [NAT gateway and availability zones](./nat-availability-zones.md).
+
+5. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
+
+6. In the **Outbound IP** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP**. </br> Select **OK**. |
+
+7. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+8. Select **Create**.
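If you prefer scripting these steps, the portal flow above corresponds roughly to the following Azure CLI sketch. It reuses the tutorial's names and values; verify the flags against your installed CLI version.

```bash
# Resource group, a Standard static public IP, and the NAT gateway with a 10-minute idle timeout.
az group create --name myResourceGroupNAT --location westeurope
az network public-ip create --resource-group myResourceGroupNAT --name myPublicIP \
    --sku Standard --allocation-method Static
az network nat gateway create --resource-group myResourceGroupNAT --name myNATgateway \
    --public-ip-addresses myPublicIP --idle-timeout 10
```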
+
+## Create a DDoS protection plan
+
+1. In the search box at the top of the portal, enter **DDoS protection**. Select **DDoS protection plans** in the search results and then select **+ Create**.
+
+1. In the **Basics** tab of **Create a DDoS protection plan** page, enter or select the following information:
+
+ | Setting | Value |
+ |--|--|
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Enter **myResourceGroupNAT**. |
+ | **Instance details** | |
+ | Name | Enter **myDDoSProtectionPlan**. |
+ | Region | Select **West Europe**. |
+
+1. Select **Review + create** and then select **Create** to deploy the DDoS protection plan.
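An equivalent Azure CLI sketch for the plan (the plan is created in the resource group's location unless you pass `--location`):

```bash
# Create the DDoS protection plan in the existing resource group.
az network ddos-protection create --resource-group myResourceGroupNAT \
    --name myDDoSProtectionPlan
```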
+
+## Create a virtual network
+
+Before you can deploy a virtual machine and use your NAT gateway, you need to create the virtual network. This virtual network will contain the virtual machine created in later steps.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **myResourceGroupNAT**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(Europe) West Europe** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. Accept the default IPv4 address space of **10.1.0.0/16**.
+
+6. In the subnet section in **Subnet name**, select the **default** subnet.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **mySubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+ | **NAT GATEWAY** |
+ | NAT gateway | Select **myNATgateway**. |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. In **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+11. In **DDoS protection** select **Enable**. Select **myDDoSProtectionPlan** in DDoS protection plan.
+
+12. Select the **Review + create** tab or select the **Review + create** button.
+
+13. Select **Create**.
+
+It can take a few minutes for the deployment of the virtual network to complete. Proceed to the next steps when the deployment completes.
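A rough Azure CLI equivalent of the network portion is sketched below. The Bastion host is omitted for brevity, and the DDoS flags are assumptions worth verifying against your CLI version.

```bash
# Virtual network with the DDoS plan attached, then associate the subnet with the NAT gateway.
az network vnet create --resource-group myResourceGroupNAT --name myVNet \
    --address-prefix 10.1.0.0/16 --subnet-name mySubnet --subnet-prefixes 10.1.0.0/24 \
    --ddos-protection true --ddos-protection-plan myDDoSProtectionPlan
az network vnet subnet update --resource-group myResourceGroupNAT --vnet-name myVNet \
    --name mySubnet --nat-gateway myNATgateway
```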
+
+## Create test virtual machine
+
+In this section, you'll create a virtual machine to test the NAT gateway and verify the public IP address of the outbound connection.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** > **Azure virtual machine**.
+
+3. In the **Create a virtual machine** page in the **Basics** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroupNAT**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **(Europe) West Europe**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Username | Enter a username for the virtual machine. |
+ | Password | Enter a password. |
+ | Confirm password | Confirm password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select the **Disks** tab, or select the **Next: Disks** button at the bottom of the page.
+
+5. Leave the defaults in the **Disks** tab.
+
+6. Select the **Networking** tab, or select the **Next: Networking** button at the bottom of the page.
+
+7. In the **Networking** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet (10.1.0.0/24)**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **None**. |
+
+8. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+9. Select **Create**.
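A hedged Azure CLI sketch of the same test VM follows. The image URN is an assumption (any Windows Server 2022 image works), and passing an empty `--public-ip-address` keeps the VM without a public IP so outbound traffic flows through the NAT gateway.

```bash
# Test VM without a public IP or NSG; you'll be prompted for the admin password.
az vm create --resource-group myResourceGroupNAT --name myVM \
    --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest \
    --vnet-name myVNet --subnet mySubnet \
    --public-ip-address "" --nsg "" \
    --admin-username azureuser
```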
+
+## Test NAT gateway
+
+In this section, you'll test the NAT gateway. You'll first discover the public IP of the NAT gateway. You'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
+
+2. Select **myPublicIP**.
+
+3. Make note of the public IP address:
+
+ :::image type="content" source="./media/quickstart-create-nat-gateway-portal/find-public-ip.png" alt-text="Screenshot of discover public IP address of NAT gateway." border="true":::
+
+4. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+5. Select **myVM**.
+
+6. On the **Overview** page, select **Connect**, then **Bastion**.
+
+7. Enter the username and password entered during VM creation. Select **Connect**.
+
+8. Open **Microsoft Edge** on **myVM**.
+
+9. Enter **https://whatsmyip.com** in the address bar.
+
+10. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+
+ :::image type="content" source="./media/quickstart-create-nat-gateway-portal/my-ip.png" alt-text="Screenshot of Internet Explorer showing external outbound IP." border="true":::
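If you'd rather verify from a command prompt inside the VM (Windows Server 2022 ships `curl.exe`), any what's-my-IP service works; `ifconfig.me` below is just one example:

```bash
# Should print the NAT gateway's public IP address.
curl https://ifconfig.me
```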
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the virtual network, virtual machine, and NAT gateway with the following steps:
+
+1. From the left-hand menu, select **Resource groups**.
+
+2. Select the **myResourceGroupNAT** resource group.
+
+3. Select **Delete resource group**.
+
+4. Enter **myResourceGroupNAT** and select **Delete**.
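Equivalently, from the Azure CLI:

```bash
# Delete the resource group and everything in it.
az group delete --name myResourceGroupNAT --yes --no-wait
```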
+
+## Next steps
+
+For more information on Azure Virtual Network NAT, see:
+> [!div class="nextstepaction"]
+> [Virtual Network NAT overview](nat-overview.md)
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
The following section describes common issues encountered when you configure Rou
### Troubleshooting data path
-* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region. Inter-hub inspection for Virtual WAN hubs that are in different Azure regions is available on a limited basis. For a list of available regions, please email previewinterhub@microsoft.com.
+* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region.
* Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
* You can verify that the Routing Policies have been applied properly by checking the Effective Routes of the DefaultRouteTable. If Private Routing Policies are configured, you should see routes in the DefaultRouteTable for private traffic prefixes with next hop Azure Firewall. If Internet Traffic Routing Policies are configured, you should see a default (0.0.0.0/0) route in the DefaultRouteTable with next hop Azure Firewall.
* If there are any Site-to-site VPN gateways or Point-to-site VPN gateways created **after** the feature has been confirmed to be enabled on your deployment, you will have to reach out again to previewinterhub@microsoft.com to get the feature enabled.
vpn-gateway Tutorial Protect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-protect-vpn-gateway.md
+
+ Title: 'Tutorial: Protect your VPN gateway with Azure DDoS Protection Standard'
+
+description: Learn how to set up a VPN gateway and protect it with Azure DDoS protection
++++ Last updated : 01/25/2023+++
+# Tutorial: Protect your VPN gateway with Azure DDoS Protection Standard
+
+This article helps you create an Azure VPN Gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your VPN gateway from large scale DDoS attacks.
+
+> [!IMPORTANT]
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges only apply if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing]( https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a DDoS protection plan
+> * Create a virtual network
+> * Enable DDoS protection on the virtual network
+> * Create a VPN gateway
+> * View the gateway public IP address
+> * Resize a VPN gateway (resize SKU)
+> * Reset a VPN gateway
+
+The following diagram shows the virtual network and the VPN gateway created as part of this tutorial.
++
+## Prerequisites
+
+An Azure account with an active subscription. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+## <a name="CreatVNet"></a>Create a virtual network
+
+Create a VNet using the following values:
+
+* **Resource group:** TestRG1
+* **Name:** VNet1
+* **Region:** (US) East US
+* **IPv4 address space:** 10.1.0.0/16
+* **Subnet name:** FrontEnd
+* **Subnet address space:** 10.1.0.0/24
++
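An equivalent Azure CLI sketch using the values above:

```bash
# Resource group and virtual network with the FrontEnd subnet.
az group create --name TestRG1 --location eastus
az network vnet create --resource-group TestRG1 --name VNet1 \
    --address-prefix 10.1.0.0/16 --subnet-name FrontEnd --subnet-prefixes 10.1.0.0/24
```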
+## Create a DDoS protection plan
+
+1. In the search box at the top of the portal, enter **DDoS protection**. Select **DDoS protection plans** in the search results and then select **+ Create**.
+
+1. In the **Basics** tab of **Create a DDoS protection plan** page, enter or select the following information:
+
+ | Setting | Value |
+ |--|--|
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **TestRG1**. |
+ | **Instance details** | |
+ | Name | Enter **myDDoSProtectionPlan**. |
+ | Region | Select **East US**. |
+
+1. Select **Review + create** and then select **Create** to deploy the DDoS protection plan.
+
+## Enable DDoS protection
+
+Azure DDoS Protection Standard is enabled on the virtual network where the resources you want to protect reside.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **VNet1**.
+
+3. Select **DDoS protection** in **Settings**.
+
+4. Select **Enable**.
+
+5. In the pull-down box in DDoS protection plan, select **myDDoSProtectionPlan**.
+
+6. Select **Save**.
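A rough Azure CLI equivalent of creating and associating the plan (the `--ddos-protection` flags are assumptions worth verifying against your CLI version):

```bash
# Create the plan, then enable it on the existing virtual network.
az network ddos-protection create --resource-group TestRG1 --name myDDoSProtectionPlan
az network vnet update --resource-group TestRG1 --name VNet1 \
    --ddos-protection true --ddos-protection-plan myDDoSProtectionPlan
```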
+
+## <a name="VNetGateway"></a>Create a VPN gateway
+
+In this step, you create the virtual network gateway (VPN gateway) for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
+
+Create a virtual network gateway using the following values:
+
+* **Name:** VNet1GW
+* **Region:** East US
+* **Gateway type:** VPN
+* **VPN type:** Route-based
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
+* **Virtual network:** VNet1
+* **Gateway subnet address range:** 10.1.255.0/27
+* **Public IP address:** Create new
+* **Public IP address name:** VNet1GWpip
+++
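For orientation, a hedged Azure CLI sketch of the same deployment follows; flag names reflect current `az network vnet-gateway create` behavior, and `--no-wait` returns immediately while the gateway builds in the background.

```bash
# Gateway subnet, gateway public IP, and the route-based VpnGw2 Generation2 gateway.
az network vnet subnet create --resource-group TestRG1 --vnet-name VNet1 \
    --name GatewaySubnet --address-prefixes 10.1.255.0/27
az network public-ip create --resource-group TestRG1 --name VNet1GWpip --sku Standard
az network vnet-gateway create --resource-group TestRG1 --name VNet1GW \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2 \
    --vpn-gateway-generation Generation2 --vnet VNet1 \
    --public-ip-address VNet1GWpip --no-wait
```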
+A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
++
+## <a name="view"></a>View the public IP address
+
+You can view the gateway public IP address on the **Overview** page for your gateway.
++
+To see additional information about the public IP address object, select the name/IP address link next to **Public IP address**.
+
+## <a name="resize"></a>Resize a gateway SKU
+
+There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we'll resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
++
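A hedged CLI sketch of a resize within the same SKU family (confirm that your CLI version supports `--sku` on `az network vnet-gateway update` before relying on it):

```bash
# Resize the gateway SKU, for example from VpnGw2 to VpnGw3.
az network vnet-gateway update --resource-group TestRG1 --name VNet1GW --sku VpnGw3
```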
+## <a name="reset"></a>Reset a gateway
++
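If you prefer the Azure CLI, the equivalent reset request is:

```bash
# Request a reboot of the gateway instances.
az network vnet-gateway reset --resource-group TestRG1 --name VNet1GW
```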
+## Clean up resources
+
+If you're not going to continue to use this application or go to the next tutorial, delete
+these resources using the following steps:
+
+1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results.
+
+1. Select **Delete resource group**.
+
+1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+
+## Next steps
+
+Once you have a VPN gateway, you can configure connections. The following articles will help you create a few of the most common configurations:
+
+> [!div class="nextstepaction"]
+> [Site-to-Site VPN connections](./tutorial-site-to-site-portal.md)
+
+> [!div class="nextstepaction"]
+> [Point-to-Site VPN connections](vpn-gateway-howto-point-to-site-resource-manager-portal.md)